As a Mid-Level Data Engineer at LENSA, you will work across our AWS-based ecosystem to build, operate, and improve the data pipelines and infrastructure that support analytics, machine learning, and internal products. This role is ideal for engineers who are confident in their foundations, can independently own features end to end, and want to deepen their expertise in distributed systems, data modeling, and cloud architecture.
Job Responsibilities:
Develop, maintain, and improve batch and streaming ETL/ELT pipelines used across analytics and ML workflows
Build scalable data processes using Python, SQL, Spark, and AWS services (EC2, Lambda, EMR, Step Functions, Glue); a short illustrative sketch follows this list
Implement and enhance data ingestion and transformation pipelines that load and structure data in our Data Warehouse (Redshift)
Contribute to the design and development of new data models for reporting, metrics, and operational needs
Operate within our AWS environment and contribute to our infrastructure using AWS CDK
Participate in code reviews and help drive best practices for data quality, reliability, and CI/CD workflows
Collaborate with analysts, data scientists, and engineering teams to translate requirements into usable technical solutions
Produce and maintain team documentation, runbooks, and diagrams
Support the development of ML-related pipelines, including training and inference layers
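To give candidates a concrete (and purely hypothetical) flavor of this work, the sketch below shows a minimal PySpark batch ETL: read raw events from S3, clean and normalize them, and write partitioned Parquet to a staging area for loading into Redshift. The bucket, column, and application names are illustrative assumptions, not LENSA's actual setup.

    # Minimal batch ETL sketch (PySpark). All names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

    # Extract: raw JSON events landed in S3 by an upstream producer.
    raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

    # Transform: drop malformed rows, normalize the timestamp, derive a date.
    clean = (
        raw
        .filter(F.col("event_id").isNotNull())
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .withColumn("event_date", F.to_date("event_ts"))
    )

    # Load: partitioned Parquet in a staging bucket, from which a COPY
    # statement (or a Glue job) would load the data into Redshift.
    clean.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-staging-bucket/events/"
    )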
Requirements:
3+ years of hands-on experience in data engineering or a related field
Solid programming experience in Python and strong SQL proficiency (CTEs, window functions, complex transformations); a brief example follows this list
Experience in building and maintaining data pipelines or data-related backend services
Understanding of data modeling and MPP concepts (Redshift, BigQuery, Snowflake)
Hands-on experience with cloud services, preferably AWS
Familiarity with distributed data processing frameworks such as Spark or Flink
Ability to follow established workflows and deliver high-quality, production-ready solutions with minimal supervision
Clear, reliable communication skills and a collaborative mindset
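As a rough illustration of the SQL proficiency listed above, the sketch below runs a CTE plus a window function against a tiny in-memory table so it is self-contained; the table and column names are invented for the example.

    # Latest session per user: ROW_NUMBER() inside a CTE, filtered outside.
    # Table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-example").getOrCreate()

    spark.createDataFrame(
        [("u1", "s1", "2024-01-01 10:00:00"),
         ("u1", "s2", "2024-01-02 09:30:00"),
         ("u2", "s3", "2024-01-01 12:15:00")],
        ["user_id", "session_id", "started_at"],
    ).createOrReplaceTempView("sessions")

    latest = spark.sql("""
        WITH ranked AS (
            SELECT user_id, session_id, started_at,
                   ROW_NUMBER() OVER (
                       PARTITION BY user_id
                       ORDER BY started_at DESC
                   ) AS rn
            FROM sessions
        )
        SELECT user_id, session_id, started_at
        FROM ranked
        WHERE rn = 1
    """)
    latest.show()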
Nice to have:
Experience using AWS CDK or other IaC frameworks (a minimal CDK sketch follows this list)
Familiarity with metadata management, data governance, or data quality frameworks
Experience building CI/CD pipelines (GitLab CI or similar)
Previous work with Redshift, Athena, or large-scale analytical modeling
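For candidates who have not used AWS CDK before, a minimal Python stack looks roughly like the sketch below; the stack, construct, and bucket names are invented for illustration and do not reflect LENSA's infrastructure.

    # Minimal AWS CDK (v2, Python) sketch: one stack with a versioned,
    # encrypted S3 bucket. All names are hypothetical.
    import aws_cdk as cdk
    from aws_cdk import Stack, aws_s3 as s3
    from constructs import Construct

    class DataPlatformStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Raw landing zone for pipeline inputs.
            s3.Bucket(
                self,
                "RawEventsBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
            )

    app = cdk.App()
    DataPlatformStack(app, "ExampleDataPlatformStack")
    app.synth()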
What we offer:
Flexible working hours with a home office option
Medicare health insurance
Company breakfast and lunch every day in the office
Office massage
Udemy business account for continuous learning and self-development
Exciting programs and team-building events
Recreation room with darts, ping pong, foosball, Xbox, and other games
Modern and fancy office in Buda close to Széll Kálmán tér