Middle Data Engineer

Vention

Location:
Georgia, Tbilisi


Contract Type:
Not provided

Salary:

Not provided

Job Description:

Vention is a global engineering partner to tech leaders and fast-growing startups, combining 20+ years of product development expertise with an AI-first approach and a reputation for reliable delivery. Our 150+ data engineers help startups and enterprises turn data into smart decisions and better user experiences, building scalable data pipelines and infrastructure with Python, Java, Scala, and R.

Job Responsibility:

  • Design and implement automated pipelines to collect data from diverse sources (APIs, RDBMS, Cloud) into a centralized Data Lake/Warehouse
  • Develop logic to transform and map heterogeneous data into a unified, consistent style and schema
  • Build, monitor, and optimize end-to-end data workflows using Apache Hop, AWS Glue, and Lambda
  • Ensure cost-effective data processing by optimizing cloud resource consumption and minimizing LLM token usage
  • Maintain and optimize complex T-SQL queries, schemas, and SSRS reports within MS SQL Server environments
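The transformation duty above ("map heterogeneous data into a unified, consistent style and schema") can be sketched in plain Python. Everything below is hypothetical and invented for illustration: the source systems ("crm_api", "legacy_db"), their field names, and the unified schema are not taken from the posting.

```python
# Hypothetical sketch of per-source schema unification; source names and
# field names are invented for illustration, not from the job posting.

# Per-source field mappings: raw field name -> unified field name.
FIELD_MAPS = {
    "crm_api":   {"customer_id": "id", "created": "created_at", "fullName": "name"},
    "legacy_db": {"CUST_ID": "id", "CREATE_TS": "created_at", "CUST_NAME": "name"},
}

UNIFIED_FIELDS = ("id", "name", "created_at", "source")

def to_unified(record: dict, source: str) -> dict:
    """Map one raw record from `source` into the unified schema."""
    mapping = FIELD_MAPS[source]
    out = {unified: record.get(raw) for raw, unified in mapping.items()}
    out["source"] = source           # provenance column for the warehouse
    out["id"] = str(out["id"])       # normalize id type across sources
    return {field: out.get(field) for field in UNIFIED_FIELDS}

print(to_unified({"customer_id": 42, "created": "2026-01-01", "fullName": "Acme"},
                 "crm_api"))
```

In a real pipeline of the kind described here, this mapping step would more likely live inside an Apache Hop transform or an AWS Glue PySpark job than in standalone Python.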

Requirements:

  • Hands-on experience with Apache Hop, including designing, orchestrating, and monitoring ETL/ELT pipelines, managing transformations and workflows, and integrating with relational and cloud data sources
  • Strong experience with Microsoft SQL Server and SSRS, including writing and optimizing complex T-SQL queries and stored procedures, designing relational schemas, and developing, deploying, and maintaining SSRS reports (tabular and paginated) for business stakeholders
  • Technical requirements: Python, SQL; Spark, PySpark, Pandas; AWS (Glue, S3, DMS, Lambda, Athena, RDS); Kubernetes, Helm; Terraform, Terraform Cloud

What we offer:
  • EDU corporate community (300+ members): tech communities, interest clubs, events, a small R&D lab, a knowledge base, and a dedicated AI track
  • Licenses for AI tools: GitHub Copilot, Cursor, and others
  • 24 working days of vacation per year
  • Expanded medical insurance
  • Corporate getaways & team building activities
  • Fitpass sport program
  • Support for significant life events
  • Access to discounts across a variety of stores, restaurants & cafes through a corporate discount program
  • Referral bonuses for bringing in new talent

Additional Information:

Job Posted:
March 02, 2026

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Middle Data Engineer

Middle Data Engineer

At LeverX, we have had the privilege of delivering over 950 projects. With 20+ y...
Location:
Salary:
Not provided
LeverX
Expiration Date:
Until further notice
Requirements:
  • 3–5 years of experience in data engineering
  • Strong SQL and solid Python for data processing
  • Hands-on experience with at least one cloud and a modern warehouse/lakehouse: Snowflake, Redshift, Databricks, or Apache Spark/Iceberg/Delta
  • Experience delivering on Data Warehouse or Lakehouse projects: star/snowflake modeling, ELT/ETL concepts
  • Familiarity with orchestration (Airflow, Prefect, or similar) and containerization fundamentals (Docker)
  • Understanding of data modeling, performance tuning, cost-aware architecture, and security/RBAC
  • English B1+
Job Responsibility:
  • Design, build, and maintain batch/streaming pipelines (ELT/ETL) from diverse sources into DWH/Lakehouse
  • Model data for analytics (star/snowflake, slowly changing dimensions, semantic/metrics layers)
  • Write production-grade SQL and Python; optimize queries, file layouts, and partitioning
  • Implement orchestration, monitoring, testing, and CI/CD for data workflows
  • Ensure data quality (validation, reconciliation, observability) and document lineage
  • Collaborate with BI/analytics to deliver trusted, performant datasets and dashboards
What we offer:
  • Projects in different domains: Healthcare, manufacturing, e-commerce, fintech, etc.
  • Projects for every taste: Startup products, enterprise solutions, research & development projects, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay in the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being

Senior Software Engineer - Trade Processing Middle Office Platform

As an experienced Staff / Senior Software Engineer, you’ll shape our flagship Mi...
Location:
United States, New York
Salary:
170,000.00 - 240,000.00 USD / Year
Clear Street
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Science or Engineering
  • 10+ years of strong proficiency in Java / Spring Boot, Spring, RDBMS, Service Oriented Architecture (SOA), microservice based server side application development
  • Strong experience with distributed systems, event-driven architecture, and tools like Kafka
  • Practical knowledge of relational databases (e.g., Postgres) and schema design
  • You have contributed to systems that deliver solutions to complex business problems that handle massive amounts of data
  • You prioritize end user experience and it shows in your API designs, functionality, and performance
  • You have a strong command over design patterns, data structures, and algorithms
  • You have strong problem-solving skills with a keen eye for performance optimization
  • You can clearly explain the nuances of system design and paradigms to engineers and stakeholders
  • Strong understanding of multi-threading, concurrency, and performance tuning
Job Responsibility:
  • Architect and build highly available, horizontally scalable mission critical applications in a modern technology stack
  • Design, build, and optimize core components responsible for processing a high volume of trade data in a low latency environment
  • Solve complex performance and scalability challenges, ensuring our systems handle large-scale financial data efficiently
  • Collaborate with product managers, and other engineers to translate financial methodologies into robust software solutions
  • Lead by example in system design discussions, architectural trade-offs, and best practices
  • Mentor team members, contributing to a strong culture of engineering excellence
What we offer:
  • Competitive compensation, benefits, and perks
  • Company equity
  • 401k matching
  • Gender neutral parental leave
  • Full medical, dental and vision insurance
  • Lunch stipends
  • Fully stocked kitchens
  • Happy hours
  • Full-time

Middle Palantir Foundry Developer

At LeverX, we have had the privilege of delivering 1,500+ projects. With 20+ yea...
Location:
Uzbekistan, Georgia
Salary:
Not provided
LeverX
Expiration Date:
Until further notice
Requirements:
  • 3+ years in data/analytics engineering or software development
  • Hands-on experience with Palantir Foundry (pipelines and/or applications)
  • Proficiency in Python and SQL
  • Confidence with Git
  • Ability to translate business requirements into working solutions
  • English B1+
Job Responsibility:
  • Build and maintain data pipelines and transformations in Foundry
  • Implement application logic, views, and access controls
  • Validate data and ensure basic documentation and support
  • Work with stakeholders to clarify requirements and iterate on features
What we offer:
  • Impactful use-case delivery on real data
  • Possibility to progress into a Team Lead role: mentoring, design facilitation, and coordination
  • Projects in different domains: Healthcare, manufacturing, e-commerce, fintech, etc.
  • Projects for every taste: Startup products, enterprise solutions, research & development projects, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just for a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay in the company for 4+ years

Middle Data Engineer

We are seeking a motivated Data Engineer to join our team. In this role, you wil...
Location:
Salary:
Not provided
N-iX
Expiration Date:
Until further notice
Requirements:
  • 2+ years of experience in batch and streaming ETL using Spark, Python, Scala, Snowflake, or Databricks for Data Engineering or Machine Learning workloads
  • 2+ years orchestrating and implementing pipelines with workflow tools like Databricks Workflows, Apache Airflow, or Luigi
  • 2+ years of experience prepping structured and unstructured data for data science models
  • 2+ years of experience with containerization and orchestration technologies (Docker; Kubernetes negotiable); experience with shell scripting in Bash/Unix shell is preferable
  • Proficiency in Oracle & SQL and data manipulation techniques
  • Experience using machine learning in data pipelines to discover, classify, and clean data
  • Implemented CI/CD with automated testing in Jenkins, Github Actions, or Gitlab CI/CD
  • Familiarity with AWS services, including but not limited to Lambda, S3, and DynamoDB
  • Demonstrated experience implementing data management life cycle, using data quality functions like standardization, transformation, rationalization, linking, and matching
Job Responsibility:
  • Developing and maintaining robust data pipelines that drive our business intelligence and analytics
What we offer:
  • Flexible working format - remote, office-based or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

Middle QA Big Data

The objective of this project is to enhance the QA processes through the impleme...
Location:
Ukraine
Salary:
Not provided
N-iX
Expiration Date:
Until further notice
Requirements:
  • 3+ years of QA experience with a strong focus on Big Data testing, particularly with hands-on experience in Data Lake environments on any cloud platform (preferably Azure)
  • Experience with Azure
  • Hands-on experience in Azure Data Factory, Azure Synapse Analytics, or similar services
  • Proficiency in SQL, capable of writing and optimizing both simple and complex queries for data validation and testing purposes
  • Experienced in PySpark, with experience in data manipulation and transformation, and a demonstrated ability to write and execute test scripts for data processing and validation (ability to understand the code and convert the logic to SQL)
  • Hands-on experience with Functional & System Integration Testing in big data environments, ensuring seamless data flow and accuracy across multiple systems
  • Knowledge and ability to design and execute test cases in a behavior-driven development environment
  • Fluency in Agile methodologies, with active participation in Scrum ceremonies and a strong understanding of Agile principles
  • Familiarity with tools like Jira, including experience with X-Ray for defect management and test case management
  • Proven experience working on high-traffic and large-scale software products, ensuring data quality, reliability, and performance under demanding conditions
Job Responsibility:
  • Design and execute data validation tests to ensure completeness in Azure Data Lake Storage (ADLS), Azure Synapse, and Databricks
  • Verify data ingestion, transformation, and loading (ETL/ELT) processes in Azure Data Factory (ADF)
  • Validate data schema, constraints, and format consistency across different storage layers
  • Conduct performance testing on data pipelines
  • Optimize query performance by working with data engineers
  • Identify, log, and track defects in JIRA
  • Collaborate with Data Engineers and Business Analysts to resolve data inconsistencies
  • Generate detailed test reports, dashboards, and documentation for stakeholders
What we offer:
  • Flexible working format - remote, office-based or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

Rust Engineer - Platform

As a Platform Backend Engineer (Rust) at Keyrock, you will drive the development...
Location:
Salary:
Not provided
Keyrock
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent experience
  • Proven experience in building and maintaining data-intensive, large-scale, high-performance trading data platforms
  • Strong expertise in Rust (or C++), Python, and TypeScript for system development and automation in the financial services industry
  • Good understanding of data engineering principles, including data modeling, ETL pipelines, and stream processing
  • Experience with financial services data workflows, including trading, middle office, and back office operations
  • Extensive experience in cloud-native architectures, with proficiency in AWS
  • Proficient in GitOps tools and methodologies for infrastructure automation and deployment
  • Strong background in DevSecFinOps, ensuring compliance, security, and cost efficiency across the development lifecycle
  • Hands-on experience with CI/CD pipelines, infrastructure as code (IaC), and monitoring tools
Job Responsibility:
  • Rust Development: Design, build, and maintain high-performance backend services and APIs using Rust, ensuring low latency and high availability for critical trading data platforms
  • Strong systems engineering fundamentals: concurrency, memory management, networking, serialization, and observability; solid understanding of performance tuning and profiling in real-world systems
  • System Integration: Create seamless integrations between live trading operations (exchanges/DeFi) and backoffice systems, automating workflows to improve operational efficiency
  • Cloud-Native Deployment: Deploy and manage services in a cloud-native environment, leveraging AWS, Kubernetes, and Terraform to scale infrastructure as code
  • DevOps & Observability: Maintain GitOps-driven workflows, ensuring robust CI/CD pipelines and implementing deep system observability (logging, metrics, tracing) for rapid incident response
  • Database Optimization: Optimize data storage and retrieval strategies (SQL/NoSQL), balancing query performance, cost efficiency, and data integrity in a high-volume financial environment
  • Security & Compliance: Engineer solutions with a "Security-First" mindset, ensuring strict adherence to compliance standards and secure handling of sensitive financial data
  • Cross-Functional Collaboration: Partner with Product Managers, Risk teams, and other engineers to translate complex business requirements into reliable technical specifications and features
  • Technical Excellence: Actively participate in code reviews, contribute to architectural discussions, and mentor fellow engineers to foster a culture of high code quality and innovation
  • Continuous Improvement: Stay updated on emerging trends in the Rust ecosystem, cloud infrastructure, and blockchain technologies to continuously refine the platform’s capabilities
What we offer:
  • A competitive salary package
  • Autonomy in your time management thanks to flexible working hours and the opportunity to work remotely
  • The freedom to create your own entrepreneurial experience by being part of a team of people in search of excellence
  • Full-time

Middle Data Engineer

Vention is a global engineering partner to tech leaders and fast-growing startup...
Location:
Georgia, Batumi
Salary:
Not provided
Vention
Expiration Date:
Until further notice
Requirements:
  • Hands-on experience with Apache Hop, including designing, orchestrating, and monitoring ETL/ELT pipelines, managing transformations and workflows, and integrating with relational and cloud data sources
  • Strong experience with Microsoft SQL Server and SSRS, including writing and optimizing complex T-SQL queries and stored procedures, designing relational schemas, and developing, deploying, and maintaining SSRS reports (tabular and paginated) for business stakeholders
  • Technical requirements: Python, SQL; Spark, PySpark, Pandas; AWS (Glue, S3, DMS, Lambda, Athena, RDS); Kubernetes, Helm; Terraform, Terraform Cloud
Job Responsibility:
  • Design and implement automated pipelines to collect data from diverse sources (APIs, RDBMS, Cloud) into a centralized Data Lake/Warehouse
  • Develop logic to transform and map heterogeneous data into a unified, consistent style and schema
  • Build, monitor, and optimize end-to-end data workflows using Apache Hop, AWS Glue, and Lambda
  • Ensure cost-effective data processing by optimizing cloud resource consumption and minimizing LLM token usage
  • Maintain and optimize complex T-SQL queries, schemas, and SSRS reports within MS SQL Server environments
What we offer:
  • EDU corporate community (300+ members): tech communities, interest clubs, events, a small R&D lab, a knowledge base, and a dedicated AI track
  • Licenses for AI tools: GitHub Copilot, Cursor, and others
  • 24 working days of vacation per year
  • Expanded medical insurance
  • Corporate getaways & team building activities
  • Fitpass sport program
  • Support for significant life events
  • Access to discounts across a variety of stores, restaurants & cafes through a corporate discount program
  • Referral bonuses for bringing in new talent
  • Full-time

Middle Data Warehouse Engineer

We are seeking a motivated Middle Data Warehouse Engineer to join our team. In t...
Location:
Salary:
Not provided
N-iX
Expiration Date:
Until further notice
Requirements:
  • At least 4 years of experience in this or a similar role
  • Intermediate proficiency with Oracle and PL/SQL
  • Experience with ETL processes and data pipeline development
  • Familiarity with shell scripting (Unix) and basic command-line debugging
  • Working knowledge of AWS CLI
  • Basic knowledge of Java
  • A solid understanding of data warehousing concepts
  • Strong problem-solving skills and the ability to learn from technical documentation and training materials
Job Responsibility:
  • Design, develop, and maintain ETL (Extract, Transform, Load) processes to move data from various sources to our data warehouse
  • Write and optimize complex queries and scripts using PL/SQL and Oracle to transform and load data
  • Automate and orchestrate data workflows using shell scripting (Unix/Korn shell)
  • Utilize AWS CLI for tasks such as managing data in S3 or interacting with other AWS services
  • Debug and troubleshoot data pipeline issues to ensure data accuracy and availability for downstream consumers
  • Collaborate with stakeholders and team members to understand data requirements and deliver reliable solutions
What we offer:
  • Flexible working format - remote, office-based or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits