Data Engineer 2

Uber

Location:
India, Bangalore

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

Delivery Data Solutions (DDS) is a horizontal team responsible for transforming data@Delivery into meaningful data that supports analytics, metrics, and KPIs and powers ML models for domain teams through real-time and batch processing. We drive optimal data resource utilization and data quality for the organization, and provide visibility and standardization of core business metrics powered by the canonical data sets the team owns. The team is the centre of excellence for data engineering practices across the Uber Delivery org: it creates efficient tools and processes to help people working on data, designs and maintains a holistic view of delivery data, and manages and optimises delivery data infrastructure resources.

Job Responsibility:

  • Build and maintain data pipelines and data products that power analytics, reporting and machine learning use cases across the Delivery organization
  • Develop batch and real-time data processing workflows that transform large datasets into reliable and well-structured data assets
  • Contribute to the development of core business metrics and analytical datasets used by product, data science and engineering teams
  • Work closely with product engineers, data scientists and analysts to understand data requirements and implement scalable solutions
  • Ensure data quality, reliability and timeliness across pipelines by following established data engineering best practices
  • Support performance optimizations and infrastructure improvements to improve pipeline efficiency and maintain SLA commitments
  • Participate in improving data engineering tools, processes and documentation within the team

Requirements:

  • Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience
  • Experience coding using a general-purpose programming language such as Java, Python, Go or similar
  • Experience working with data processing frameworks such as Spark, Hive or similar technologies
  • Understanding of data warehousing concepts and analytical data modeling
  • Experience writing data transformation logic, queries and scripts for data processing workflows
  • Strong problem-solving skills and ability to work collaboratively with cross-functional teams

Nice to have:

  • Master’s degree in Computer Science or a related technical field, or equivalent practical experience
  • Experience building data pipelines supporting analytics or machine learning workloads
  • Experience working with distributed data processing systems and large datasets
  • Understanding of data quality validation, monitoring and pipeline reliability practices
  • Exposure to real-time or streaming data technologies is a plus
  • Familiarity with marketplace, logistics or delivery domain datasets is a plus

Additional Information:

Job Posted:
May 15, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Data Engineer 2

Data Engineer

As a Data Engineer at Rearc, you'll contribute to the technical excellence of ou...
Location:
India, Bengaluru
Salary:
Not provided
Rearc
Expiration Date:
Until further notice
Requirements:
  • 2+ years of experience in data engineering, data architecture, or related fields
  • Solid track record of contributing to complex data engineering projects
  • Hands-on experience with ETL processes, data warehousing, and data modelling tools
  • Good understanding of data integration tools and best practices
  • Familiarity with cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery)
  • Strong analytical skills
  • Proficiency in implementing and optimizing data pipelines using modern tools and frameworks
  • Strong communication and interpersonal skills
Job Responsibility:
  • Collaborate with colleagues to understand customers' data requirements and challenges
  • Apply DataOps principles to create scalable and efficient data pipelines and architectures
  • Support data engineering projects
  • Promote knowledge sharing through technical blogs and articles

Data Engineer

Become a player in our data engineering team, grow on a personal level and help ...
Location:
Serbia, Novi Beograd
Salary:
Not provided
MDPI
Expiration Date:
Until further notice
Requirements:
  • A university degree, ideally in Computer Science or related science, technology or engineering field
  • 2+ years of relevant work experience in data engineering roles
  • Experience in data acquisition, data lakes, warehousing, modeling, and orchestration
  • Proficiency in SQL (including window functions and CTE)
  • Proficiency in RDBMS (e.g., MySQL, PostgreSQL)
  • Strong programming skills in Python (with libraries like Polars, optionally Arrow / PyArrow API)
  • First exposure to OLAP query engines (e.g., Clickhouse, DuckDB, Apache Spark)
  • Familiarity with Apache Airflow (or similar tools like Dagster or Prefect)
  • Strong teamwork and communication skills
  • Ability to work independently and manage your time effectively
Job Responsibility:
  • Assist in designing, building, and maintaining efficient data pipelines
  • Work on data modeling tasks to support the creation and maintenance of data warehouses
  • Integrate data from multiple sources, ensuring data consistency and reliability
  • Collaborate in implementing and managing data orchestration processes and tools
  • Help establish monitoring systems to maintain high standards of data quality and availability
  • Work closely with the Data Architect, Senior Data Engineers, and other members across the organization on various data infrastructure projects
  • Participate in the optimization of data processes, seeking opportunities to enhance system performance
What we offer:
  • Competitive salary and benefits package

Data Engineer

We are seeking a skilled and innovative Data Engineer to join our team in Nieuwe...
Location:
Netherlands, Nieuwegein
Salary:
3000.00 - 6000.00 EUR / Month
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • BSc or MSc degree in IT or a related field
  • Minimum of 2 years of relevant work experience in data engineering
  • Proficiency in building data pipelines using tools such as Azure Data Factory, Informatica Cloud, Synapse Pro, Spark, Python, R, Kubernetes, Snowflake, Databricks, or AWS
  • Advanced SQL knowledge and experience with relational databases
  • Hands-on experience in data modelling and data integration (both on-premise and cloud-based)
  • Strong problem-solving skills and analytical mindset
  • Knowledge of data warehousing concepts and big data technologies
  • Experience with version control systems, preferably Git
  • Excellent communication skills and ability to work collaboratively in a team environment
  • Fluency in Dutch language (required)
Job Responsibility:
  • Design, develop, and maintain scalable data pipelines and ETL/ELT processes
  • Collaborate with Information Analysts to provide technical frameworks for business requirements of medium complexity
  • Contribute to architecture discussions and identify potential technical and process bottlenecks
  • Implement data quality checks and ensure data integrity throughout the data lifecycle
  • Optimise data storage and retrieval systems for improved performance
  • Work closely with cross-functional teams to understand data needs and deliver efficient solutions
  • Stay up-to-date with emerging technologies and best practices in data engineering
  • Troubleshoot and resolve data-related issues in a timely manner
  • Document data processes, architectures, and workflows for knowledge sharing and future reference
What we offer:
  • A permanent contract and a gross monthly salary between €3,000 and €6,000 (based on 40 hours per week)
  • 8% holiday allowance
  • A generous mobility budget, including options such as an electric lease car with an NS Business Card, a lease bike, or alternative transportation that best suits your travel needs
  • 8% profit sharing on target (or a fixed OTB amount, depending on the role)
  • 27 paid vacation days
  • A flex benefits budget of €1,800 per year, plus an additional percentage of your salary. This can be used for things like purchasing extra vacation days or contributing more to your pension
  • A home office setup with a laptop, phone, and a monthly internet allowance
  • Hybrid working: from home or at the office, depending on what works best for you
  • Development opportunities through training, knowledge-sharing sessions, and inspiring (networking) events
  • Social activities with colleagues — from casual drinks to sports and content-driven outings
  • Full-time

SAP BTP Data Engineer

At LeverX, we have had the privilege of working on over 950 SAP projects, includ...
Location:
Salary:
Not provided
LeverX
Expiration Date:
Until further notice
Requirements:
  • 2+ years of experience designing and developing SAP data solutions within SAP and non-SAP enterprise landscapes
  • Strong knowledge of data modeling in Data Warehouses
  • Strong knowledge of the visualization patterns, approaches, and techniques in SAP and non-SAP landscapes
  • Understanding of Data Engineering solutions within the SAP BDC landscape, such as SAP Databricks
  • Proven experience in data transformation and integration with SAP ERP, S/4HANA, and equivalent external systems
  • Good understanding of SAP data integration techniques (SDI, SDA, APIs, ODP) and protocols (OData, REST, JDBC)
  • Bachelor’s degree in Computer Science, Information Systems, or equivalent
  • English B2+
Job Responsibility:
  • Design, develop, and deploy enterprise data solutions on SAP BTP, integrating SAP and non-SAP systems
  • Analyze and resolve complex data and integration challenges, ensuring reliable and scalable solutions
  • Collaborate with data architects, functional analysts, and business stakeholders to translate requirements into data models, dashboards, and analytics
  • Lead small project teams (2–3 members) and contribute to cross-regional collaboration for consistent delivery
  • Facilitate client enablement through workshops, webinars, and hands-on sessions
  • Continuously grow expertise by staying current with SAP data and analytics technologies (e.g., SAP Business Data Cloud, AI/ML) and pursuing relevant certifications
What we offer:
  • 89% of projects use the newest SAP technologies and frameworks
  • Expert communities and internal courses
  • Valuable perks to support your growth and well-being
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay in the company for 4+ years

Software Engineer 2 / Senior Software Engineer

We are looking for an experienced Software Engineers for our Bangalore location ...
Location:
India, Bengaluru
Salary:
Not provided
Komprise, Inc.
Expiration Date:
Until further notice
Requirements:
  • Solid grasp of computer science fundamentals and especially data structures, algorithms, multi-threading
  • Ability to solve difficult problems with a simple elegant solution
  • Should have solid object-oriented programming background with impeccable design skills
  • Experience in developing management applications and performance management applications is ideal
  • Experience with object-based file systems and REST interfaces is a plus (e.g. Amazon S3, Azure, Google Cloud Service)
  • Should have a BE or higher in CS, EE, Math or related engineering or science field
  • 5+ years of experience in software deployment
  • Tech Stack: Java, Maven, Virtualisation, SaaS, GitHub, Jira, Slack, Cloud Solutions and Hypervisors
Job Responsibility:
  • Responsible for designing and developing features that power the Komprise data management platform to manage billions of files and petabytes of data
  • Responsible for designing major components and systems of our product architecture, ensuring that the Komprise data management platform is highly available and scalable
  • Responsible for writing performant code, evaluating feasibility, developing for quality and optimizing for maintainability
  • Work in an agile, customer-focused and fast-paced team with direct interaction with customers
  • Responsible for analysing customer-escalated issues and providing resolutions in a timely manner
  • Should be able to design and implement highly performant, scalable distributed systems

Data Engineer

We build simple yet innovative consumer products and developer APIs that shape h...
Location:
United States, San Francisco
Salary:
163200.00 - 223200.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • 2+ years of dedicated data engineering experience, solving complex data pipeline issues at scale
  • Experience building data models and data pipelines on top of large datasets (in the order of 500TB to petabytes)
  • Value SQL as a flexible and extensible tool and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow
Job Responsibility:
  • Understand different aspects of the Plaid product and strategy to inform golden dataset choices, design and data usage principles
  • Keep data quality and performance top of mind while designing datasets
  • Advocate for adopting industry tools and practices at the right time
  • Own core SQL and Python data pipelines that power our data lake and data warehouse
  • Deliver well-documented data with defined dataset quality, uptime, and usefulness
What we offer:
  • Medical, dental, vision, and 401(k)
  • Full-time

Data Engineer

Data sits at the heart of the company. This role is to ensure that Awin is able ...
Location:
Romania, Iași
Salary:
Not provided
Awin Global
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree or higher in Data Science, Data Engineering, Business Intelligence, Business Administration, or a related field with a focus on data engineering, analytics, or business intelligence
  • 2+ years of working experience as a Data Engineer or Data Warehouse Developer using Python or similar technologies
  • Hands-on experience with Microsoft Azure and AWS data engineering pipelines
  • Excellent knowledge of database and dimensional modeling techniques
  • Strong experience with Azure Data Factory, Azure Databricks, Python (NumPy, Pandas), ADLS, Azure Event Hub, and Azure SQL DB
  • Excellent command of SQL, T-SQL, Python, and PySpark
  • Practical experience with Agile and Scrum methodologies
  • Strong communication and analytical skills
Job Responsibility:
  • Write clean, elegant, and maintainable code following data warehousing best practices and modeling techniques (Kimball, Star Schema, etc.)
  • Troubleshoot data transformation errors by debugging and developing effective solutions
  • Clearly and confidently present findings, bugs, and proposed system improvements to stakeholders
  • Design and implement new Azure and AWS data pipelines and Databricks notebooks to integrate additional data sources
  • Add new data points to existing Azure Analysis Services (AAS) cubes or equivalent data models
  • Continuously identify and implement performance improvements
  • Ensure the quality, accuracy, and reliability of all data and reports
What we offer:
  • Flexi-Week and Work-Life Balance: We prioritise your mental health and wellbeing, offering you a flexible four-day Flexi-Week at full pay and with no reduction to your annual holiday allowance. We also offer a variety of different paid special leaves
  • Flexi-Office: We offer an international culture and flexibility through our Flexi-Office and hybrid/remote work possibilities to work across Awin regions
  • Remote Working Allowance: You will receive a monthly allowance to cover part of your running costs. In addition, we will support you in setting up your remote workspace appropriately
  • Health & Well Being: With our support and access to various initiatives and sports offers, you can devote yourself to your mental and physical well-being. In addition to the initiatives on our Awin platform, we offer a Multisport Card and Medicover or Luxmed health insurance
  • Development: We’ve built our extensive multidisciplinary training suite, Awin Academy, to cover a wide range of skills that nurture you professionally and personally, with trainings conveniently packaged together to support your overall development. You can also improve your foreign language skills by participating in our local language course
  • Appreciation: Thank and reward colleagues by sending them a voucher through our peer-to-peer program
  • Full-time

Data Engineer

Purpose of position: Data sits at the heart of the company. This role is to ensu...
Location:
Poland, Warsaw
Salary:
Not provided
Awin Global
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree or higher in Data Science, Data Engineering, Business Intelligence, Business Administration, or a related field with a focus on data engineering, analytics, or business intelligence
  • 2+ years of working experience as a Data Engineer or Data Warehouse Developer using Python or similar technologies
  • Hands-on experience with Microsoft Azure and AWS data engineering pipelines
  • Excellent knowledge of database and dimensional modeling techniques
  • Strong experience with Azure Data Factory, Azure Databricks, Python (NumPy, Pandas), ADLS, Azure Event Hub, and Azure SQL DB
  • Excellent command of SQL, T-SQL, Python, and PySpark
  • Practical experience with Agile and Scrum methodologies
  • Strong communication and analytical skills
Job Responsibility:
  • Write clean, elegant, and maintainable code following data warehousing best practices and modeling techniques (Kimball, Star Schema, etc.)
  • Troubleshoot data transformation errors by debugging and developing effective solutions
  • Clearly and confidently present findings, bugs, and proposed system improvements to stakeholders
  • Design and implement new Azure and AWS data pipelines and Databricks notebooks to integrate additional data sources
  • Add new data points to existing Azure Analysis Services (AAS) cubes or equivalent data models
  • Continuously identify and implement performance improvements
  • Ensure the quality, accuracy, and reliability of all data and reports
What we offer:
  • Flexi-Week at full pay with no reduction to annual holiday allowance
  • Various paid special leaves
  • Flexi-Office and hybrid/remote work possibilities
  • Monthly remote working allowance
  • Support for mental and physical well-being initiatives
  • Multisport Card
  • Medicover or Luxmed health insurance
  • Access to Awin Academy training suite
  • Local language courses
  • Peer-to-peer voucher program
  • Full-time