Delivery Data Solutions (DDS) is a horizontal team responsible for transforming data@Delivery into meaningful data that powers analytics, metrics, ML models, and KPIs for the domain teams through real-time and batch processing. We drive optimal data resource utilization and data quality for the organization, and we provide visibility and standardization of core business metrics powered by the canonical data sets the team owns. The team is the center of excellence for data engineering practices across the Uber Delivery org: it creates efficient tools and processes to help people working with data, designs and maintains a holistic view of delivery data, and manages and optimizes delivery data infrastructure resources.
Job Responsibilities:
Build and maintain data pipelines and data products that power analytics, reporting and machine learning use cases across the Delivery organization
Develop batch and real-time data processing workflows that transform large datasets into reliable and well-structured data assets
Contribute to the development of core business metrics and analytical datasets used by product, data science and engineering teams
Work closely with product engineers, data scientists and analysts to understand data requirements and implement scalable solutions
Ensure data quality, reliability and timeliness across pipelines by following established data engineering best practices
Support performance optimizations and infrastructure improvements that increase pipeline efficiency and maintain SLA commitments
Participate in improving data engineering tools, processes and documentation within the team
Requirements:
Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience
Experience coding using a general-purpose programming language such as Java, Python, Go or similar
Experience working with data processing frameworks such as Spark, Hive or similar technologies
Understanding of data warehousing concepts and analytical data modeling
Experience writing data transformation logic, queries and scripts for data processing workflows
Strong problem-solving skills and ability to work collaboratively with cross-functional teams
Nice to have:
Master's degree in Computer Science or a related technical field, or equivalent practical experience
Experience building data pipelines supporting analytics or machine learning workloads
Experience working with distributed data processing systems and large datasets
Understanding of data quality validation, monitoring and pipeline reliability practices
Exposure to real-time or streaming data technologies
Familiarity with marketplace, logistics or delivery domain datasets