We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks. Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.
Job Responsibilities:
Monitor automated batch and streaming data pipelines in AWS
Identify, troubleshoot, and resolve data processing failures
Investigate file‑level errors, schema mismatches, and transformation issues
Perform root‑cause analysis and document resolutions
Ensure data integrity, completeness, and timeliness across environments
Escalate architectural or systemic issues to the Data Engineering team
Collaborate directly with customers to understand their file formats and data structures
Create and maintain mapping templates to align customer data to a normalized data model
Validate sample files and run tests on ingestion workflows
Configure ingestion parameters within predefined frameworks
Support customer go‑live processes and initial data processing cycles
Write SQL queries to validate data accuracy and research anomalies
Develop lightweight Python scripts for validation, transformation checks, or automation tasks (an illustrative sketch follows this list)
Improve monitoring processes, internal documentation, and operational playbooks
Work with engineering teams to strengthen platform reliability and observability
Communicate clearly with customers regarding file issues or data discrepancies
Partner with internal teams including Data Engineering, Product, and Support
Provide feedback to enhance scalability, resilience, and overall platform performance
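By way of illustration, a lightweight validation task of the kind described above might look like the following minimal sketch. It is not part of this posting's requirements; the file path, column names, and key field are hypothetical assumptions used only to show the flavor of the work.

```python
# Minimal sketch of a file-level validation check (hypothetical schema and paths).
# Assumes pandas is available; column names and the key field are illustrative only.
import sys
import pandas as pd

REQUIRED_COLUMNS = {"record_id", "event_date", "source_system"}  # hypothetical schema


def validate(path: str) -> list[str]:
    """Return a list of human-readable issues found in the file."""
    issues = []
    df = pd.read_csv(path, dtype=str)

    # Schema check: every required column must be present.
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")

    # Completeness check: required fields should not contain nulls.
    for col in REQUIRED_COLUMNS & set(df.columns):
        nulls = int(df[col].isna().sum())
        if nulls:
            issues.append(f"{col}: {nulls} null values")

    # Uniqueness check on the assumed primary key.
    if "record_id" in df.columns:
        dupes = int(df.duplicated(subset=["record_id"]).sum())
        if dupes:
            issues.append(f"record_id: {dupes} duplicate rows")

    return issues


if __name__ == "__main__":
    problems = validate(sys.argv[1])
    print("\n".join(problems) if problems else "file passed basic validation")
```

In practice, checks like these would typically be run against sample files during customer onboarding and wired into the existing ingestion framework rather than run ad hoc.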
Requirements:
6–10 years of experience in data operations, data engineering, or data support roles
Experience with ETL/ELT orchestration tools
Hands‑on experience with AWS services such as S3, Glue, Lambda, CloudWatch, Redshift, or similar
Strong SQL skills (joins, aggregations, troubleshooting data discrepancies)
Working knowledge of Python for scripting and data validation
Experience troubleshooting automated data pipelines
Familiarity with structured and semi‑structured data formats (CSV, JSON, Parquet)
Strong analytical and problem‑solving skills
Comfortable interacting with customers in technical discussions
Experience working in a healthcare or pharmaceutical environment
Nice to have:
Experience working within a data lake or data‑warehouse architecture
Familiarity with data catalogs or governance frameworks
Understanding of data normalization and schema design