Responsibilities:
Create and manage a single master record for each business entity, ensuring data consistency, accuracy, and reliability
Implement data governance processes, including data quality management, data profiling, data remediation, and automated data lineage
Create and maintain multiple robust, high-performance data processing pipelines across cloud, private data centre, and hybrid data ecosystems
Assemble large, complex data sets from a wide variety of data sources
Collaborate with Data Scientists, Machine Learning Engineers, Business Analysts, and business users to derive actionable insights and reliable forecasts for customer acquisition, operational efficiency, and other key business performance metrics
Develop, deploy, and maintain multiple microservices, REST APIs, and reporting services
Design and implement internal process improvements, including automating manual workflows, optimizing data delivery, and redesigning infrastructure for greater scalability
Design, analyze, and troubleshoot large-scale distributed systems
Support and work with cross-functional teams in a dynamic environment
Requirements:
Data Engineer with strong Hadoop / Spark / Talend experience
Experience building and operating large-scale data lakes and data warehouses
Experience with the Hadoop ecosystem and big data tools, including Spark and Kafka
Experience with Master Data Management (MDM) tools and platforms such as Informatica MDM, Talend Data Catalog, Semarchy xDM, IBM PIM & IKC, or Profisee
Familiarity with MDM processes such as golden record creation, survivorship, reconciliation, enrichment, and data quality
Experience in data governance, including data quality management, data profiling, data remediation, and automated data lineage
Experience with stream-processing systems such as Spark Streaming
Experience working with cloud services from one or more providers such as Azure, GCP, or AWS
Experience with Delta Lake and Databricks
Advanced experience with relational SQL and NoSQL databases, including Hive, HBase, and Postgres
Deep understanding of SQL and the ability to optimize data queries
Experience with object-oriented and functional scripting languages: Python, Java, Scala, etc.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets
Experience applying modern development principles (Scrum, TDD, continuous integration, and code reviews)
Proven ability to support and work with cross-functional teams in a dynamic environment
5+ years of experience
Bachelor’s Degree
What we offer:
Attractive compensation package: 14-month salary scheme plus annual bonus and additional allowances
Annual bonus tailored to individual performance and contribution
Young, open, and dynamic working environment that promotes innovation and creativity
Ongoing learning and development with regular professional training and opportunities to enhance both technical and soft skills
Exposure to cutting-edge technologies and diverse real-world enterprise projects
Vibrant company culture with regular team-building activities, sports tournaments, arts events, Family Day, and more
Full compliance with Vietnamese labor laws, plus additional internal perks such as annual company trips, special holidays, and other corporate benefits