We are currently seeking a Developer / Engineer to join our team in Irving, Texas (US-TX), United States (US).
Job Responsibilities:
Collaborate daily with the client's technology and business staff
Code, test, debug, implement, and document complex global applications
Negotiate features and associated priorities, and help the team and its customers reach consensus
Develop and/or lead the development of prototypes
Identify problem causality, business impact, and root causes
Devise precise solutions for problems related to object identity and error handling
Requirements:
7+ years of experience building data pipelines using Python, PySpark, and Django
Hands-on experience with MLOps
7+ years of hands-on experience with Python and related packages (e.g., NumPy, pandas) to load and scrape data
7+ years of hands-on experience with at least one tool in the Hadoop ecosystem (HDFS, AWS Glue, MapReduce, YARN, Hive, Pig, Impala, Spark, Kafka)
7+ years of experience with relational and non-relational databases, and familiarity with data modeling concepts
7+ years of experience working as part of a larger Scrum team, with an understanding of related Scrum ceremonies
7+ years of working knowledge of Unix/Linux
Nice to have:
Knowledge of cloud platforms (e.g., AWS, Azure, GCP)
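To illustrate the kind of pandas-based data-pipeline work the requirements above describe, here is a minimal sketch of a load-clean-aggregate step. All column names, the `build_pipeline` function, and the sample data are hypothetical, chosen only for illustration.

```python
import pandas as pd

def build_pipeline(records):
    """Load raw records into a DataFrame, drop incomplete rows,
    and aggregate totals per category. Illustrative only."""
    df = pd.DataFrame(records, columns=["category", "amount"])
    df = df.dropna(subset=["amount"])          # basic cleaning step
    df["amount"] = df["amount"].astype(float)  # normalize types
    return df.groupby("category", as_index=False)["amount"].sum()

raw = [
    ("a", 1.0),
    ("a", 2.5),
    ("b", None),  # incomplete row, dropped during cleaning
    ("b", 4.0),
]
result = build_pipeline(raw)
```

A production pipeline would typically run the same shape of logic at scale (e.g., with PySpark DataFrames instead of pandas), but the load/clean/aggregate structure is the same.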