Principal Software Engineer - Machine Learning. This Principal role is vital to the --- ML team within T.Data, which specializes in delivering hyper-personalized scoring and recommendations based on hundreds of customer attributes and interactions. By leveraging these insights, we can offer relevant and timely information to customers across both digital and agent-assisted channels, significantly enhancing the customer experience. The individual in this role will be responsible for developing high-performance, distributed machine learning models using a variety of tools and technologies, including Python, SQL, Databricks, Snowflake, Palantir Foundry, Docker, and Kubernetes, all within a distributed cloud environment. This is a senior-level role focused on high-performance, distributed modeling using Machine Learning and Data Science.
Job Responsibilities:
Model Development and Training: Select and implement appropriate machine learning algorithms
Feature Engineering: Develop and transform features from raw data
Model Training: Train machine learning models using historical data
Model Evaluation and Tuning: Evaluate model performance
Hyperparameter Tuning
Cross-Validation
Model Deployment and Integration: Deploy machine learning models into production
API Development
Integration with existing systems
Monitoring and Maintenance: Monitor performance of deployed models
Model Maintenance
Error Analysis
Infrastructure and Tooling: Set up and manage infrastructure
Automation of repetitive tasks
Utilize and maintain ML frameworks
Collaboration and Communication: Cross-Functional Collaboration
Documentation
Stakeholder Communication
Uses Big Data programming languages and technology to write code
Completes programming and documentation
Performs testing and debugging of applications
Analyzes, designs, programs, debugs and modifies software enhancements and/or new products
Interacts with data scientists and industry experts to understand data needs
Works in a highly agile environment
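As a rough illustration of the model training, cross-validation, hyperparameter tuning, and evaluation duties listed above, here is a minimal Python sketch using scikit-learn. The synthetic dataset, model choice, and parameter grid are illustrative assumptions only, not part of the role description.

```python
# Minimal sketch: train, cross-validate, tune, and evaluate a model.
# The data here is a synthetic stand-in for historical customer data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic "historical" data (assumption: real data would come from
# sources such as Databricks or Snowflake tables)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hyperparameter tuning via 5-fold cross-validation
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
    cv=5,
    scoring="roc_auc",
)
grid.fit(X_train, y_train)

# Model evaluation on held-out data
print("best params:", grid.best_params_)
print(f"test AUC: {grid.score(X_test, y_test):.3f}")
```

In practice the tuned model would then be packaged (e.g., in a Docker image) and deployed behind an API for integration with existing systems, per the deployment responsibilities above.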
Requirements:
Bachelor’s degree or higher
8 to 10 years of total experience
Expertise in Python, SQL, Databricks, Snowflake, Palantir Foundry, Docker, Kubernetes under a distributed cloud environment
Familiarity with JVM-based functional languages including Scala and Java
Familiarity with Hadoop-ecosystem query languages and libraries, including Pig, Hive, Scalding, Cascalog, and PyCascading
Familiarity with distributed computing frameworks including Apache Flink, Spark/Databricks, and IBM InfoSphere Streams