Principal Software Engineer - Machine Learning in Atlanta, GA (Day 1 onsite)

High-Level Responsibilities:
This role sits within Data, which specializes in delivering hyper-personalized scoring and recommendations based on hundreds of customer attributes and interactions. By leveraging these insights, we can offer relevant and timely information to customers across both digital and agent-assisted channels, significantly enhancing the customer experience.

A Machine Learning (ML) Engineer plays a crucial role in designing, implementing, and maintaining machine learning models and systems, bridging the gap between data science and software engineering and ensuring that models are scalable, efficient, and integrated into production environments. This is a senior-level role: the engineer will develop high-performance, distributed machine learning models using a variety of tools and technologies, including Python, SQL, Databricks, Snowflake, Palantir Foundry, Docker, and Kubernetes, all within a distributed cloud environment.
Job Responsibilities:
Model Development and Training: Algorithm Selection, Feature Engineering, Model Training
Model Evaluation and Tuning: Model Evaluation, Hyperparameter Tuning, Cross-Validation
Model Deployment and Integration: Model Deployment, API Development, Integration
Monitoring and Maintenance: Model Monitoring, Model Maintenance, Error Analysis
Infrastructure and Tooling: Infrastructure Management, Automation, Tooling
Collaboration and Communication: Cross-Functional Collaboration, Documentation, Stakeholder Communication
Writes code using Big Data programming languages and technologies
Completes programming and documentation, then tests and debugs applications
Analyzes, designs, programs, debugs and modifies software enhancements and/or new products used in distributed, large-scale analytics and visualization solutions
Interacts with data scientists and industry experts to understand how data needs to be converted, loaded and presented
Works in a highly agile environment
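To give a flavor of the model evaluation and tuning work listed above, here is a minimal, self-contained sketch of hyperparameter selection via k-fold cross-validation. The 1-D ridge model, the toy data, and all function names are illustrative assumptions for this sketch, not part of the actual stack; in practice this would be done with distributed tooling such as Databricks.

```python
import random

def kfold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge regression (y ~ w*x); lam shrinks the slope."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (sxx + lam)

def cv_mse(xs, ys, lam, k=5):
    """Mean squared error on held-out data, averaged over k CV folds."""
    total, count = 0.0, 0
    for fold in kfold_indices(len(xs), k):
        held = set(fold)
        train_x = [x for i, x in enumerate(xs) if i not in held]
        train_y = [y for i, y in enumerate(ys) if i not in held]
        w = fit_ridge_1d(train_x, train_y, lam)
        for i in fold:
            total += (ys[i] - w * xs[i]) ** 2
            count += 1
    return total / count

# Toy data: y = 2x plus a little noise.
rng = random.Random(1)
xs = [rng.uniform(-1, 1) for _ in range(100)]
ys = [2 * x + rng.gauss(0, 0.1) for x in xs]

# Grid search: pick the regularization strength with the lowest CV error.
best_lam = min([0.0, 0.1, 1.0, 10.0], key=lambda lam: cv_mse(xs, ys, lam))
```

The same shape of loop (split, fit, score on the held-out fold, pick the best setting) underlies the algorithm selection and hyperparameter tuning responsibilities, whatever library actually does the fitting.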
Requirements:
Bachelor's degree or higher
10+ years of experience in software development, architecture, Big Data, and SQL
SQL (MySQL/PostgreSQL)
Java
Scala
Python
NoSQL technologies (Cassandra, MongoDB, Redis)
Docker
Kubernetes
Jenkins
CI/CD
Git
Jira
Azure DevOps
Data exploration, analysis, summarization, and visualization using tools like Tableau, Excel, etc.
Experience with tools like Snowflake, Talend, and Informatica for extracting data from various sources
Expertise in Extract, Transform, Load (ETL) processes using tools like Apache NiFi, Talend, and Informatica
Knowledge of building and managing data pipelines with tools like Apache Kafka, Apache Flume, Apache Storm, Apache Flink, BI analytics tools, and Databricks
Experience with REST services, message queues (MQ/RabbitMQ), and Redis/Hazelcast
Proficiency in Python, Java, or Scala
Understanding of data warehousing concepts and platforms like Snowflake
Knowledge of the telecom domain
Cloud Technologies: Azure ML, Databricks, Snowflake, and Palantir Foundry
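The model monitoring and error analysis responsibilities above often start with drift detection on production features. Below is a minimal sketch of one standard drift metric, the Population Stability Index (PSI); the binning scheme, the toy data, and the 0.2 alert threshold are illustrative assumptions (0.2 is a common rule of thumb, not a universal standard).

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live
    sample of one numeric feature; larger values mean more drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(sample):
        counts = [0] * bins
        for v in sample:
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Baseline feature values vs. two live samples: one unchanged, one shifted.
baseline = [i / 100 for i in range(100)]            # uniform on [0, 1)
live_same = [i / 100 for i in range(100)]
live_shifted = [0.5 + i / 200 for i in range(100)]  # squeezed into [0.5, 1)

# Rule of thumb: PSI > 0.2 is often treated as significant drift
# worth an alert and an error-analysis pass.
```

In production this check would run on a schedule against each model input feature and the score distribution itself, feeding the monitoring and maintenance loop described in the responsibilities.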