Join Amgen’s Mission of Serving Patients. In this vital role as a Python Developer, you will develop scalable, Python-based microservices and APIs following the design laid out by the technical lead. As a developer, you will follow best practices for engineering, containerization, and Kubernetes deployments. A key focus of your work will be developing solutions for data platform and data engineering teams and building the microservices that power a self-service portal for data engineers, enabling them to create the cloud infrastructure, data platform resources, and services they need. These solutions will integrate with Databricks and AWS cloud platforms to deliver secure, efficient, enterprise-scale capabilities.
Job Responsibilities:
Develop scalable, Python-based microservices and APIs following the design laid out by the technical lead.
Follow best practices for engineering, containerization, and Kubernetes deployments.
Develop solutions for data platform and data engineering teams, and build the microservices that power a self-service portal for data engineers, enabling them to create the cloud infrastructure, data platform resources, and services they need.
Integrate solutions with Databricks and AWS cloud platforms to deliver secure, efficient, and enterprise-scale capabilities.
Develop API services for managing Databricks resources, services, and features, and support data governance applications that manage the security of data assets in line with established standards.
Develop, test, and deploy enterprise-level, reusable components, frameworks, and services that enable data engineers, following agile methodologies.
Develop, test, and deploy solutions that enable platform reliability, optimization, and standardization.
Collaborate with Technical Lead, Business SMEs, and Data Engineers to develop cloud data solutions
Proactively work on challenging data integration problems by implementing efficient data solutions and frameworks for structured and unstructured data.
Automate and optimize data pipelines and frameworks for an easier, more efficient development process.
Manage the Enterprise Data Fabric/Lake on the AWS environment to ensure that service delivery is efficient and that business SLAs around uptime, performance, and capacity are met.
Follow defined guidelines, standards, strategies, security policies, and change management policies to support the Enterprise Data Fabric/Lake.
Work with technical lead, business analysts, and team members to provide technical solutions on cloud platforms (AWS, Databricks), ensuring the delivery of robust, scalable, and maintainable Data Lake and Big Data solutions.
Work within an Agile development environment and participate in its ceremonies.
Use code versioning (GitLab) and code deployment tools.
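To make the microservice work above concrete, here is a minimal, framework-agnostic sketch of the kind of self-service provisioning logic a portal backend might expose. All names here (`ProvisionRequest`, `SUPPORTED_RESOURCES`, the resource types) are hypothetical illustrations, not Amgen's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical resource catalogue -- a real portal would load this
# from configuration or a service registry.
SUPPORTED_RESOURCES = {"s3_bucket", "databricks_cluster", "rds_instance"}

@dataclass
class ProvisionRequest:
    """A data engineer's request to create a platform resource."""
    resource_type: str
    name: str
    tags: dict = field(default_factory=dict)

def validate(req: ProvisionRequest) -> list[str]:
    """Return a list of validation errors; an empty list means the request is valid."""
    errors = []
    if req.resource_type not in SUPPORTED_RESOURCES:
        errors.append(f"unsupported resource type: {req.resource_type}")
    if not req.name.isidentifier():
        errors.append("name must be a valid identifier")
    return errors

def handle(req: ProvisionRequest) -> dict:
    """Turn a valid request into a provisioning plan for the backend to execute."""
    errors = validate(req)
    if errors:
        return {"status": "rejected", "errors": errors}
    plan = {"type": req.resource_type, "name": req.name, "tags": req.tags}
    return {"status": "accepted", "plan": plan}
```

In a real service this handler would sit behind an HTTP endpoint and the accepted plan would be dispatched to infrastructure-as-code tooling; the validate/plan split keeps the core logic testable without any cloud access.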
Requirements:
Doctorate, Master's, or Bachelor's degree and 3 to 5 years of experience in Computer Science or Engineering
Expertise in Python programming
Proficiency with SQL and data modeling for scalable systems
Proficiency in Python-based microservices development and deployment.
Knowledge of microservices design patterns, distributed systems, and API lifecycle management.
Knowledge of containerization (Docker) and orchestration platforms (Kubernetes/EKS).
Familiarity with CI/CD pipelines, Git-based version control (GitLab/GitHub), and automated testing.
Hands-on experience with AWS services (EKS, EC2, S3, RDS, SQS).
Strong problem-solving skills and ability to develop for scalability, security, and resilience.
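The resilience expectation above can be illustrated with a small sketch: a generic retry-with-exponential-backoff helper of the kind often wrapped around transient AWS or Databricks API failures. The helper and its parameters are illustrative, not a prescribed implementation.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on any exception with exponential backoff.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
```

Production variants typically narrow the caught exception types (e.g., throttling errors only) and add jitter to avoid synchronized retries.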
Nice to have:
Experience developing self-service portals using front-end frameworks like React.js.
Experience developing productionized AI solutions that solve business problems.
Knowledge of building APIs and services for provisioning and managing AWS Databricks environments.
Knowledge of Databricks SDK and REST APIs for managing workspaces, clusters, jobs, users, and permissions.
Ability to build enterprise-grade, performance-optimized data pipelines in Databricks using Python and PySpark, following best practices and standards.
Exposure to building AI/ML solutions using Databricks-native features.
Experience working with SQL/NoSQL databases and vector databases for large language model (LLM) applications.
Exposure to model fine-tuning and prompt engineering practices.
Good communication skills to effectively present technical information to leadership and respond to collaborator inquiries.
AWS Certified Data Engineer
Databricks Certification
SAFe Agile Certification
Strong analytical and problem-solving skills, with the ability to troubleshoot complex data and platform issues.
Good communication skills—able to translate technical concepts into clear, business-relevant language for diverse audiences.
Collaborative and globally minded, with experience working effectively in distributed, cross-functional teams.
Self-motivated and proactive, demonstrating a high degree of ownership and initiative in driving tasks to completion.
Skilled at managing multiple priorities in fast-paced environments while maintaining attention to detail and quality.
Team-oriented with a growth mindset, contributing to shared goals and fostering a culture of continuous improvement.
Effective time and task management, with the ability to estimate, plan, and deliver work across multiple projects while ensuring consistency and quality.
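As an illustration of the Databricks REST API knowledge listed above, the sketch below lists clusters via the `GET /api/2.0/clusters/list` endpoint using only the standard library. The injectable `opener` parameter is an illustrative design choice that keeps the function testable without a live workspace; real services would more likely use the official Databricks SDK.

```python
import json
from urllib import request

def list_clusters(host, token, opener=request.urlopen):
    """Return the clusters in a Databricks workspace.

    Calls GET {host}/api/2.0/clusters/list with a bearer token.
    `opener` can be replaced with a stub in tests.
    """
    req = request.Request(
        f"{host}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {token}"},
    )
    with opener(req) as resp:
        return json.loads(resp.read()).get("clusters", [])
```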
What we offer:
Competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.