
Data Engineer & MLOps Lead


Randstad

Location:
India, Bangalore

Contract Type:
Not provided


Salary:

Not provided

Job Description:

Data Platform & Architecture; MLOps & Model Lifecycle; Leadership & Delivery; Security, Compliance, & Governance; Innovation & Thought Leadership

Job Responsibility:

  • Design and evolve cloud-native data lakes / warehouses (e.g., Snowflake, Databricks, BigQuery)
  • Establish scalable batch & streaming pipelines using Spark/Flink, Kafka, Airflow/Dagster, and dbt
  • Implement robust data-quality, catalog, and governance frameworks (e.g., Great Expectations, Unity Catalog)
  • Build automated CI/CD pipelines for ML (MLflow, Kubeflow, SageMaker, Vertex AI)
  • Set up feature stores, model registries, and canary rollout processes
  • Create monitoring & alerting for drift, bias, and performance (Prometheus, Evidently, Arize)
  • Recruit, coach, and promote a high-performing team of data engineers, ML engineers, and DevOps specialists
  • Drive quarterly OKRs, roadmaps, and architectural review boards
  • Manage budgets, vendor contracts, and cloud cost optimization
  • Enforce IAM, data-encryption, and least-privilege practices
  • Ensure adherence to GDPR, PDPA, HIPAA, or other relevant regulations
  • Champion reproducibility and auditability across data and ML assets
  • Evaluate emerging paradigms like data mesh, vector databases, LLMOps, and GenAI for business fit
  • Publish best-practice playbooks and present at internal tech forums or external meet-ups
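The drift monitoring called for above is what tools like Evidently or Arize automate; as a rough, illustrative sketch (the bin count and the 0.2 alert threshold are common rules of thumb, not taken from this posting), a population stability index (PSI) check between a training baseline and live data can be written in plain Python:

```python
import math
from collections import Counter

def psi(baseline, live, bins=10):
    """Population Stability Index between two numeric samples.

    Bins are derived from the baseline's range; a small epsilon avoids
    log(0) when a bin is empty in either sample. Live values below the
    baseline minimum are ignored in this simplified sketch.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def fractions(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        total = len(xs)
        return [counts.get(i, 0) / total for i in range(bins)]

    eps = 1e-6
    b, l = fractions(baseline), fractions(live)
    return sum((li - bi) * math.log((li + eps) / (bi + eps))
               for bi, li in zip(b, l))

# Identical distributions give PSI of 0; a shifted distribution
# exceeds the common 0.2 "significant drift" alert threshold.
baseline = [x / 100 for x in range(1000)]
shifted = [x / 100 + 5 for x in range(1000)]
assert psi(baseline, baseline) < 0.01
assert psi(baseline, shifted) > 0.2
```

In practice a check like this would run on a schedule against fresh inference logs and feed an alerting system such as Prometheus, rather than being hand-rolled per model.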

Requirements:

  • 8+ years combined experience in data engineering, software engineering, or ML infrastructure, with 3+ years leading teams
  • Deep proficiency with Python/Scala/SQL and modern data processing frameworks (Spark, Flink)
  • Hands-on with Docker, Kubernetes, Terraform, CI/CD (GitHub Actions, Jenkins)
  • Proven record of shipping and operating ML models in production at scale
  • Solid grasp of distributed-system design, data modeling, and micro-service architectures
  • Excellent stakeholder management and communication skills

Nice to have:

  • Experience in GenAI or LLM pipelines, vector similarity search (FAISS, Pinecone, Weaviate)
  • Multi-cloud (AWS, GCP, Azure) certification or FinOps expertise
  • Contributions to open-source data or MLOps projects
  • Familiarity with privacy-preserving ML (federated learning, differential privacy)

Additional Information:

Job Posted:
February 17, 2026

Expiration:
March 20, 2026

Employment Type:
Full-time

Similar Jobs for Data Engineer & MLOps Lead

Senior Data & AI/ML Engineer - GCP Specialization Lead

We are on a bold mission to create the best software services offering in the wo...
Location:
United States, Menlo Park
Salary:
Not provided
techjays
Expiration Date:
Until further notice
Requirements:
  • GCP Services: BigQuery, Dataflow, Pub/Sub, Vertex AI
  • ML Engineering: End-to-end ML pipelines using Vertex AI / Kubeflow
  • Programming: Python & SQL
  • MLOps: CI/CD for ML, Model deployment & monitoring
  • Infrastructure-as-Code: Terraform
  • Data Engineering: ETL/ELT, real-time & batch pipelines
  • AI/ML Tools: TensorFlow, scikit-learn, XGBoost
  • Min Experience: 10+ Years
Job Responsibility:
  • Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage
  • Lead the development of ML pipelines, from feature engineering to model training and deployment using Vertex AI, AI Platform, and Kubeflow Pipelines
  • Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry
  • Define and implement data governance, lineage, monitoring, and quality frameworks
  • Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions
  • Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP
  • Contribute to building repeatable solution accelerators in Data & AI/ML
  • Work with the leadership team to align with Google Cloud Partner Program metrics
  • Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning
  • Organize and lead internal GCP AI/ML enablement sessions
What we offer:
  • Best in class packages
  • Paid holidays and flexible paid time away
  • Casual dress code & flexible working environment
  • Medical Insurance covering self & family up to 4 lakhs per person

Senior/Architect Data Engineer

We are seeking a highly skilled and experienced Senior/Architect Data Engineer t...
Location:
Poland, Warsaw; Poznań; Lublin; Katowice; Rzeszów
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Proven experience architecting solutions on the Databricks Lakehouse using Unity Catalog, Delta Lake, MLflow, Model Serving, Feature Store, AutoML, and Databricks Workflows
  • Expertise in real-time/low latency model serving architectures with auto-scaling, confidence-based routing, and A/B testing
  • Strong knowledge of cloud security and governance on Azure or AWS, including Azure AD/AWS IAM, encryption, audit trails, and compliance frameworks
  • Hands-on MLOps skills across experiment tracking, model registry/versioning, drift monitoring, automated retraining, and production rollout strategies
  • Proficiency in Python and Databricks native tooling, with practical integration of REST APIs/SDKs and Databricks SQL in analytics products
  • Familiarity with React dashboards and human-in-the-loop operational workflows for ML and data quality validation
  • Demonstrated ability to optimize performance, reliability, and cost for large-scale analytics/ML platforms with strong observability
  • Experience leading multi-phase implementations with clear success metrics, risk management, documentation, and training/change management
  • Domain knowledge in telemetry, time series, or industrial data (aerospace a plus) and prior work with agentic patterns on Mosaic AI
  • Databricks certifications and experience in enterprise deployments of the platform are preferred
Job Responsibility:
  • Lead the design and implementation of a Databricks-centric multi-agent processing engine
  • Design governed data ingestion, storage, and real-time processing workflows using Delta Lake, Structured Streaming, and Databricks Workflows
  • Own the model lifecycle with MLflow, including experiment tracking, registry/versioning, A/B testing, drift monitoring, and automated retraining pipelines
  • Architect low latency model serving endpoints with auto-scaling and confidence-based routing for sub-second agent decisioning
  • Establish robust data governance practices with Unity Catalog, including access control, audit trails, data quality, and compliance
  • Drive performance and cost optimization strategies, including auto-scaling, spot usage, and observability dashboards
  • Define production release strategies (blue-green), monitoring and alerting mechanisms, operational runbooks, and Service Level Objectives (SLOs)
  • Partner with engineering, MLOps, and product teams to deliver human-in-the-loop workflows and dashboards
  • Lead change management, training, and knowledge transfer while managing a parallel shadow processing path
  • Plan and coordinate phased delivery, success metrics, and risk mitigation
What we offer:
  • Flexible working hours
  • Hybrid work model
  • Cafeteria system
  • Generous referral bonuses (up to PLN6,000)
  • Additional revenue sharing opportunities
  • Ongoing guidance from dedicated Team Manager
  • Tailored technical mentoring from assigned technical leader
  • Dedicated team-building budget for online and on-site team events
  • Opportunities to participate in charitable initiatives and local sports programs
  • Supportive and inclusive work culture
  • Full-time

Principal Data Engineer

Our Principal Data Engineers are responsible for leading and delivering strategi...
Location:
United Kingdom, Bristol; London; Manchester; Swansea
Salary:
100000.00 - 115000.00 GBP / Year
Made Tech
Expiration Date:
Until further notice
Requirements:
  • Understanding of the issues and challenges that the public sector faces in delivering services that make the best use of data and digital capabilities, transforming legacy infrastructure, and taking an innovative and user-centric approach
  • Ability to innovate and take learnings from the commercial sector, other countries and advances in technology and apply them to UK Public Sector challenges to create tangible solutions for our clients
  • Experience building trusted advisor relationships with senior client stakeholders within the public sector
  • Experience of building and leading high performing, consulting teams and creating the leveraged engagements to provide a cost-effective, profitable, successful client-facing delivery
  • Leadership of bids and solution shaping to produce compelling proposals that help Made Tech win new business and grow the industry
  • Experience of managing third-party partnerships and suppliers (in conjunction with Made Tech colleagues) to provide a consolidated and seamless delivery team to clients
  • Experience in delivering complex and difficult engagements that span multiple capabilities for user-facing digital and data services in the public sector
  • Experience in identifying opportunities based on client needs and developing targeted solutions to progress the development of the opportunity
  • Experience of working with sales professionals and commercial responsibility for strategic organisational goals
  • Experience working directly with customers and users within a technology consultancy
Job Responsibility:
  • Collaborate with clients to understand their needs, provide solution advice in your role as a trusted advisor and shape solutions that leverage Made Tech's wider capabilities and credentials
  • Assess project performance as a part of the billable delivery team, Quality Assure (QA) the deliverables and outcomes, and ensure client satisfaction. Coach and mentor team members as well as providing direction to enable them to achieve their engagement outcomes and to develop their careers
  • Act as a Technical Authority of the Data & AI capability to provide oversight and ensure alignment with internal and industry best practices. Ensure engagement experience is captured and used to improve standards and contribute to Made Tech knowledge
  • Participate in business development activities, including bids and pre-sales within the account, industry and practice. Coach team members on their contributions and oversee the relevant technical aspects of the proposal submission
  • Undertake people management responsibilities, including performance reviews and professional development of your engagement and practice colleagues
  • Serve as a thought leader within Made Tech, our account engagements and the wider public sector and represent the company at industry events
What we offer:
  • 30 days of paid annual leave + bank holidays
  • Flexible Parental Leave
  • Remote Working
  • Paid counselling as well as financial and legal advice
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events
  • Full-time

Consulting AI / Data Engineer

As a Consulting AI / Data Engineer, you will design, build, and optimise enterpr...
Location:
Australia, Sydney
Salary:
Not provided
DyFlex Solutions
Expiration Date:
Until further notice
Requirements:
  • Hands-on data engineering experience in production environments
  • Strong proficiency in Python and SQL
  • Experience with at least one additional language (e.g. Java, TypeScript/JavaScript)
  • Experience with modern frameworks such as Apache Spark, Airflow, dbt, Kafka, or Flink
  • Background in building ML pipelines, MLOps practices, or feature stores is highly valued
  • Proven expertise in relational databases, data modelling, and query optimisation
  • Demonstrated ability to solve complex technical problems independently
  • Excellent communication skills with ability to engage clients and stakeholders
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility:
  • Build and maintain scalable data pipelines for ingesting, transforming, and delivering data
  • Manage and optimise databases, warehouses, and cloud storage solutions
  • Implement data quality frameworks and testing processes to ensure reliable systems
  • Design and deliver cloud-based solutions (AWS, Azure, or GCP)
  • Take technical ownership of project components and lead small development teams
  • Engage directly with clients, translating business requirements into technical solutions
  • Champion best practices including version control, CI/CD, and infrastructure as code
What we offer:
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP and Joule, plus Databricks, ML/AI tools and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, remote working, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with mentoring from senior engineers
  • Exposure to diverse industries and client environments
  • Full-time

Principal Consulting AI / Data Engineer

As a Principal Consulting AI / Data Engineer, you will design, build, and optimi...
Location:
Australia, Sydney
Salary:
Not provided
DyFlex Solutions
Expiration Date:
Until further notice
Requirements:
  • Proven expertise in delivering enterprise-grade data engineering and AI solutions in production environments
  • Strong proficiency in Python and SQL, plus experience with Spark, Airflow, dbt, Kafka, or Flink
  • Experience with cloud platforms (AWS, Azure, or GCP) and Databricks
  • Ability to confidently communicate and present at C-suite level, simplifying technical concepts into business impact
  • Track record of engaging senior executives and influencing strategic decisions
  • Strong consulting and stakeholder management skills with client-facing experience
  • Background in MLOps, ML pipelines, or AI solution delivery highly regarded
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility:
  • Design, build, and maintain scalable data and AI solutions using Databricks, cloud platforms, and modern frameworks
  • Lead solution architecture discussions with clients, ensuring alignment of technical delivery with business strategy
  • Present to and influence executive-level stakeholders, including boards, C-suite, and senior directors
  • Translate highly technical solutions into clear business value propositions for non-technical audiences
  • Mentor and guide teams of engineers and consultants to deliver high-quality solutions
  • Champion best practices across data engineering, MLOps, and cloud delivery
  • Build DyFlex’s reputation as a trusted partner in Data & AI through thought leadership and client advocacy
What we offer:
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP and Joule, plus Databricks, ML/AI tools and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with opportunities to lead large-scale client programs
  • Exposure to diverse industries and client environments, including executive-level engagement
  • Full-time

AIML Lead Engineer

We build breakthrough software products that power digital businesses. We are an...
Location:
India, Bangalore
Salary:
Not provided
3Pillar Global
Expiration Date:
Until further notice
Requirements:
  • 8+ years of total IT experience, with at least 4 years in AI/ML development
  • Strong proficiency in Python and ML frameworks (TensorFlow, PyTorch, Scikit-Learn)
  • Experience with NLP libraries such as spaCy, Hugging Face Transformers, etc.
  • Solid understanding of AI/ML algorithms, data preprocessing, and model evaluation techniques
  • Hands-on experience with Generative AI, LLMs, and Agentic AI
  • Working knowledge of MLOps tools and CI/CD pipelines for AI model deployment
  • Familiarity with computer vision frameworks (OpenCV, etc.)
  • Excellent problem-solving and communication skills
  • Ability to lead and mentor junior engineers
Job Responsibility:
  • Lead the design and implementation of AI/ML models and solutions for complex business problems
  • Work on NLP, LLMs, and Generative AI to build intelligent systems and conversational agents
  • Develop and optimize models using Python, TensorFlow, PyTorch, and Scikit-Learn
  • Apply deep learning and transformer-based architectures (e.g., BERT, GPT, etc.) for NLP and vision tasks
  • Implement computer vision solutions using OpenCV and related tools
  • Collaborate with cross-functional teams to integrate AI models into production systems
  • Apply MLOps best practices and manage CI/CD pipelines for model deployment
  • Stay updated with the latest AI research, LLM, and Agentic AI trends, and drive innovation across teams
  • Full-time

Senior AI Data Engineer

We are looking for a Senior AI Data Engineer to join an exciting project for our...
Location:
Poland, Warsaw
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Data Science, Artificial Intelligence, or a related field
  • Several years of experience in AI and Machine Learning development, preferably in Customer Care solutions
  • Strong proficiency in Python and NLP frameworks
  • Hands-on experience with Azure AI services (e.g., Azure Machine Learning, Cognitive Services, Bot Services)
  • Solid understanding of cloud architectures and microservices on Azure
  • Experience with CI/CD pipelines and MLOps
  • Excellent leadership and communication skills
  • Analytical mindset with strong problem-solving abilities
  • Polish and English at a minimum B2 level
Job Responsibility:
  • Lead the development and implementation of AI-powered features for a Customer Care platform
  • Design and deploy Machine Learning and NLP models to automate customer inquiries
  • Collaborate with DevOps and cloud architects to ensure a high-performance, scalable, and secure Azure-based architecture
  • Optimize AI models to enhance customer experience
  • Integrate Conversational AI, chatbots, and language models into the platform
  • Evaluate emerging technologies and best practices in Artificial Intelligence
  • Mentor and guide a team of AI/ML developers
What we offer:
  • Flexible working hours
  • Hybrid work model, allowing employees to divide their time between home and modern offices in key Polish cities
  • A cafeteria system that allows employees to personalize benefits by choosing from a variety of options
  • Generous referral bonuses, offering up to PLN6,000 for referring specialists
  • Additional revenue sharing opportunities for initiating partnerships with new clients
  • Ongoing guidance from a dedicated Team Manager for each employee
  • Tailored technical mentoring from an assigned technical leader, depending on individual expertise and project needs
  • Dedicated team-building budget for online and on-site team events
  • Opportunities to participate in charitable initiatives and local sports programs
  • A supportive and inclusive work culture with an emphasis on diversity and mutual respect
  • Full-time

Lead Software Engineer

Prism Data is building the future of credit scoring with modern technology and d...
Location:
United States, NYC or San Diego (La Jolla/UTC)
Salary:
160000.00 - 195000.00 USD / Year
Prism Data
Expiration Date:
Until further notice
Requirements:
  • Software engineering experience, ideally in a high-growth or early stage startup
  • Strong expertise in modern software practices and technologies, with ability to quickly adapt to Prism’s services stack
  • Deep hands-on experience with Python, Kubernetes, and AWS, with specific experience supporting machine-learning models from prototyping through production operations
  • Proactive bias toward solving technical problems and navigating complex design decisions
  • Excellent communication skills with the ability to bridge gaps between technical teams and non-technical stakeholders
Job Responsibility:
  • Contribute to Prism’s engineering culture by regularly mentoring other engineers and facilitating knowledge sharing, learning, and continuous improvement
  • Advance an architectural runway to support product extensibility and scalability while balancing sustainability
  • Architect and lead the enhancement of production ML-serving infrastructure, including continuous advancement of MLOps capabilities
  • Design, build, and operate enterprise-grade APIs and other platform services, and lead strategic co-development opportunities with key partners and data providers
  • Drive technical direction across platform and model-serving layers, ensuring observability, security, and performance at scale
  • Partner cross-functionally with product, legal, and go-to-market teams to cohesively develop new capabilities that expand cash flow analytics use cases
What we offer:
  • Medical, dental, and vision insurance
  • 401(k)
  • Equity-based compensation
  • Full-time