Data & AI Engineer

Amaris Consulting

Location:
United States, Cambridge

Contract Type:
Not provided

Salary:

58.00 - 79.00 USD / Hour

Job Description:

We are looking for a Data & AI Engineer to design modern data platforms and contribute to advanced AI and generative AI use cases in a fast-growing, international environment. You will work with teams across data, analytics, AI, and cloud to create scalable, secure, and high-impact solutions for our clients.

Job Responsibility:

  • Design and implement scalable data lake architectures using Amazon S3
  • Build and optimize serverless analytics solutions with Amazon Athena
  • Define efficient data models, partitioning strategies, and metadata management
  • Develop robust ETL/ELT pipelines using AWS Glue, Lambda, and Step Functions
  • Ingest structured and semi-structured data from internal systems, APIs, and external sources
  • Ensure data quality, validation, and schema evolution across pipelines
  • Implement data governance with AWS Lake Formation and IAM
  • Manage encryption, access control, and auditability for sensitive datasets
  • Monitor and optimize performance and cloud costs using CloudWatch and AWS cost tools
  • Enable self-service data access for analytics, BI, and data science teams
  • Support downstream use cases: dashboards, analytics platforms, and ML workloads
  • Contribute to documentation, standards, and best practices
  • Design, train, and evaluate NLP and LLM-based solutions, including Retrieval-Augmented Generation (RAG), chatbots and information retrieval, and text generation, summarization, and information extraction
  • Prepare and curate data for training, fine-tuning, and evaluation
  • Build and manage AI agents and workflows using tools such as AWS Bedrock, LangGraph, or similar
  • Stay up to date with generative AI innovations and apply them in real-world solutions
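The partitioning strategies and metadata layout mentioned in the responsibilities above can be sketched in a few lines. This is an illustrative example only, not part of the posting: the dataset name and filename are hypothetical, but the `year=/month=/day=` key layout is the Hive-style convention that Athena and Glue can use for partition pruning on S3.

```python
from datetime import date

def partition_key(dataset: str, event_date: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 object key (year=/month=/day=).

    Athena and Glue can prune on these prefixes when a query
    filters by date, so only the matching objects are scanned.
    """
    return (
        f"{dataset}/year={event_date.year}"
        f"/month={event_date.month:02d}"
        f"/day={event_date.day:02d}/{filename}"
    )

print(partition_key("events", date(2026, 1, 16), "part-0000.parquet"))
# events/year=2026/month=01/day=16/part-0000.parquet
```

Zero-padding the month and day keeps prefixes lexicographically sortable, which makes range listings over the bucket cheap.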

Requirements:

  • 4+ years of experience in data engineering or cloud data platforms
  • Strong hands-on experience with Amazon S3 and Amazon Athena
  • Excellent SQL skills for analytical querying and optimization
  • Solid experience with AWS services: Glue, Lambda, IAM, CloudWatch
  • Experience provisioning cloud infrastructure and setting up CI/CD pipelines (GitHub Actions)
  • Strong knowledge of data formats (Parquet, ORC, JSON, CSV)
  • Programming experience in Python, PySpark, or similar
  • Experience designing and operating data lakes or analytics platforms
  • Good understanding of data lifecycle management
  • Familiarity with BI tools and analytics consumption patterns
  • Hands-on experience with LLMs and generative AI in a professional environment
  • Knowledge of deep learning and data science libraries (NumPy, Pandas, TensorFlow, PyTorch)
  • Collaborative mindset
  • Comfortable working in cross-functional and international teams
  • Strong collaboration with platform, frontend, and AI engineers
  • Ability to translate business needs into data-driven solutions
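As a hedged sketch of the analytical SQL the requirements call for, the snippet below combines a CTE with a window function. The table and data are made up for illustration, and SQLite stands in for Athena here (the SQL is similar, but this is not Athena itself):

```python
import sqlite3

# Hypothetical sales data; illustrates a CTE plus a window function
# of the kind used for analytical querying and ranking.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("east", 300), ("west", 200)])

query = """
WITH totals AS (                                  -- CTE: per-region totals
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
)
SELECT region, total,
       RANK() OVER (ORDER BY total DESC) AS rnk   -- window function
FROM totals
ORDER BY rnk
"""
rows = list(conn.execute(query))
print(rows)  # [('east', 400, 1), ('west', 200, 2)]
```

The CTE keeps the aggregation readable, and the window function ranks regions without a second round trip over the data.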

Nice to have:

Experience in healthcare or clinical data environments is a strong plus

What we offer:
  • An international community bringing together more than 110 different nationalities
  • An environment where trust is central: 70% of our leaders started their careers at the entry level
  • A strong training system with our internal Academy and more than 250 modules available
  • A dynamic work environment that frequently comes together for internal events (afterworks, team buildings, etc.)

Additional Information:

Job Posted:
January 16, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Data & AI Engineer

Data Engineer – AI Insights

We are looking for an experienced Data Engineer with AI Insights to design and d...

Location:
United States

Salary:
Not provided

Thirdeye Data

Expiration Date:
Until further notice

Requirements:
  • 5+ years of Data Engineering experience with exposure to AI/ML workflows
  • Advanced expertise in Python programming and SQL
  • Hands-on experience with Snowflake (data warehousing, schema design, performance tuning)
  • Experience building scalable ETL/ELT pipelines and integrating structured/unstructured data
  • Familiarity with LLM and RAG workflows, and how data supports these AI applications
  • Experience with reporting/visualization tools (Tableau)
  • Strong problem-solving, communication, and cross-functional collaboration skills

Job Responsibility:
  • Develop and optimize ETL/ELT pipelines using Python, SQL, and Snowflake to ensure high-quality data for analytics, AI, and LLM workflows
  • Build and manage Snowflake data models and warehouses, focusing on performance, scalability, and security
  • Collaborate with AI/ML teams to prepare datasets for model training, inference, and LLM/RAG-based solutions
  • Automate data workflows, validation, and monitoring for reliable AI/ML execution
  • Support RAG pipelines and LLM data integration, enabling AI-driven insights and knowledge retrieval
  • Partner with business and analytics teams to transform raw data into actionable AI-powered insights
  • Contribute to dashboarding and reporting using Tableau, Power BI, or equivalent tools

Employment Type:
Full-time

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...

Location:
India, Chennai, Madurai, Coimbatore

Salary:
Not provided

OptiSol Business Solutions

Expiration Date:
Until further notice

Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams

Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations

What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility

Employment Type:
Full-time

Quality Engineer - AI and Data Platforms

This is a pioneering Quality Engineer role at the intersection of data engineeri...

Location:
United Kingdom, Manchester

Salary:
44000.00 - 66000.00 GBP / Year

Matillion

Expiration Date:
Until further notice

Requirements:
  • Solid foundation in data engineering: including SQL, ETL/ELT design, and specific experience building data pipelines and managing data movement
  • Strong practical AI experience: you have used, experimented with, and are an advocate for an AI-first approach to quality engineering
  • Proficiency in coding in Java or JavaScript to navigate the codebase and implement quality frameworks
  • Demonstrated Autonomy, Curiosity, and Problem-solving skills, with a willingness to look at challenges in a different way and ask for assistance as needed
  • Experience in managing end-to-end testing of SaaS applications, including developing and maintaining efficient test automation tooling

Job Responsibility:
  • Leveraging AI and agentic solutions, including our agentic AI product Maia, to accelerate investigation, generate test cases, and increase quality assurance across the Data Productivity Cloud
  • Performing root cause analysis on pipeline stability issues, particularly identifying why DPC pipelines run out of memory (OOM) within the agents
  • Building pipelines to automate every process, solutionizing problems to increase overall team and product productivity
  • Acting as a crucial bridge by collaborating extensively with various teams, raising problems, and ensuring that fixes are implemented effectively
  • Adopting, implementing, and championing shift-left testing practices across the team, leading an automation-first approach

What we offer:
  • Company Equity
  • 30 days holiday + bank holidays
  • 5 days paid volunteering leave
  • Health insurance
  • Life Insurance
  • Pension
  • Access to mental health support

Employment Type:
Full-time

Data Engineer with Generative AI Expertise

We are looking for a skilled Data Engineer with expertise in Generative AI to jo...

Location:
India, Jaipur

Salary:
Not provided

InfoObjects

Expiration Date:
Until further notice

Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields
  • 2-6 years of hands-on experience in Data Engineering
  • Proficiency in Generative AI frameworks (e.g., GPT, DALL-E, Stable Diffusion)
  • Strong programming skills in Python, SQL, and familiarity with Java or Scala
  • Experience with data tools and platforms such as Apache Spark, Hadoop, or similar
  • Knowledge of cloud platforms like AWS, Azure, or GCP
  • Familiarity with MLOps practices and AI model deployment
  • Excellent problem-solving and communication skills

Job Responsibility:
  • Design, develop, and maintain robust data pipelines and workflows
  • Integrate Generative AI models into existing data systems to enhance functionality
  • Collaborate with cross-functional teams to understand business needs and translate them into scalable data and AI solutions
  • Optimize data storage, processing, and retrieval systems for performance and scalability
  • Ensure data security, quality, and governance across all processes
  • Stay updated with the latest advancements in Generative AI and data engineering practices

Principal Consulting AI / Data Engineer

As a Principal Consulting AI / Data Engineer, you will design, build, and optimi...

Location:
Australia, Sydney

Salary:
Not provided

DyFlex Solutions

Expiration Date:
Until further notice

Requirements:
  • Proven expertise in delivering enterprise-grade data engineering and AI solutions in production environments
  • Strong proficiency in Python and SQL, plus experience with Spark, Airflow, dbt, Kafka, or Flink
  • Experience with cloud platforms (AWS, Azure, or GCP) and Databricks
  • Ability to confidently communicate and present at C-suite level, simplifying technical concepts into business impact
  • Track record of engaging senior executives and influencing strategic decisions
  • Strong consulting and stakeholder management skills with client-facing experience
  • Background in MLOps, ML pipelines, or AI solution delivery highly regarded
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field

Job Responsibility:
  • Design, build, and maintain scalable data and AI solutions using Databricks, cloud platforms, and modern frameworks
  • Lead solution architecture discussions with clients, ensuring alignment of technical delivery with business strategy
  • Present to and influence executive-level stakeholders, including boards, C-suite, and senior directors
  • Translate highly technical solutions into clear business value propositions for non-technical audiences
  • Mentor and guide teams of engineers and consultants to deliver high-quality solutions
  • Champion best practices across data engineering, MLOps, and cloud delivery
  • Build DyFlex’s reputation as a trusted partner in Data & AI through thought leadership and client advocacy

What we offer:
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP, and Joule, plus Databricks, ML/AI tools, and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with opportunities to lead large-scale client programs
  • Exposure to diverse industries and client environments, including executive-level engagement

Employment Type:
Full-time

Consulting AI / Data Engineer

As a Consulting AI / Data Engineer, you will design, build, and optimise enterpr...

Location:
Australia, Sydney

Salary:
Not provided

DyFlex Solutions

Expiration Date:
Until further notice

Requirements:
  • Hands-on data engineering experience in production environments
  • Strong proficiency in Python and SQL
  • Experience with at least one additional language (e.g. Java, Typescript/Javascript)
  • Experience with modern frameworks such as Apache Spark, Airflow, dbt, Kafka, or Flink
  • Background in building ML pipelines, MLOps practices, or feature stores is highly valued
  • Proven expertise in relational databases, data modelling, and query optimisation
  • Demonstrated ability to solve complex technical problems independently
  • Excellent communication skills with ability to engage clients and stakeholders
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field

Job Responsibility:
  • Build and maintain scalable data pipelines for ingesting, transforming, and delivering data
  • Manage and optimise databases, warehouses, and cloud storage solutions
  • Implement data quality frameworks and testing processes to ensure reliable systems
  • Design and deliver cloud-based solutions (AWS, Azure, or GCP)
  • Take technical ownership of project components and lead small development teams
  • Engage directly with clients, translating business requirements into technical solutions
  • Champion best practices including version control, CI/CD, and infrastructure as code

What we offer:
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP, and Joule, plus Databricks, ML/AI tools, and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, remote working, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with mentoring from senior engineers
  • Exposure to diverse industries and client environments

Employment Type:
Full-time

Data & AI Impact Consultant Engineer

As a Data Consultant, you are a cornerstone of our Data & AI Business Unit – tec...

Location:
Belgium

Salary:
Not provided

Inetum

Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience in data engineering or analytics projects
  • Good knowledge of Dutch and English (French is an asset)
  • Familiar with Azure stack (Data Factory, Synapse, Storage, Purview, Functions) and/or Databricks
  • Structural and flexible thinking
  • Interest in AI and its role in modern data products (prompt engineering, GenAI, monitoring, automation)
  • Ability to create business impact and understand outcomes
  • Team-oriented mindset

Job Responsibility:
  • Design and build modern data platforms (Azure, Databricks, Data Fabric) with focus on reusability and AI-readiness
  • Deliver customer- and future-oriented value
  • Help colleagues grow through coaching, feedback, or knowledge sharing
  • Provide consultancy with solution-oriented approach
  • Take initiative in client development, talent growth, or community engagement

What we offer:
  • Company car
  • Fuel/charging card
  • Group insurance
  • Hospitalization coverage
  • 32 days of annual leave
  • Hybrid working options
  • Satellite offices
  • Continuous learning & development
  • Training and certification programs

Employment Type:
Full-time

Data & AI Impact Consultant Engineer

Data Consultant role in Data & AI Business Unit, designing and building modern d...

Location:
Belgium

Salary:
Not provided

Inetum

Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience in data engineering or analytics projects
  • Good knowledge of Dutch and English (French is an asset)
  • Familiarity with Azure stack (Data Factory, Synapse, Storage, Purview, Functions) and/or Databricks
  • Structural and flexible thinking
  • Interest in AI and its role in modern data products (prompt engineering, GenAI, monitoring, automation)
  • Ability to create business impact and understand outcomes
  • Team-oriented mindset

Job Responsibility:
  • Design and build modern data platforms (Azure, Databricks, Data Fabric) with focus on reusability and AI-readiness
  • Deliver value today while preparing for tomorrow
  • Help colleagues grow through coaching, feedback, or knowledge sharing
  • Provide consultancy with solution-oriented approach
  • Take initiative beyond projects to help build Inetum

What we offer:
  • Company car
  • Fuel/charging card
  • Group insurance
  • Hospitalization coverage
  • 32 days of annual leave
  • Hybrid working options
  • Satellite offices
  • Training and certification programs

Employment Type:
Full-time