
Data Migration AI Engineer

NTT DATA

Location:
India, Bangalore

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are currently seeking a Data Migration AI Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:

Role Overview

The Data Migration AI Engineer bridges NTT DATA's Data as Agentic Product (DaaP) platform and the hands-on execution of SEI's Informatica-to-dbt migration program. This role operates in two sequential phases, each with a distinct mandate.

1. In Phase 1, the AI Engineer works closely with the Onshore Technical Lead and dbt SMEs to perform context engineering — crafting, iterating, and validating the prompts, agent configurations, and system context that guide DaaP's code generation agents toward producing Python EL pipelines and dbt transformation models that conform to SEI's architecture standards, naming conventions, and data modeling patterns. The goal of Phase 1 is a validated, repeatable DaaP configuration that consistently produces migration-ready code output with minimal rework.

2. Once the DaaP configuration is deemed satisfactory, the AI Engineer transitions into Phase 2 — active participation in the migration execution itself. In this phase, the AI Engineer applies AI-assisted generation to accelerate the conversion of Informatica mappings, sessions, and workflows into Python extract/load scripts and dbt SQL models, working alongside Data Engineers, Informatica SMEs, and dbt SMEs across the SWP and IMS migration workstreams.

Key responsibilities span prompt and agent configuration, output validation and iteration, AI-assisted code generation, and continuous improvement; they are listed in full under Job Responsibility below.
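To make the Phase 1 context engineering concrete, below is a minimal Python sketch of how a prompt configuration with system context, few-shot examples, and output constraints might be assembled. DaaP's actual configuration format is not public, so every name here (SYSTEM_CONTEXT, FEW_SHOT, build_prompt) is a hypothetical illustration, not the platform's real API.

# Hypothetical illustration: DaaP's real agent-configuration format is not
# public, so SYSTEM_CONTEXT, FEW_SHOT, and build_prompt are assumed names.

SYSTEM_CONTEXT = """You convert Informatica transformation logic into dbt models.
Output constraints:
- Layer models as staging -> intermediate -> marts.
- Name staging models stg_<source>__<entity>.
- Include not_null and unique tests for every key column in schema.yml."""

# One few-shot pair: a simplified Informatica expression and the dbt SQL it
# should become. A real configuration would carry several such examples.
FEW_SHOT = [
    {
        "input": "EXP_TRIM_NAME: LTRIM(RTRIM(CUST_NAME))",
        "output": "select trim(cust_name) as cust_name\n"
                  "from {{ source('crm', 'customers') }}",
    },
]

def build_prompt(mapping_xml: str) -> list[dict]:
    """Assemble a chat-style prompt: system context, examples, then the task."""
    messages = [{"role": "system", "content": SYSTEM_CONTEXT}]
    for shot in FEW_SHOT:
        messages.append({"role": "user", "content": shot["input"]})
        messages.append({"role": "assistant", "content": shot["output"]})
    messages.append({"role": "user",
                     "content": "Convert this mapping:\n" + mapping_xml})
    return messages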

Job Responsibility:

  • Design, iterate, and validate prompt templates that guide DaaP agents to produce Python EL and dbt code aligned with SEI's architecture and standards
  • Configure DaaP's Data Discovery, Data Mapping, and ETL & Code Review agents for the SEI migration context
  • Establish system context, few-shot examples, and output constraints that enforce SEI's coding conventions, model layering (staging → intermediate → marts), and dbt testing patterns
  • Define and document the context engineering artifacts (prompts, agent configs, example inputs/outputs) used to tune DaaP output
  • Evaluate DaaP-generated Python EL scripts and dbt models against agreed quality criteria in collaboration with the Onshore Technical Lead and dbt SMEs
  • Identify failure modes, hallucinations, and structural deviations in generated code and iterate on context configuration to resolve them
  • Establish a validation gate — a defined set of criteria the DaaP output must meet before Phase 2 execution begins (a minimal sketch of such a gate follows this list)
  • Document context engineering decisions, prompt versions, and validation results for program governance and knowledge transfer
  • Monitor DaaP output quality across migration waves and refine context configurations as new Informatica complexity patterns emerge
  • Apply DaaP and validated prompt configurations to accelerate conversion of Informatica mappings and sessions into Python EL scripts
  • Generate dbt model scaffolding (staging, intermediate, marts) from Informatica transformation logic in collaboration with Informatica SMEs
  • Review and refine AI-generated code before handoff to Data Engineers for integration testing and validation
  • Identify patterns in Informatica constructs (lookups, aggregators, routers, update strategies) that benefit from AI-assisted translation and develop reusable generation templates
  • Work with Informatica SMEs to accurately capture source transformation logic as structured input context for DaaP agents
  • Partner with dbt SMEs to ensure generated dbt models conform to architectural standards and pass code review
  • Maintain a prompt and configuration library versioned in GitLab alongside migration artifacts
  • Contribute to post-migration documentation and AI tooling runbooks for steady-state use
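The validation gate mentioned above can be pictured as a script run over each batch of generated models. The sketch below checks only naming conventions and the presence of Jinja refs; the regexes and directory layout are illustrative assumptions, not SEI's actual criteria.

# Minimal validation-gate sketch for generated dbt models. The naming
# regexes and project layout are illustrative assumptions.
import re
from pathlib import Path

NAMING = {
    "staging": re.compile(r"^stg_[a-z0-9]+__[a-z0-9_]+\.sql$"),
    "intermediate": re.compile(r"^int_[a-z0-9_]+\.sql$"),
    "marts": re.compile(r"^(dim|fct)_[a-z0-9_]+\.sql$"),
}

def validate_models(project_dir: str) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    for layer, pattern in NAMING.items():
        for sql_file in Path(project_dir, "models", layer).glob("*.sql"):
            if not pattern.match(sql_file.name):
                violations.append(f"{layer}: bad model name {sql_file.name}")
            if "{{" not in sql_file.read_text():
                # Crude check: every model should use ref()/source() Jinja.
                violations.append(f"{layer}: {sql_file.name} has no Jinja refs")
    return violations

if __name__ == "__main__":
    problems = validate_models("sei_dbt_project")
    print("GATE PASSED" if not problems else "\n".join(problems))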

Requirements:

3+ years of hands-on experience with LLM-based tools, prompt engineering, or AI agent configuration

Additional Information:

Job Posted:
May 16, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Data Migration AI Engineer

Data Architect - Enterprise Data & AI Solutions

We are looking for a visionary Data Architect who can translate enterprise data ...
Location:
India, Chennai; Madurai; Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong background in RDBMS design, data modeling, and schema optimization
  • Advanced SQL skills, including performance tuning and analytics functions
  • Proven expertise in data warehouses, data lakes, and lakehouse architectures
  • Proficiency in ETL/ELT tools (Informatica, Talend, dbt, Glue)
  • Hands-on with cloud platforms (AWS Redshift, Azure Synapse, GCP BigQuery, Snowflake)
  • Familiarity with GenAI frameworks (OpenAI, Vertex AI, Bedrock, Azure OpenAI)
  • Experience with real-time streaming (Kafka, Kinesis, Flink) and big data ecosystems (Hadoop, Spark)
  • Strong communication skills with the ability to present data insights to executives
  • 8+ years in data architecture, enterprise data strategy, or modernization programs
  • Hands-on with AI-driven analytics and GenAI adoption
Job Responsibility:
  • Design scalable data models, warehouses, lakes, and lakehouse solutions
  • Build data pipelines to support advanced analytics, reporting, and predictive insights
  • Integrate GenAI frameworks to enhance data generation, automation, and summarization
  • Define and enforce enterprise-wide data governance, standards, and security practices
  • Drive data modernization initiatives, including cloud migrations
  • Collaborate with stakeholders, engineers, and AI/ML teams to align solutions with business goals
  • Enable real-time and batch insights through dashboards, AI-driven recommendations, and predictive reporting
  • Mentor teams on best practices in data and AI adoption
What we offer:
  • Opportunity to design next-generation enterprise data & AI architectures
  • Exposure to cutting-edge GenAI platforms to accelerate innovation
  • Collaborate with experts across cloud, data engineering, and AI practices
  • Access to learning, certifications, and leadership mentoring
  • Competitive pay with opportunities for career growth and leadership visibility
Employment Type: Fulltime

Data Migration Consultant

As a Data Migration Consultant you will help our clients with their digital tran...
Location:
Belgium, Flanders/Brussels
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • At least 3 years of experience as a Database Administrator, Data Engineer or Software Engineer
  • Affinity for data, data structures and data mapping
  • Knowledge and experience with data modeling and relational algebra
  • Good knowledge and experience with SQL
  • Knowledge of a programming language like Python is a plus
  • Practical experience with Microsoft cloud or comparable alternatives
  • Experience with using and implementing AI in your and the client's workflow is a plus
  • Work accurately and precisely
  • Analytical thinking, communication skills and a practical attitude
  • Proficient in English
Job Responsibility:
  • Help clients with their digital transformation
  • Build data migrations
  • Ensure data from source systems is converted to fit the target system
  • Develop selections, conversion rules, controls
  • Ensure data migration is complete and correct
  • Ensure client can successfully deploy the new system filled with data
What we offer:
  • Mobility options (including a company car)
  • Insurance coverage
  • Meal vouchers
  • Eco-cheques
  • Continuous learning opportunities through the Sopra Steria Academy
  • Team events

Principal Data Engineer

We are on the lookout for a Principal Data Engineer to help define and lead the ...
Location:
United Kingdom
Salary:
Not provided
Dotdigital
Expiration Date:
Until further notice
Requirements:
  • Extensive experience delivering Python-based projects in the data engineering space
  • Extensive experience working with SQL and NoSQL database technologies (e.g. SQL Server, MongoDB & Cassandra)
  • Proven experience with modern data warehousing and large-scale data processing tools (e.g. Snowflake, dbt, BigQuery, ClickHouse)
  • Hands on experience with data orchestration tools like Airflow, Dagster or Prefect
  • Experience using cloud environments (e.g. Azure, AWS, GCP) to process, store and surface large scale data
  • Experience using Kafka or similar event-based architectures (e.g. Pub/Sub via AWS SQS, Azure Event Hubs, AWS Kinesis)
  • Strong grasp of data architecture and data modelling principles for both OLAP and OLTP workloads
  • Capable in the wider software development lifecycle in terms of agile ways of working and continuous integration/deployment of data solutions
  • Experience as a Lead or Principal Engineer on large-scale data initiatives or product builds
  • Demonstrated ability to architect data systems and data structures for high volume, high throughput systems
Job Responsibility:
  • Lead the design and implementation of scalable, secure and resilient data systems across streaming, batch and real-time use cases
  • Architect data pipelines, models, and storage solutions that power analytical and product use cases, using primarily Python and SQL via orchestration tooling that runs workloads in the cloud (a minimal orchestration sketch follows this list)
  • Leverage AI to automate both data processing and engineering processes
  • Assure and drive best practices relating to data infrastructure, governance, security and observability
  • Work with technologists across multiple teams to deliver coherent features and data outcomes
  • Support the data team to help adopt data engineering principles
  • Identify, validate and promote new tools and technologies that improve the performance and stability of data services
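As a concrete picture of the orchestration work referenced above, here is a minimal Apache Airflow DAG of the shape the role describes (the schedule kwarg assumes Airflow 2.4+). The DAG id, schedule, and task callables are illustrative assumptions.

# Minimal Airflow 2.x sketch: a two-step extract/transform pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull events from the source system")

def transform():
    print("run SQL transformations against the warehouse")

with DAG(
    dag_id="daily_events_pipeline",  # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract runs before transform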
What we offer:
  • Parental leave
  • Medical benefits
  • Paid sick leave
  • Dotdigital day
  • Share reward
  • Wellbeing reward
  • Wellbeing Days
  • Loyalty reward
Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are see...
Location:
Ireland, Dublin
Salary:
Not provided
Solas IT Recruitment
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience as a Data Engineer working with distributed data systems
  • 4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
  • Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
  • Oozie, Airflow, or similar orchestration tools
  • Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
  • Experience with Databricks and Delta Lake (or equivalent big-data platforms)
  • Strong programming background in Python, Scala, or Java
  • Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
Job Responsibility:
  • Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
  • Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
  • Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
  • Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
  • Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
  • Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
  • Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
  • Integrate metadata management, lineage tracking, and governance standards across the platform
  • Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines (a minimal completeness-check sketch follows this list)
  • Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable
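The automated-validation item above can be illustrated with a minimal PySpark completeness check between a source extract and its curated target; the file paths and key column are placeholder assumptions.

# Minimal PySpark sketch: verify every source key made it to the target.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipeline_validation").getOrCreate()

def completeness_check(source_path: str, target_path: str, key: str) -> bool:
    """True when no source key is missing from the target."""
    src = spark.read.parquet(source_path)
    tgt = spark.read.parquet(target_path)
    # left_anti keeps source rows with no matching key in the target
    missing = src.join(tgt, on=key, how="left_anti").count()
    print(f"rows in source missing from target: {missing}")
    return missing == 0

if __name__ == "__main__":
    # Paths and key column are assumed for illustration.
    ok = completeness_check("/data/raw/orders", "/data/curated/orders", "order_id")
    print("complete" if ok else "incomplete")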

AI Solution Engineer

This role exists to bring AI-powered automation into real-world use across our c...
Location:
Canada, Mississauga
Salary:
139500.00 - 150000.00 CAD / Year
PointClickCare
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in computer science, Engineering, Information Systems, or equivalent practical experience
  • Experience building AI-powered workflows using Azure AI Foundry, Copilot Studio and/or Now Assist
  • 2–5 years of experience in software or cloud engineering
  • Exposure to AI agent orchestration or multi-agent systems
  • Hands-on experience with Microsoft Azure services (Azure AI, Azure Functions, Logic Apps, Cognitive Services)
  • Familiarity with retrieval-augmented generation (RAG) or vector database integration
  • Experience working with multiple LLMs (OpenAI GPT, Azure OpenAI, Gemini), prompt engineering, and AI-driven automation
  • Proficient in Python, C#, or JavaScript/TypeScript
  • 3-5 years' experience working in SaaS or enterprise environments
  • Familiar with CI/CD pipelines, Git, and cloud deployment practices
Job Responsibility:
  • Build and support AI agents and intelligent workflows using Microsoft Azure tools such as Azure OpenAI, AI Foundry, and Copilot Studio (a minimal sketch of such a call follows this list)
  • Design and implement AI-powered orchestration and automation for use cases such as configuration streamlining, onboarding automation, and data migration
  • Collaborate with cross-functional teams (integration, implementation, product, support) to deliver high-quality, scalable AI-driven solutions
  • Develop APIs, scripts, and tools to connect LLM-based agents with existing enterprise systems
  • Support testing, deployment, monitoring, and continuous improvement of AI workflows in production
  • Stay current with Microsoft’s AI platform roadmap and emerging industry trends
  • Contribute to the evolution of our internal AI delivery model and promote AI best practices across teams
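As a minimal sketch of the kind of building block these AI workflows rest on, the snippet below calls an Azure OpenAI chat deployment with the openai Python SDK (v1+). The endpoint, deployment name, and prompt content are placeholder assumptions.

# Minimal Azure OpenAI chat call; endpoint and deployment are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # an Azure *deployment* name, assumed here
    messages=[
        {"role": "system", "content": "You triage onboarding support tickets."},
        {"role": "user", "content": "Customer cannot import resident records."},
    ],
)
print(response.choices[0].message.content)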
What we offer:
  • Benefits starting from Day 1
  • Retirement Plan Matching
  • Flexible Paid Time Off
  • Wellness Support Programs and Resources
  • Parental & Caregiver Leaves
  • Fertility & Adoption Support
  • Continuous Development Support Program
  • Employee Assistance Program
  • Allyship and Inclusion Communities
  • Employee Recognition
Employment Type: Fulltime

Data Engineer - Automation & AI

Riverflex is partnering with a leading financial institution in the UAE on a str...
Location:
United Arab Emirates, Abu Dhabi or Dubai
Salary:
Not provided
Riverflex
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering or closely related engineering roles
  • Proven experience owning and shaping data engineering solutions, not only implementing individual pipelines
  • Strong hands-on experience with AWS-based data engineering, including: AWS Glue (jobs, transformations, orchestration), Spark (batch processing and transformations), Advanced SQL (complex logic, optimisation, performance tuning), End-to-end pipeline and workflow design
  • Solid (Python) engineering experience, including building reusable components and internal tooling
  • Demonstrated, practical experience applying Generative AI in engineering workflows, such as: Working with LLM APIs (e.g. AWS Bedrock, Azure AI Foundry, OpenAI), Prompt design for code generation, refactoring, and transformation, Understanding the limitations, failure modes, and risks of LLM-based automation
  • Experience designing AI-assisted engineering workflows or tools, for example: API-based services (e.g. FastAPI), MCP (or agent)-like orchestration patterns
  • Able to balance short-term PoC delivery with longer-term capability building
  • Experience in financial services or other regulated environments is a strong advantage
  • Ability to be based in the UAE for a minimum of 3 months, working full-time on-site (Abu Dhabi or Dubai)
Job Responsibility:
  • Data engineering & pipeline delivery: Design, build, and evolve AWS Glue–based data pipelines using Spark and SQL
  • Translate legacy SQL scripts and stored procedures into AWS Glue pipelines
  • Ensure migrated and newly built pipelines meet agreed standards for correctness, performance, and maintainability
  • AI-driven engineering acceleration: Apply Generative AI and agent-based techniques to accelerate data engineering tasks, including code generation/refactoring, pipeline development, and standardisation
  • Own the design and implementation of AI-assisted tooling that integrates directly into day-to-day engineering workflows
  • Codify successful patterns, reusable tools, and recommended ways of working for scaling beyond the PoC
  • AI tooling & experimentation: Work hands-on with Python and LLM APIs to build pragmatic, internal DE tools
  • Design effective prompts & interaction patterns for code generation & transformation (a minimal prompt sketch follows this list)
  • Evaluate and work with enterprise-grade AI platforms (e.g. AWS Bedrock, Azure AI Foundry) using GPT-4 / Claude-class models
  • Define practical rules of thumb and guardrails (e.g. where automation works, where it breaks down, where human intervention is required)
Employment Type: Fulltime
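The prompt-design responsibility above can be pictured with a minimal Python sketch of a guardrailed code-translation prompt; the rule wording and helper name are assumptions, not Riverflex's actual patterns.

# Hypothetical sketch: GUARDRAILS wording and translation_prompt are assumed.
GUARDRAILS = (
    "Translate the legacy T-SQL below into a PySpark job for AWS Glue.\n"
    "Rules:\n"
    "- Preserve join and filter semantics exactly.\n"
    "- Replace cursors with DataFrame operations; never emit row-by-row loops.\n"
    "- If a construct cannot be translated safely, emit a TODO comment instead "
    "of guessing.\n"
)

def translation_prompt(legacy_sql: str) -> str:
    """Wrap legacy code in the guardrailed instruction block."""
    return GUARDRAILS + "\n--- legacy code ---\n" + legacy_sql

if __name__ == "__main__":
    print(translation_prompt(
        "CREATE PROCEDURE upd_bal AS UPDATE acct SET bal = bal * 1.01"
    ))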

Forward Deployed Engineer - Data Migration & Data Consolidation Platforms

As a Forward Deployed Engineer (FDE) for Data Migration & Data Consolidation Pla...
Location:
United States
Salary:
Not provided
Rackspace
Expiration Date:
Until further notice
Requirements:
  • 7-10+ years of progressive experience in enterprise data engineering, data migration, or large-scale system integration roles within complex, multi-platform environments
  • 3-5+ years directly leading end-to-end data migration or multi-system consolidation programs for Global Enterprises and Industry Leaders, with full ownership of technical delivery and client outcomes
  • Demonstrated client-facing experience serving as a trusted technical advisor to C-level executives, enterprise architecture teams, and cross-functional business stakeholders
  • Proven industry depth in at least two of the following verticals: Healthcare, Financial Services, Manufacturing, Retail, Energy & Utilities, or Public Sector
  • Hands-on migration complexity: successfully delivered programs involving at least 3+ heterogeneous source systems, 100M+ records, complex master data harmonization, and multi-phase cutover execution
  • Advanced proficiency in Python and SQL with working experience in PySpark and TypeScript/JavaScript
  • Hands-on expertise with modern ETL/ELT and data integration platforms (Informatica, Talend, Matillion, Fivetran, AWS Glue, Azure Data Factory)
  • Proven ability to build scalable, version-controlled data pipelines with error handling, incremental loading, and Change Data Capture (CDC)
  • Strong working knowledge of at least one major cloud provider (AWS, Azure, or GCP), including core infrastructure, managed data services, and security configurations
  • Experience with enterprise data warehouse and lakehouse platforms (Snowflake, Databricks, BigQuery, Redshift, Synapse Analytics, Delta Lake)
Job Responsibility:
  • Migration Execution & Cloud Architecture: Lead end-to-end delivery of enterprise data migrations from corporate systems (SAP, Oracle, Epic ERP) to target cloud data platforms, including the design of cloud landing zones, data governance frameworks, and system rationalization strategies. Establish migration compliance controls, automated rollback procedures, and operational readiness gates while owning full technical accountability for 12–18+ month migration roadmaps
  • Data Pipeline Engineering & Transformation: Build production-grade data connectors to SAP (RFC, IDoc, BAPI, OData), Oracle (AQ, GoldenGate, APIs), and SQL/non-relational sources. Develop ETL/ELT pipelines with LLM-enabled transformation logic, multi-layer validation and reconciliation frameworks, and optimized throughput for datasets scaling from tens of millions to billions of records with built-in CDC and incremental loading (a minimal incremental-load sketch follows this list)
  • Ontology Layer Development & Schema Automation: Construct semantic ontology layers translating raw ERP structures into business-consumable objects (Customer, Order, Invoice, Product, Vendor, Asset). Deploy automated schema mapping agents for source-to-target analysis and transformation logic generation. Build unified master data models with row/column-level security, cross-system lineage tracking, and AI-ready semantic structures
  • Application & Workflow Delivery: Build operational dashboards, migration control centers, and agent-driven workflows for automated validation, exception handling, and anomaly detection using low-code platform tools. Generate TypeScript/Python SDKs for custom integrations and deliver real-time monitoring and self-service interfaces for migration progress, data quality KPIs, and compliance tracking
  • Multi-System Consolidation & Master Data Management: Lead consolidation of 5–15+ fragmented ERP instances into standardized master data models. Resolve complex entity resolution challenges including customer matching, product harmonization, and chart of accounts unification. Establish golden record frameworks, data quality scorecards, survivorship rules, and data stewardship workflows for post-migration governance
  • Client Engagement, Discovery & Modernization Advisory: Serve as primary technical advisor to C-suite and enterprise architecture stakeholders across all engagement phases. Deploy discovery agents to analyze legacy data estates, conduct assessment workshops, facilitate solution design sessions, and deliver executive briefings, go/no-go readiness assessments, and prioritized modernization roadmaps
  • Knowledge Transfer, Enablement & IP Development: Build reusable migration accelerators, playbooks, and reference architectures that scale across engagements. Lead knowledge transfer to upskill client teams for post-migration ownership and collaborate with internal product and sales engineering teams to feed field insights back into platform development and delivery methodology
  • Leadership & Executive Engagement: Operate autonomously in ambiguous, high-stakes client environments, driving outcomes with minimal oversight; translate deeply technical concepts into clear, business-level narratives for C-suite audiences through executive briefings and stakeholder communications; navigate organizational complexity, competing stakeholder priorities, and enterprise change management dynamics to maintain momentum across multi-workstream engagements
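As a minimal sketch of the incremental-loading pattern referenced in the pipeline engineering item above, the snippet below does a watermark-based load, using sqlite3 purely as a stand-in for any source/target pair; table and column names are illustrative assumptions.

# Watermark-based incremental load sketch; sqlite3 is a stand-in and the
# orders schema (order_id, amount, updated_at) is assumed.
import sqlite3

def incremental_load(src: sqlite3.Connection, tgt: sqlite3.Connection) -> int:
    """Copy only rows changed since the target's high-watermark."""
    (watermark,) = tgt.execute(
        "select coalesce(max(updated_at), '1970-01-01') from orders"
    ).fetchone()
    rows = src.execute(
        "select order_id, amount, updated_at from orders where updated_at > ?",
        (watermark,),
    ).fetchall()
    # Upsert changed rows so reruns stay idempotent.
    tgt.executemany("insert or replace into orders values (?, ?, ?)", rows)
    tgt.commit()
    return len(rows)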

Worldwide Partner Solutions Architect, Data and AI GTM

We are looking for a strong Partner Solution Architect with a Data & AI backgrou...
Location:
United States, San Francisco; Atlanta; Herndon; Seattle
Salary:
131300.00 - 204300.00 USD / Year
Amazon Pforzheim GmbH
Expiration Date:
Until further notice
Requirements:
  • 2+ years of design, implementation, or consulting in applications and infrastructures experience
  • 4+ years of specific technology domain areas (e.g. software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics) experience
  • 5+ years of IT development or implementation/consulting in the software or Internet industries experience
  • 5+ years of working with Data & AI related technologies, including, but not limited to, AI/ML, GenAI, Analytics, Database, and/or Storage experience
Job Responsibility:
  • Partner Advisor: Collaborate with regional teams to enable partners and customers in adopting AWS AI and data services
  • Devise Strategy: Partner with regional teams to define and execute engagement strategies for autonomous agent implementations, generative AI adoption, analytics modernization, database migrations, and streaming data architectures across global territories and key partners
  • Product Collaboration: Stay closely connected to AWS service teams, working with regional partner teams to launch new capabilities across our technology portfolio, enabling partners to quickly adopt innovations in AI agents, generative AI, ML, analytics, databases, and streaming services
  • Thought Leadership: Provide thought leadership on AI and data solutions, including autonomous agent architectures, generative AI use cases, analytics modernization, database migrations, and streaming data patterns
  • Community Engagement: Drive best practices across regions, participate in, and contribute to the worldwide AWS technical community of Solutions Architects and Consultants focused on our AI and data technology stack
  • Partner Evangelism: Evangelize partner value propositions internally across global AWS teams and externally with customers, highlighting implementation paths for agentic AI solutions, generative AI applications, data stack modernization, analytics workloads, database migrations, and streaming architectures
  • Partner Enablement: Drive partner solutions and go-to-market strategies across regions. Enable partners to build differentiated offerings for agentic AI implementations, generative AI solutions, data analytics transformation, database modernization, and streaming data services
  • Architectural Guidance: Provide architectural guidance for successful partner engagements globally, helping them leverage AWS services to win and deliver complex AI and data projects worldwide
What we offer:
  • health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and option for Supplemental life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, Adoption and Surrogacy Reimbursement coverage)
  • 401(k) matching
  • paid time off
  • parental leave
  • sign-on payments
  • restricted stock units (RSUs)