
Sr. Data Engineer - Python Developer


Myticas Consulting


Location:
United States, Springfield



Contract Type:
Not provided


Salary:
52.00 - 54.00 USD / Hour

Job Description:

Seeking a hands-on Senior Data Engineer (ETL / Python Developer) to support the Enterprise Data Warehouse (EDW) and Analytics Program. This role plays a critical part in designing, developing, and maintaining scalable data ingestion and transformation pipelines that support Medicaid analytics, federal reporting, and enterprise decision support. The ideal candidate brings strong ETL and Python engineering expertise, experience working with large healthcare datasets, and the ability to operate effectively in a regulated, audit-sensitive environment. This is a delivery-focused role requiring close collaboration with architects, analysts, QA, PMs, SMEs, developers, and reporting BI teams across both legacy and cloud-based platforms.

Job Responsibility:

  • Design, develop, and maintain enterprise ETL pipelines using Azure Data Factory (ADF), Informatica PowerCenter, and Python-based frameworks
  • Build and optimize scalable data processing solutions using Python, Spark, and Databricks
  • Support Medicaid analytics and federal reporting initiatives (e.g., T-MSIS, PERM, MARS, Quality of Care)
  • Develop robust data validation, reconciliation, and audit-traceable data pipelines
  • Write and optimize SQL and stored procedures across relational platforms such as Snowflake, Oracle, and SQL Server
  • Participate in cloud migration and modernization initiatives within Azure-based architectures
  • Collaborate with analysts, QA, and reporting teams to ensure data quality, accuracy, and timeliness
  • Follow data engineering best practices for performance, reliability, reusability, and security
  • Support production operations, incident resolution, and root-cause analysis
  • Participate in code reviews, source control, and CI/CD processes using Azure DevOps and GitHub

Requirements:

  • 5+ years of data engineering experience with a focus on enterprise data warehousing
  • 5+ years of hands-on ETL development using Informatica PowerCenter, Azure Data Factory, or similar tools
  • 5+ years of Python development for data engineering and automation
  • 3+ years of experience with Spark-based processing frameworks (Databricks or equivalent)
  • Strong SQL expertise and experience with relational databases (such as Teradata, Snowflake, Oracle, SQL Server)
  • Experience with source control and DevOps practices (Azure DevOps, GitHub, CI/CD)
  • Bachelor's degree or higher in Computer Science, Engineering, Analytics, or a related field
  • Strong analytical, problem-solving, and troubleshooting skills

Nice to have:

  • Experience supporting State Medicaid EDW or MMIS analytics environments
  • Healthcare or public-sector analytics experience (Medicaid / Medicare preferred)
  • Data modeling experience in enterprise data warehouse environments
  • Scripting experience (PowerShell, Bash) for automation and orchestration
  • Experience designing or consuming APIs (REST) within data platforms
  • Familiarity with data quality frameworks, reconciliation, and audit support
  • Azure certifications related to data engineering or analytics

Additional Information:

Job Posted:
May 05, 2026

Employment Type:
Fulltime
Work Type:
Remote work


Similar Jobs for Sr. Data Engineer - Python Developer

Sr Engineer, Data

The Sr Data Engineer designs and develops data architectures in on-premise, clou...
Location:
United States, Overland Park
Salary:
105100.00 - 189600.00 USD / Year
T-Mobile
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Engineering, Computer Science, a related subject area, or equivalent experience
  • 5+ years developing cloud solutions using data services
  • Experience with cloud platforms (Amazon Web Services, Azure, or Google Cloud)
  • Hands-on development using and migrating data to cloud platforms
  • Experience in SQL, NoSQL, and/or relational database design and development
  • Advanced knowledge and experience in building complex data pipelines with Python; experience in languages such as SQL, DAX, Python, Java, Scala, and/or Go
Job Responsibility:
  • Develop data engineering solutions, including data pipelines, visualization and analytical tools
  • Design and develop data architectures in on-premise, cloud and hybrid platforms
  • Data wrangling of heterogeneous data, exploration and discovery in pursuit of new business insights
  • Actively contribute to the team’s knowledge and drive new capabilities forward
  • Mentor other team members in their efforts to build data engineering skillsets
  • Assist team management in defining projects, including helping estimate, plan and scope work
  • Prepare and contribute to presentations required by management
What we offer:
  • Competitive base salary and compensation package
  • Annual stock grant
  • Employee stock purchase plan
  • 401(k)
  • Access to free, year-round money coaches
  • Medical, dental and vision insurance
  • Flexible spending account
  • Paid time off
  • Up to 12 paid holidays
  • Paid parental and family leave

Sr. Data Engineer

We are looking for a skilled Sr. Data Engineer to join our team in Oklahoma City...
Location:
United States, Oklahoma City
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice
Requirements:
  • Proven experience with Snowflake data warehousing and schema design
  • Proficiency in ETL tools such as Matillion or similar platforms
  • Strong knowledge of Python and PowerShell for data automation
  • Experience working with Microsoft SQL Server and related technologies
  • Familiarity with cloud technologies, particularly AWS
  • Understanding of data visualization and analytics tools
  • Background in working with big data technologies such as Apache Kafka, Hadoop, Spark, or Pig
  • Ability to design and implement APIs for data integration and management
Job Responsibility:
  • Design, implement, and maintain Snowflake data warehousing solutions to support business needs
  • Assist in the migration of in-house data to Snowflake, ensuring a seamless transition
  • Develop data pipelines and workflows using tools such as Matillion or equivalent ETL solutions
  • Collaborate with teams to optimize and manage the existing data warehouse built on Microsoft SQL Server
  • Utilize Python and PowerShell to automate data processes and enhance system efficiency
  • Partner with the implementation team to shadow and learn best practices for Snowflake deployment
  • Ensure data integrity, scalability, and security across all data engineering processes
  • Provide insights into data visualization and analytics to support decision-making
  • Work with cloud technologies, including AWS, to enhance data storage and accessibility
  • Implement and manage APIs to enable seamless data integration and sharing
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in 401(k) plan
  • Access to competitive compensation and free online training

Sr. Data Engineer - Snowflake

Data Ideology is seeking a Sr. Snowflake Data Engineer to join our growing team ...
Location:
Salary:
Not provided
Data Ideology
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience in data engineering, data warehousing, or data architecture
  • 3+ years of hands-on Snowflake experience (performance tuning, data sharing, Snowpark, Snowpipe, etc.)
  • Strong SQL and Python skills, with production experience using dbt
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and modern data tooling (Airflow, Fivetran, Power BI, Looker, Informatica, etc.)
  • Prior experience in a consulting or client-facing delivery role
  • Excellent communication skills, with the ability to collaborate across technical and business stakeholders
  • SnowPro Core Certification required (or willingness to obtain upon hire); advanced Snowflake certifications preferred
Job Responsibility:
  • Design and build scalable, secure, and cost-effective data solutions in Snowflake
  • Develop and optimize data pipelines using tools such as dbt, Python, CloverDX, and cloud-native services
  • Participate in discovery sessions with clients to gather requirements and translate them into solution designs and project plans
  • Collaborate with engagement managers and account teams to help scope work and provide technical input for Statements of Work (SOWs)
  • Serve as a Snowflake subject matter expert, guiding best practices in performance tuning, cost optimization, access control, and workload management
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake
  • Integrate Snowflake with BI tools, governance platforms, and AI/ML frameworks
  • Contribute to internal accelerators, frameworks, and proofs of concept
  • Mentor junior engineers and support knowledge sharing across the team
What we offer:
  • Flexible Time Off Policy
  • Eligibility for Health Benefits
  • Retirement Plan with Company Match
  • Training and Certification Reimbursement
  • Utilization Based Incentive Program
  • Commission Incentive Program
  • Referral Bonuses
  • Work from Home

Sr Data Engineer

(Locals or Nearby resources only). You will work with technologies that include ...
Location:
United States, Glendale
Salary:
Not provided
Enormous Enterprise
Expiration Date:
Until further notice
Requirements:
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, Big Query)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs OLAP environments
  • Candidates must work on W2; no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, TN-Visa
Job Responsibility:
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer:
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Sr Data Engineer

Resource Informatics Group, Inc. is actively seeking a skilled Senior Data Engin...
Location:
United States, Irving
Salary:
Not provided
Resource Informatics Group
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields
  • Strong expertise in data engineering and cloud-based solutions
  • 6+ years of experience in data engineering, architecture, and implementation of large-scale data solutions
  • Proficiency in designing and implementing data models, data structures, and algorithms
  • Advanced knowledge of SQL and NoSQL databases
  • Demonstrated expertise in optimizing data pipelines and improving data reliability, efficiency, and quality
  • Excellent problem-solving capabilities with a keen attention to detail
  • Strong communication and collaboration skills, with the ability to work effectively across diverse teams
  • Relevant certifications in cloud technologies (Azure, AWS, or GCP) advantageous
  • Master’s in Data Science or Computer Science or foreign equivalent, plus 6+ years of experience, OR Bachelor’s in Computer Science, Information Technology, or Electronics and Communication Engineering or foreign equivalent
Job Responsibility:
  • Develop and execute ETL processes for data extraction, transformation, and loading into warehouses and data lakes
  • Architect data warehousing solutions using Azure Synapse Analytics for efficient querying and reporting
  • Optimize query performance, data processing speed, and resource utilization within Azure environments
  • Construct seamless data pipelines across Azure services utilizing Azure Data Factory, Databricks, and SQL Server Integration Services
  • Collaborate with stakeholders, including data scientists and analysts, to understand data requirements and deliver effective solutions
  • Manage large data volumes leveraging the Hadoop ecosystem for diverse source collection and loading
  • Design, maintain, and optimize data processing jobs using Hadoop MapReduce, Spark, and Hive, with coding in Java or Python for custom applications
  • Monitor job and cluster performance using tools like Ambari and custom monitoring scripts, scaling and maintaining Hadoop clusters and Azure data services
  • Ensure adherence to data security measures and governance standards
  • Integrate cross-cloud data with AWS and GCP services

Sr Application Data Engineer

The Applications Development Senior Programmer Analyst will contribute to applic...
Location:
India, Pune; Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 5-8 years of relevant experience
  • Experience in systems analysis and programming of software applications
  • Experience in managing and implementing successful projects
  • Working knowledge of consulting/project management techniques/methods
  • Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
  • 10+ years of application/software development/maintenance
  • Banking domain experience
  • 8+ years of experience on Big Data technologies like Apache Spark, Hive, Hadoop
  • Proficiency in ETL technologies like Ab Initio, DataStage, Informatica
  • Strong technical knowledge of Apache Spark, Hive, SQL, Hadoop ecosystem, UNIX/Python scripting, Oracle/DB2
Job Responsibility:
  • Conduct feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and system implementation
  • Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation
  • Provide user and operational support on applications to business users
  • Analyze complex problems and make evaluative judgments
  • Recommend and develop security measures post-implementation
  • Consult with users/clients on issues and recommend advanced programming solutions
  • Install and assist customer exposure systems
  • Define operating standards and processes
  • Serve as advisor or coach to new or lower-level analysts
  • Exercise judgment and autonomy

Sr. Software Development Engineer

You will safeguard the quality of our AI and GenAI features by evaluating model ...
Location:
India, Hyderabad
Salary:
Not provided
Highspot
Expiration Date:
Until further notice
Requirements:
  • 4+ years of experience as a Software Development Engineer in AI/ML systems
  • Strong coding skills in Python (evaluation pipelines, data processing, metrics computation)
  • Hands-on experience with evaluation frameworks (Ragas or equivalent)
  • Knowledge of vector embeddings, similarity search, and RAG evaluation
  • Familiarity with evaluation metrics (precision, recall, F1, relevance, hallucination detection)
  • Understanding of LLM-as-a-judge evaluation approaches
  • Strong analytical and problem-solving skills; ability to combine human judgment with automated evaluations
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or related field
  • Strong English written and verbal communication skills
Job Responsibility:
  • Evaluation Frameworks – Develop reusable, automated evaluation pipelines using frameworks such as Ragas; integrate LLM-as-a-judge methods for scalable assessments
  • Golden Datasets – Build and maintain high-quality benchmark datasets in collaboration with subject matter experts
  • AI Output Validation – Evaluate results across text, documents, audio, and video, using both automated metrics and human-in-the-loop judgment
  • Metric Evaluation – Implement and track metrics such as precision, recall, F1 score, relevance scoring, and hallucination penalties
  • RAG & Embeddings – Design and evaluate retrieval-augmented generation (RAG) pipelines, vector embedding similarity, and semantic search quality
  • Error & Bias Analysis – Investigate recurring errors, biases, and inconsistencies in model outputs; propose solutions
  • Framework & Tooling Development – Build tools that enable large-scale model evaluation across hundreds of AI agents
  • Cross-Functional Collaboration – Partner with ML engineers, product managers, and QA peers to integrate evaluation frameworks into product pipelines

Senior Data Engineer

As a Senior Data Engineer in the Finance-DE team, you will have the opportunity ...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources