Sr. Data Engineer (Databricks)

Myticas Consulting

Location:
United States, Lansing

Contract Type:
Not provided

Salary:
Not provided

Job Description:

The position is responsible for providing ongoing maintenance and support of the Michigan Disease Surveillance System (MDSS). MDSS is a complex application that supports communicable disease surveillance, registries, and case management systems critical to effective responses to public health emergencies and to reducing the burden of communicable diseases. MDSS is undergoing modernization to enhance the stability and functionality of the system, with phase 1 already completed. The resource is integral to developing, maintaining, and enhancing MDHHS’ MDSS phase 1: ensuring automated processes are functioning, streamlining critical business processes, maintaining data integrity, meeting SEM/SUITE compliance, and securing the application. The resource also performs as a technical lead and provides technical guidance to the other developers in the department. As a technical lead, the resource participates in a variety of analytical assignments that provide for the enhancement, integration, maintenance, and implementation of projects. The resource will also provide technical oversight to other developers on the team who support other critical applications. Not having a resource on staff will lead to MDHHS failing to maintain, enhance, and support the modernized MDSS, which can lead to errors causing application outages and data integrity issues, and can eventually lead to incorrect processing and reporting of patient information.

Job Responsibility:

  • Lead the design and development of scalable and high-performance solutions using AWS services
  • Experience with Databricks, Elasticsearch, Kibana, S3
  • Experience with Extract, Transform, and Load (ETL) processes and data pipelines
  • Write clean, maintainable, and efficient code in Python/Scala
  • Experience with AWS Cloud-based Application Development
  • Experience in Electronic Health Records (EHR) HL7 solutions
  • Implement and manage the Elasticsearch engine for efficient data retrieval and analysis
  • Experience with data warehousing, data visualization tools, and data integrity
  • Execute full software development life cycle (SDLC) including experience in gathering requirements and writing functional/technical specifications for complex projects
  • Excellent knowledge of designing both logical and physical database models
  • Develop database objects, including stored procedures and functions
  • Extensive knowledge of source control tools such as Git
  • Develop software design documents and work with stakeholders for review and approval
  • Exposure to flowcharts, screen layouts and documentation to ensure logical flow of the system requirements
  • Experience working on large agile projects
  • Experience or knowledge of creating CI/CD pipelines using Azure DevOps

Requirements:

  • 12+ years developing complex database systems
  • 8+ years Databricks
  • 8+ years using Elasticsearch, Kibana
  • 8+ years using Python/Scala
  • 8+ years Oracle
  • 5+ years’ experience with Extract, Transform, and Load (ETL) processes and developing Data Pipelines
  • 5+ years’ experience with AWS
  • 5+ years’ experience with data warehousing, data visualization tools, and data integrity
  • 5+ years using CMM/CMMI Level 3 methods and practices
  • 5+ years implementing agile development processes, including test-driven development

Nice to have:

3+ years’ experience or knowledge of creating CI/CD pipelines using Azure DevOps

Additional Information:

Job Posted:
March 19, 2026

Work Type:
Hybrid work

Similar Jobs for Sr. Data Engineer (Databricks)

Sr Data Engineer

(Locals or Nearby resources only). You will work with technologies that include ...
Location:
United States, Glendale
Salary:
Not provided
Enormous Enterprise
Expiration Date:
Until further notice

Requirements
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, Big Query)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs OLAP environments
  • Candidates must work W2, no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, TN-Visa
  • Airflow
Job Responsibility
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Sr Data Engineer

Resource Informatics Group, Inc. is actively seeking a skilled Senior Data Engin...
Location:
United States, Irving
Salary:
Not provided
Resource Informatics Group
Expiration Date:
Until further notice

Requirements
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields
  • Strong expertise in data engineering and cloud-based solutions
  • 6+ years of experience in data engineering, architecture, and implementation of large-scale data solutions
  • Proficiency in designing and implementing data models, data structures, and algorithms
  • Advanced knowledge of SQL and NoSQL databases
  • Demonstrated expertise in optimizing data pipelines and improving data reliability, efficiency, and quality
  • Excellent problem-solving capabilities with a keen attention to detail
  • Strong communication and collaboration skills, with the ability to work effectively across diverse teams
  • Relevant certifications in cloud technologies (Azure, AWS, or GCP) advantageous
  • Master’s in Data Science or Computer Science or foreign equivalent, plus 6+ years of experience, OR Bachelor’s in Computer Science, Information Technology, or Electronics and Communication Engineering or foreign equivalent
Job Responsibility
  • Develop and execute ETL processes for data extraction, transformation, and loading into warehouses and data lakes
  • Architect data warehousing solutions using Azure Synapse Analytics for efficient querying and reporting
  • Optimize query performance, data processing speed, and resource utilization within Azure environments
  • Construct seamless data pipelines across Azure services utilizing Azure Data Factory, Databricks, and SQL Server Integration Services
  • Collaborate with stakeholders, including data scientists and analysts, to understand data requirements and deliver effective solutions
  • Manage large data volumes leveraging the Hadoop ecosystem for diverse source collection and loading
  • Design, maintain, and optimize data processing jobs using Hadoop MapReduce, Spark, and Hive, with coding in Java or Python for custom applications
  • Monitor job and cluster performance using tools like Ambari and custom monitoring scripts, scaling and maintaining Hadoop clusters and Azure data services
  • Ensure adherence to data security measures and governance standards
  • Integrate cross-cloud data with AWS and GCP services
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering tea...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice

Requirements
  • A BS in Computer Science or equivalent experience
  • 7+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms
  • Experience with Databricks, Spark, Hive, Airflow and other streaming technologies to process incredible volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility
  • Help our stakeholder teams ingest data faster into our data lake
  • Find ways to make our data pipelines more efficient
  • Come up with ideas to help instigate self-serve data engineering within the company
  • Apply your strong technical experience building highly reliable services on managing and orchestrating a multi-petabyte scale data lake
  • Take vague requirements and transform them into solid solutions
  • Solve challenging problems, where creativity is as crucial as your ability to write code and test cases
What we offer
  • Health and wellbeing resources
  • Paid volunteer days
  • Fulltime

Sr. Solutions Engineer

At Databricks, our core principles are at the heart of everything we do; creatin...
Location:
South Korea, Seoul
Salary:
Not provided
Databricks
Expiration Date:
Until further notice

Requirements
  • Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions
  • Develop customer relationships and build internal partnerships with account executives and teams
  • Prior experience with coding in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Spark
  • Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s)
  • Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences requiring an ability to context switch in levels of technical depth
  • Native level in Korean is required, and proficiency in English is a plus
Job Responsibility
  • Form successful relationships with clients throughout your assigned territory, providing technical and business value to Databricks customers in collaboration with Account Executives
  • Operate as an expert in big data analytics to excite customers about Databricks. You will develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform
  • Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups
  • Grow your knowledge and expertise to the level of a technical and/or industry specialist

Sr. Delivery Solutions Architect

At Databricks we are on a mission to empower our customers to solve the world's ...
Location:
South Korea, Seoul
Salary:
Not provided
Databricks
Expiration Date:
Until further notice

Requirements
  • 8+ years in a customer-facing pre-sales, technical architecture, customer success, or consulting role
  • Experience understanding architecture related distributed data systems, specifically within one of the following: Data Engineering technologies (e.g. Spark, Hadoop, Kafka)
  • Data Warehousing (e.g. SQL, OLTP/OLAP/DSS)
  • Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO)
  • Comfortable managing multiple projects at once, and engaging a virtual team of subject matter experts
  • Influencing and leading teams - especially without having direct reporting line responsibility
  • Executive stakeholder management: experience in effectively engaging and influencing a variety of audiences at all levels of an organization, with particular success in building and maintaining strong CxO-level relationships
  • Executive escalation management - experience in resolving complex and critical escalation with senior customer and internal executives
  • Strategic Management Consulting - experience of conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects
  • Building and steering to a value case - business value consulting and realization
Job Responsibility
  • Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritized customers
  • Own the Post-Technical Win technical account strategy and investment plan for the majority of Databricks Use Cases within our most strategic accounts
  • Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty/ambiguity and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks
  • Be the first point of contact for any technical issues or questions related to production/go live status of agreed upon Use Cases within an account
  • Leverage both Shared Services of User Education, Onboarding/Technical Services and Support resources, along with escalating to Level 400/500 technical experts to execute on the right tasks that are beyond your scope of activities or expertise
  • Create, own and execute a PoV as to how key use cases can be accelerated into production, bringing EM/PM in to prepare Professional Services proposals
  • Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs
  • Build and maintain an executive level as well as a detailed programme level success plan that covers all activities of Customer, PS, Partner, SSA, Product Specialist, SA
  • Proactively provide internal and external updates - KPI reporting on the status of consumption and customer health, covering investment status, key risks, product adoption and use case progression - to your Technical GM
  • Development of reusable and scalable assets and mentorship of junior team members to establish the DSA team

Sr. Staff ML Platform Engineer

Machine learning is the crucial enabler for every financial service that EarnIn ...
Location:
United States, Mountain View
Salary:
360000.00 - 440000.00 USD / Year
EarnIn
Expiration Date:
Until further notice

Requirements
  • Bachelor's or Master’s degree in Computer Science, Engineering, or a related field
  • 8+ years of industry machine learning experience and excellent software engineering skills
  • Strong programming skills in Python, with familiarity in ML frameworks such as TensorFlow or PyTorch
  • Experience with ML cloud platforms such as AWS Sagemaker, Databricks, or GCP Vertex AI
  • Familiarity with data pipelines and workflow management tools
  • Strong communication and collaboration skills
  • Passion for learning and staying updated with the latest industry trends in machine learning and platform engineering
Job Responsibility
  • Design, build, and maintain a robust ML platform and tooling ecosystem that supports the entire machine learning lifecycle, from experimentation to production
  • Lead and mentor a team of ML engineers, deeply understanding their workflows to streamline model training, deployment, and monitoring, while ensuring reproducibility and consistency of results
  • Drive scalability, reliability, and cost efficiency of the ML platform, balancing performance with ease of use for scientists and engineers
  • Evaluate and adopt emerging technologies to continually advance the organization’s machine learning capabilities and maintain a competitive edge
  • Champion operational excellence, setting a high bar for engineering quality, reliability, and automation
  • Act as a catalyst for innovation, spearheading step-change improvements that unlock new opportunities for growth and efficiency
What we offer
  • Equity and benefits
  • Fulltime

Sr Data Engineer

The Senior Data Engineer will lead data architecture and modernization initiativ...
Location:
India, Chennai
Salary:
Not provided
NTT DATA
Expiration Date:
Until further notice

Requirements
  • Minimum of 7 years of experience in data engineering and related fields
  • Strong technical skills in data modeling and SQL
  • Leadership capabilities to drive cross-functional collaboration
  • Skills in Azure, Databricks, Data Modeling, Team Leadership, Client Interviews, SQL
Job Responsibility
  • Engage heavily with business users across North America and Europe, facilitating workshops and data discovery sessions
  • Drive consensus on business rules, data definitions, and data sources, especially where regional processes differ
  • Serve as the architectural thought leader enabling teams to transition from manual, inconsistent processes to standardized, modernized workflows
  • Partner closely with business analysts, data analysts, product owners, and engineering teams across multiple geographies
  • Architect a unified master stitched data model to replace downstream reliance on Varicent for data assembly
  • Lead the re‑architecture of compensation data processing—including internal and external compensation flows—into a scalable, cloud‑native Azure environment
  • Define patterns, frameworks, and integration strategies across Azure services (Data Factory, Databricks, Data Lake, SQL, etc.)
  • Evaluate and evolve the use of rules engines/ODM/Drools to externalize and modernize embedded business logic currently locked in application code
  • Guide decisions to shift logic and data ownership into enterprise‑owned systems rather than third‑party tools
  • Analyze current‑state processes (38 in NA, 9 in Europe) and identify opportunities for re‑engineering, automation, and consolidation

Sr. Associate Data Engineer

In this vital role you will be responsible for the development and implementatio...
Location:
India, Hyderabad
Salary:
Not provided
Amgen
Expiration Date:
Until further notice

Requirements
  • Bachelor’s degree in Computer Science, Information Technology, or a related field with 5+ years of experience
  • Mastery with at least one programming language (Python, Scala, or similar)
  • Experience with data engineering platforms like Databricks
  • Experience with vibe coding using large language models for production systems
  • Broad interest in various Amgen preferred platforms/tools
  • Eagerness to learn and grow in a data engineering environment
  • Ability to work well within a team and communicate effectively
  • Experienced with data modelling and structuring
  • Experienced working with ETL orchestration technologies
  • Experienced with software engineering best-practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven etc.), automated unit testing, and DevOps
Job Responsibility
  • Design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
  • Deliver data pipeline projects from development to deployment, managing timelines and risks
  • Ensure data quality and integrity through meticulous testing and monitoring
  • Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions
  • Work closely with product team, and key collaborators to understand data requirements
  • Adhere to data engineering industry standards and best practices
  • Experience developing in an Agile development environment, and comfortable with Agile terminology and ceremonies
  • Familiarity with code versioning using Git and code migration tools
  • Familiarity with JIRA
  • Stay up to date with the latest data technologies and trends