
Contract Data Engineer

Robert Half

Location:
United States, Nashville

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Robert Half is seeking a Contract Data Engineer to support our client’s data and analytics initiatives. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that enable efficient data ingestion, transformation, and delivery. The ideal candidate has strong experience working with modern data platforms, cloud environments, and large-scale datasets.

Job Responsibility:

  • Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and deliver data from multiple sources
  • Develop and optimize data models, schemas, and warehouse structures to support analytics, reporting, and business intelligence needs
  • Work within cloud environments such as AWS, Azure, or GCP to deploy and manage data solutions
  • Design and support enterprise data warehouses using platforms such as Snowflake, Redshift, BigQuery, or Azure Synapse
  • Develop solutions using big data technologies such as Spark, Databricks, Kafka, and Hadoop when required
  • Tune queries, pipelines, and storage solutions for performance, scalability, and cost efficiency
  • Implement monitoring, validation, and alerting processes to ensure data accuracy, integrity, and availability
  • Work closely with Data Analysts, Data Scientists, Software Engineers, and business stakeholders to understand requirements and deliver data solutions
  • Maintain detailed documentation for pipelines, data flows, and system architecture

Requirements:

  • Proven experience as a Data Engineer or in a similar role
  • Strong proficiency in SQL and Python (or similar languages)
  • Experience with cloud platforms (AWS, Azure, GCP)
  • Hands-on experience with ETL/ELT tools (Airflow, dbt, Fivetran, Matillion, Glue, ADF, etc.)
  • Experience with data warehousing platforms such as Snowflake, Redshift, BigQuery, or Azure Synapse
  • Strong understanding of data modeling, performance tuning, and pipeline optimization
  • Experience with version control systems (Git) and agile development practices
  • Excellent problem-solving, analytical, and communication skills

Nice to have:

Familiarity with streaming and big data tools (Kafka, Spark, Databricks, Hadoop)

What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in the company 401(k) plan

Additional Information:

Job Posted:
March 01, 2026
