CrawlJobs

Data Engineer + Scientist Hybrid

Spoak Decor

Location:

Contract Type:
Not provided


Salary:

115000.00 - 150000.00 USD / Year

Job Description:

Spoak is looking for a hybrid data engineer / data scientist to join us on our mission to build the world’s most loved interior design platform. As a company committed to using data to drive our business and roadmap, we need someone who can develop and maintain our data infrastructure and apply advanced analytics techniques to uncover insights that help us grow our business and achieve our mission.

Job Responsibility:

  • Design, build and maintain our data infrastructure, including ETL pipelines and databases
  • Develop and implement advanced analytics models and algorithms to uncover insights that can be used to optimize our products and customer experience
  • Work closely with product managers, designers, and engineers to identify data needs and build out new data-driven features
  • Develop and maintain data documentation, ensuring that our data is accurate, consistent, and well-documented
  • Participate in cross-functional projects and collaborate with other teams to share insights and knowledge

Requirements:

  • Bachelor's degree in computer science, statistics, mathematics or a related field
  • Strong knowledge of data engineering and data science concepts and techniques, including ETL, data warehousing, statistical modeling, machine learning, and data visualization
  • Proficiency in programming languages such as Python, R, or SQL
  • Experience with cloud platforms such as AWS or GCP
  • Ability to work collaboratively in a fast-paced, startup environment
  • Excellent communication skills and ability to explain technical concepts and insights to non-technical stakeholders

Nice to have:

  • Experience with data visualization tools such as Tableau or Power BI
  • Experience with distributed computing systems such as Hadoop or Spark
  • Experience with big data technologies such as Apache Kafka, BigQuery or Cassandra
  • Experience with containerization technologies such as Docker or Kubernetes
  • Experience with machine learning platforms like TensorFlow or PyTorch

What we offer:

  • To build an amazing company from scratch
  • To build tools that enable creativity
  • Remote-first team, EST hours
  • Medical, dental, and vision insurance
  • 401K
  • A four-day work week every other week
  • Flexible time off
  • Monthly virtual team events
  • A close-knit team

Additional Information:

Job Posted:
January 20, 2026

Work Type:
Remote work


Similar Jobs for Data Engineer + Scientist Hybrid

Data Engineer II

We’re looking for a Data Engineer II to join our Decisions & Insights team — the...
Location:
United States, San Francisco
Salary:
101250.00 - 162000.00 USD / Year
Axon
Expiration Date
Until further notice
Requirements:
  • 3+ years of experience as a Data Engineer, Analytics Engineer, or similar hybrid role
  • Advanced SQL skills with strong understanding of schema design and analytical database patterns
  • Strong Python experience for data manipulation, scripting, and automation
  • Strong analytical mindset, attention to detail, and a passion for turning complex datasets into clear insights
  • Solid Git experience for collaboration and version control
  • Experience integrating and analyzing data from AWS services (e.g., Redshift, S3, APIs)
Job Responsibility:
  • Design, develop, and maintain automated ETL/ELT pipelines and analytical models in Redshift using SQL, dbt, or SQLMesh
  • Build clean, structured datasets that enable fast, self-serve insights for PMs, analysts, and operational agents
  • Create dashboards and analytical tools (e.g., Tableau) that make insights intuitive and actionable
  • Optimize SQL queries, schemas, and table designs for high-volume analytics workloads
  • Partner with product managers, data scientists, and engineering teams to deliver analysis that supports product and operational decisions
  • Use Python to automate processing, improve pipeline reliability, and support advanced analytical workflows
  • Apply best practices for Git-based version control, testing, documentation, and data quality monitoring
  • Participate in design reviews for new analytical features and data systems
What we offer:
  • Competitive salary and 401k with employer match
  • Discretionary paid time off
  • Paid parental leave for all
  • Medical, Dental, Vision plans
  • Fitness Programs
  • Emotional & Mental Wellness support
  • Learning & Development programs
  • Snacks in our offices
Contract Type: Fulltime

Data Engineer, Solutions Architecture

We are seeking a talented Data Engineer to design, build, and maintain our data ...
Location:
United States, Scottsdale
Salary:
90000.00 - 120000.00 USD / Year
Clearway Energy
Expiration Date
Until further notice
Requirements:
  • 2-4 years of hands-on data engineering experience in production environments
  • Bachelor's degree in Computer Science, Engineering, or a related field
  • Proficiency in Dagster or Airflow for pipeline scheduling, dependency management, and workflow automation
  • Advanced-level Snowflake administration, including virtual warehouses, clustering, security, and cost optimization
  • Proficiency in dbt for data modeling, testing, documentation, and version control of analytical transformations
  • Strong Python and SQL skills for data processing and automation
  • 1-2+ years of experience with continuous integration and continuous deployment practices and tools (Git, GitHub Actions, GitLab CI, or similar)
  • Advanced SQL skills, database design principles, and experience with multiple database platforms
  • Proficiency in AWS/Azure/GCP data services, storage solutions (S3, Azure Blob, GCS), and infrastructure as code
  • Experience with APIs, streaming platforms (Kafka, Kinesis), and various data connectors and formats
Job Responsibility:
  • Design, deploy, and maintain scalable data infrastructure to support enterprise analytics and reporting needs
  • Manage Snowflake instances, including performance tuning, security configuration, and capacity planning for growing data volumes
  • Optimize query performance and resource utilization to control costs and improve processing speed
  • Build and orchestrate complex ETL/ELT workflows using Dagster to ensure reliable, automated data processing for asset management and energy trading
  • Develop robust data pipelines that handle high-volume, time-sensitive energy market data and asset generation and performance metrics
  • Implement workflow automation and dependency management for critical business operations
  • Develop and maintain dbt models to transform raw data into business-ready analytical datasets and dimensional models
  • Create efficient SQL-based transformations for complex energy market calculations and asset performance metrics
  • Support advanced analytics initiatives through proper data preparation and feature engineering
  • Implement comprehensive data validation, testing, and monitoring frameworks to ensure accuracy and consistency across all energy and financial data assets
What we offer:
  • generous PTO
  • medical, dental & vision care
  • HSAs with company contributions
  • health FSAs
  • dependent daycare FSAs
  • commuter benefits
  • relocation
  • a 401(k) plan with employer match
  • a variety of life & accident insurances
  • fertility programs
Contract Type: Fulltime

Junior Data Scientist

Aramark Sports + Entertainment is hiring a Junior Data Scientist - Oracle Park, ...
Location:
United States, San Francisco
Salary:
70000.00 - 95000.00 USD / Year
Aramark
Expiration Date
Until further notice
Requirements:
  • Must be legally authorized to work in the United States without the need for current or future employment-based sponsorship from Aramark
  • Bachelor’s degree in Mathematics, Statistics, Computer Science, Data Science, or a related field; equivalent practical experience may be considered
  • 1–3 years of experience in an analytical or data science role
  • Proficiency in data manipulation and transformation using Python or R
  • Familiarity with SQL for data querying and analysis
  • Knowledge of data science workflows including data cleaning, feature engineering, and predictive modeling
  • Effective organizational and time management skills, with the ability to manage multiple projects simultaneously
  • Solid understanding of statistics, experimental design, and core data science concepts
  • Strong communication skills with the ability to present findings and recommendations to technical and non-technical audiences
Job Responsibility:
  • Analyze consumer behavior at Oracle Park by leveraging purchasing and dining data to identify key customer segments. Present insights using statistical methods and engaging visualizations tailored to diverse stakeholder audiences
  • Evaluate operational performance by integrating data from labor tracking, Point of Sale (POS), and inventory systems. Identify inefficiencies and recommend actionable improvements to enhance venue operations
  • Conduct ad-hoc analyses to assess the effectiveness of short-term strategies. Collaborate with cross-functional teams to define success metrics and deliver timely, data-backed evaluations
  • Support the development of automated reporting workflows that deliver key performance metrics to stakeholders, including Oracle Park operations and the San Francisco Giants
  • Assist in building scalable data pipelines using Python, R, and SQL to streamline data access and support analytics and reporting initiatives
  • Perform machine learning experiments and model evaluation tasks under the guidance of the team’s Lead Data Scientist
What we offer:
  • medical, dental, vision, and work/life resources
  • retirement savings plans like 401(k)
  • paid days off such as parental leave and disability coverage
Contract Type: Fulltime

Data Engineer

Location:
Vietnam, Hà Nội
Salary:
Not provided
CMC Global Company Limited.
Expiration Date
Until further notice
Requirements:
  • Data Engineer with strong Hadoop / Spark / Talend experience
  • Experience building and operating large-scale data lakes and data warehouses
  • Experience with Hadoop ecosystem and big data tools, including Spark and Kafka
  • Experience with Master Data Management (MDM) tools and platforms such as Informatica MDM, Talend Data Catalog, Semarchy xDM, IBM PIM & IKC, or Profisee
  • Familiarity with MDM processes such as golden record creation, survivorship, reconciliation, enrichment, and quality
  • Experience in data governance, including data quality management, data profiling, data remediation, and automated data lineage
  • Experience with stream-processing systems including Spark-Streaming
  • Experience working with Cloud services using one or more Cloud providers such as Azure, GCP, or AWS
  • Experience with Delta Lake and Databricks
  • Advanced working experience with relational SQL and NoSQL databases, including Hive, HBase, and Postgres
Job Responsibility:
  • Create and manage a single master record for each business entity, ensuring data consistency, accuracy, and reliability
  • Implement data governance processes, including data quality management, data profiling, data remediation, and automated data lineage
  • Create and maintain multiple robust and high-performance data processing pipelines within Cloud, Private Data Centre, and Hybrid data ecosystems
  • Assemble large, complex data sets from a wide variety of data sources
  • Collaborate with Data Scientists, Machine Learning Engineers, Business Analysts, and Business users to derive actionable insights and reliable foresights into customer acquisition, operational efficiency, and other key business performance metrics
  • Develop, deploy, and maintain multiple microservices, REST APIs, and reporting services
  • Design and implement internal processes to automate manual workflows, optimize data delivery, and re-design infrastructure for greater scalability
  • Establish expertise in designing, analyzing, and troubleshooting large-scale distributed systems
  • Support and work with cross-functional teams in a dynamic environment
What we offer:
  • Attractive compensation package: 14-month salary scheme plus annual bonus and additional allowances
  • Annual bonus package tailored based on performance and contribution
  • Young, open, and dynamic working environment that promotes innovation and creativity
  • Ongoing learning and development with regular professional training and opportunities to enhance both technical and soft skills
  • Exposure to cutting-edge technologies and diverse real-world enterprise projects
  • Vibrant company culture with regular team-building activities, sports tournaments, arts events, Family Day, and more
  • Full compliance with Vietnamese labor laws, plus additional internal perks such as annual company trips, special holidays, and other corporate benefits
Contract Type: Fulltime

Senior Data Scientist

At Valtech, you’ll find an environment designed for continuous learning, meaning...
Location:
North Macedonia
Salary:
Not provided
Valtech
Expiration Date
Until further notice
Requirements:
  • Several years of experience as a Senior Data Scientist / ML Engineer / Software Engineer (or equivalent) with demonstrable productive ML systems
  • In-depth knowledge of ML (Supervised/Unsupervised, Sequences/Time Series, Anomaly Detection, Classification/Regression) and solid understanding of statistics/evaluation
  • First-principles thinking: you don’t just apply models; you can also derive them, question them, and combine them with domain knowledge (hybrid approaches)
  • Scientific curiosity paired with pragmatism: forming hypotheses, testing experimentally, delivering results
Job Responsibility:
  • Development and validation of models/algorithms for use cases such as: Gait analysis and movement pattern recognition (gait pattern, stability, deviations, trend analyses)
  • New features such as delirium, complex risk indicators, clinical "events"
  • Experiment Design & Measurability: Definition of metrics, offline evaluation, golden sets, reproducibility, performance/robustness
  • Feature Engineering & Representation Learning: Derive meaningful representations from radar data (incl. domain understanding)
  • Evaluation of new approaches for radar-based patient monitoring (classic ML, deep learning, probabilistic models, first principles, and hybrid methods)
What we offer:
  • Private health insurance
  • Education program with training and certification
  • Wellbeing program
  • Free beverages
  • Events
  • Competitive salary and 24 days of vacation
  • Challenging projects
  • Cool colleagues
  • Honest feedback

Senior Data Scientist

At Boeing, we innovate and collaborate to make the world a better place. We’re c...
Location:
United States, Seattle; Berkeley
Salary:
162350.00 - 234600.00 USD / Year
Boeing
Expiration Date
May 12, 2026
Requirements:
  • Ability to obtain a US Security Clearance for which the US Government requires US Citizenship
  • Bachelor’s degree or higher
  • 5+ years of experience with AI/ML technologies, frameworks, models and ensembles
  • 5+ years with container and container orchestration (Docker and Kubernetes)
  • 5+ years of experience with data engineering and data pipelines for on-prem, cloud, and hybrid data models and data warehouses
  • 5+ years of experience with software programming/scripting (such as Python, Unix/Linux type batch scripting, FORTRAN, C / C++)
Job Responsibility:
  • Define the strategy to build highly reliable and scalable ML and AI solutions that align with the organization’s business goals and objectives
  • Lead the creation and implementation of scalable, robust, and high-performance ML architectures including MLOps, AIOps leveraging cloud native services (AWS, Azure, GCP) and open-source frameworks
  • Design, build, and optimize machine learning models, ensuring accuracy, efficiency, and scalability
  • Partner with product managers, engineers, and business stakeholders to define problem statements, success metrics, and deployment requirements
  • Collaborate with data engineers, data architect, software developers, and DevOps teams to integrate ML models into production systems
  • Assess and recommend ML tools, frameworks, and platforms to deliver business value and foster innovation
  • Monitor and optimize ML models and systems for latency, throughput, and cost-efficiency in production
  • Ensure ML systems adhere to ethical guidelines, data privacy regulations, and industry standards
  • Design and development of Generative AI and AI use cases (LLMs, RAG, agentic AI, multi-model AI, fine-tuning, vector databases, and prompt engineering)
  • Lead organizational change for the adoption of new platforms, machine learning tools and analytics workflows
What we offer:
  • Competitive base pay and variable compensation opportunities
  • health insurance
  • flexible spending accounts
  • health savings accounts
  • retirement savings plans
  • life and disability insurance programs
  • paid and unpaid time away from work
  • generous company match to your 401(k)
  • industry-leading tuition assistance program pays your institution directly
  • fertility, adoption, and surrogacy benefits
Contract Type: Fulltime

Cloud Data Engineer

Join BIP – xTech, BIP's Centre of Excellence specializing in innovative consulti...
Location:
Albania, Tirana
Salary:
Not provided
Business Integration Partners
Expiration Date
Until further notice
Requirements:
  • Experience in the implementation and management of cloud‑based data platforms
  • Knowledge of one or more cloud platforms (e.g., Google GCP, Amazon AWS, Microsoft Azure) and their services supporting data management, processing, and analysis
  • Knowledge of key concepts of modern data platforms (e.g., Big Data, Data Lake, Data Warehouse, Data Virtualization, Data Mesh, Data Governance, etc.)
  • Experience in designing data models and creating/maintaining data pipelines for data extraction, transformation, and loading
  • Knowledge of one or more programming and data query languages (e.g., SQL, Java, Ruby, Python, R…)
  • In‑depth knowledge of one or more technologies for managing structured and unstructured databases (e.g., BigQuery, MongoDB, Hadoop, Redis, etc.)
Job Responsibility:
  • Working on Big Data or hybrid Data Platforms, on‑premises or cloud, implementing batch or real‑time processing pipelines, and performing transformation and handling of structured and unstructured data
  • Work autonomously through the phases of data acquisition, historization, cleansing, anonymization/encryption/masking, null-value and outlier handling, aggregation, structuring of unstructured data, creation of quality KPIs, and preparation of data for different users
  • Receiving Machine Learning models from Data Scientists, optimizing them for scalable execution—for example by applying computational parallelism—and integrating/scheduling them for efficient automated execution
  • Working with various technologies, chosen based on the specific project and client, favoring cloud‑native and globally scalable architectures
  • Provide Advisory support to select Engineering technologies within emerging Big Data architectures or to improve existing ecosystems
  • Build relationships with clients, vendors, and partners
  • Actively contribute to the Community through research activities, scouting, new solution concepts, and business development
Contract Type: Fulltime

Senior AI Project Manager

Seeking a Senior AI Project Manager to lead AI and data-driven initiatives withi...
Location:
United States, Los Angeles
Salary:
90.00 USD / Hour
Beacon Hill
Expiration Date
Until further notice
Requirements:
  • 7+ years of project or program management experience, including AI/ML or data-driven initiatives
  • Direct experience in Broker Dealer & Wealth Management environments (required)
  • Proven delivery experience in financial services or highly regulated environments
  • Hands-on experience managing large, complex programs with minimal supervision
  • Strong understanding of the AI/ML lifecycle: data pipelines, model training/testing, evaluation, deployment, and MLOps
  • Working knowledge of data quality, governance, bias, privacy, and ethical AI considerations
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and AI/engineering tools (e.g., Jira, Git, TensorFlow, PyTorch)
  • Define and manage scope, schedule, cost, risk, resources, and quality
  • Create and maintain detailed project plans, milestones, dependencies, and reporting
  • Lead cross-functional teams including data scientists, ML engineers, data engineers, and business stakeholders
Job Responsibility:
  • Lead AI and data-driven initiatives within a Broker Dealer & Wealth Management environment
  • Own delivery end-to-end, from project definition through deployment
  • Ensure alignment between business objectives, data teams, and regulatory requirements
Contract Type: Fulltime