
Data Science Engineer

DevSavant Inc.

Location:

Contract Type:
Not provided

Salary:
Not provided

Job Description:

At DevSavant, we are a trusted technology partner specializing in Software Development, Data Engineering, AI/Machine Learning, Cloud Solutions, Automation Testing, and UI/UX Design. We deliver innovative, high-quality solutions with a focus on excellence and results. Our people are at the heart of everything we do, fostering a culture of growth and well-being. Join us and thrive in a supportive, success-driven environment.

We're looking for a talented Data Scientist with expert Python skills and experience processing large volumes of data to join our client's team. You'll play a key role in designing, building, and scaling the core data pipelines and ML systems that power our advanced analytics and machine learning models. You'll work closely with data scientists and engineers to create robust, efficient, and scalable systems. If you love solving complex technical problems and building production-ready data systems, and you want to make a big impact at a data-driven company, this job is for you!

Antenna, our client, is a remote-first company, and we are looking for candidates who can work during US business hours. You will report to the Data Science Lead.

Job Responsibility:

  • Design, develop, test, and maintain robust, scalable data pipelines using Python and large-scale data-processing frameworks (e.g., Spark, Dask, or similar on GCP)
  • Design and own key components of our ML systems, ensuring they are reliable, efficient, and scalable
  • Establish and manage MLOps practices, including CI/CD for machine learning models, model monitoring, and automated deployment strategies
  • Optimize and manage data-processing jobs on cloud platforms (GCP: Dataproc, BigQuery, Cloud Run, Cloud Build)
  • Work with data scientists to productionize machine learning models and integrate them with our data systems
  • Write thorough documentation for the system designs, code, and services you build and maintain
  • Troubleshoot complex issues in distributed data systems and ML pipelines
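The responsibilities above center on writing testable, production-grade transform code. As a purely illustrative sketch (the `Event` record and `clean_and_aggregate` function are hypothetical, not part of the role; a real pipeline would express this logic as a Spark or Dask job), a unit-testable per-batch transform might look like:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Event:
    user_id: str
    amount: float


def clean_and_aggregate(events):
    """Drop invalid rows, then total amount per user (sorted for determinism)."""
    totals = {}
    for e in events:
        if e.user_id and e.amount >= 0:  # basic data-quality filter
            totals[e.user_id] = totals.get(e.user_id, 0.0) + e.amount
    return dict(sorted(totals.items()))


if __name__ == "__main__":
    batch = [Event("a", 1.0), Event("a", 2.5), Event("", 9.9), Event("b", 4.0)]
    print(clean_and_aggregate(batch))  # {'a': 3.5, 'b': 4.0}
```

Keeping the transform a pure function, separate from the execution engine, is what makes this kind of pipeline code easy to unit-test before it runs at scale.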

Requirements:

  • 3-5+ years of software engineering experience, with a strong focus on data engineering, ML engineering, or building data-intensive applications
  • Expert in Python, with a strong grasp of object-oriented design and software system design, and experience building high-quality, testable production code
  • Strong hands-on experience with large-scale data-processing frameworks such as Apache Spark (PySpark), Dask, or similar
  • Solid experience with cloud platforms (GCP highly preferred), including deploying, operating, and scaling services (e.g., Docker, Cloud Run, GKE) and working with large-scale data systems (e.g., Dataproc, BigQuery)
  • Strong SQL skills and experience working with large, complex datasets
  • Deep understanding of machine learning concepts, the end-to-end model lifecycle, and MLOps principles
  • Excellent problem-solver, skilled at debugging complex issues in distributed systems and improving their performance and scalability
  • Able to explain complex technical ideas and system-design decisions clearly and effectively in English
  • Advanced English proficiency (B2-C1)
  • Excellent communication, teamwork, and consulting skills
  • Passionate about building robust, scalable systems and eager to mentor and collaborate with a team
  • Care deeply about code quality, system reliability, and good documentation

Nice to have:

  • Experience in or passion for the Subscription Economy, especially in media and entertainment
  • Deep knowledge of specific GCP services like Dataproc, Dataflow, Cloud Composer, Vertex AI, or Kubernetes Engine
  • Experience building and maintaining widely used Python libraries, or contributions to open-source projects
  • Advanced knowledge of MLOps tooling and workflow orchestration (e.g., Cloud Build, Cloud Run)

Additional Information:

Job Posted:
February 20, 2026

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Data Science Engineer

Data Science Engineer

The Applications Development Intermediate Programmer Analyst is an intermediate ...
Location: India, Chennai; Pune
Salary: Not provided
Company: Citi
Expiration Date: Until further notice

Requirements:
  • 4+ years of relevant experience in the Financial Service industry
  • Intermediate level experience in Applications Development role
  • Consistently demonstrates clear and concise written and verbal communication
  • Demonstrated problem-solving and decision-making skills
  • Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
  • Bachelor’s degree/University degree or equivalent experience
  • Experience in ML libraries such as PyTorch, TensorFlow, Pandas, NumPy
  • Strong experience in Python, Spark, SQL, Hive, Impala, Hadoop, Autosys
  • Data analysis and data wrangling skills when dealing with huge volumes (1B+ transactions)
  • Performance analysis, troubleshooting, and resolution, including the Cloudera Big Data (PySpark) ecosystem
Job Responsibility:
  • Utilize knowledge of applications development procedures and concepts
  • Identify and define necessary system enhancements
  • Analyze and interpret code
  • Contribute to applications systems analysis and programming activities
  • Work in a distributed global team environment
Employment Type: Full-time

Associate Digital Innovation Engineer - Data Science

We are looking for experienced data science professionals to be part of the data...
Location: India, Chennai
Salary: Not provided
Company: Buckman
Expiration Date: Until further notice

Requirements:
  • Bachelor’s or Master’s degree in Data Science/Data Analytics, Applied Mathematics, Statistics, Computer Science or related science and technical field
  • Experience in developing predictive and deep data analytics solutions and optimization scenarios involving expertise in data extraction, analysis, and visualization
  • Hands-on work experience with open-source data analytics toolsets such as R, Python, etc.
  • Proficiency with modern statistical modeling (regression, boosting trees, random forests, etc.), machine learning (text mining, neural network, NLP, etc.), optimization (linear optimization, nonlinear optimization, stochastic optimization, etc.) methodologies
  • Exposure to data analytics and visualization platforms such as SAS, Rapidminer, MATLAB, Minitab, PowerBI, KNIME, PowerApps, etc.
  • Experience in cloud computing (Azure, AWS, etc.), data analytics architecture (Azure Machine Learning, Datalake analytics, Workbench, Stream Analytics, etc.) and data engineering technologies (Datalake, CosmosDB, Hadoop, SQL and No-SQL databases, etc.)
  • A demonstrated record of success with a verifiable portfolio of data science problems tackled in different domains
  • Strong communications skills in English and ability to work with global stakeholders effectively
Job Responsibility:
  • Work closely with Buckman stakeholders to derive deep industry knowledge across leather, paper, water, and performance chemical industries
  • Help develop a data strategy for the company including collection of the right data, creation of the data science project portfolio, partnering with external providers and augmenting capabilities with additional internal hires
  • Communicate and develop relationships with key stakeholders and subject matter experts to tee up proof-of-concept projects that demonstrate how data science can solve problems in unique and novel ways
  • Work closely with a multi-disciplinary team of sensor scientists, software engineers, network engineers, mechanical/electrical engineers, and chemical engineers in the development and deployment of IoT solutions
Employment Type: Full-time

Sr. Data Engineer

We are looking for a Sr. Data Engineer to join our team.
Location:
Salary: Not provided
Company: Boston Data Pro
Expiration Date: Until further notice

Requirements:
  • Data Engineering: 8 years (Preferred)
  • Data Programming languages: 5 years (Preferred)
  • Data Developers: 5 years (Preferred)
Job Responsibility:
  • Designs and implements standardized data management procedures around data staging, data ingestion, data preparation, data provisioning, and data destruction
  • Ensures quality of technical solutions as data moves across multiple zones and environments
  • Provides insight into the changing data environment, data processing, data storage and utilization requirements for the company, and offer suggestions for solutions
  • Ensures managed analytic assets to support the company’s strategic goals by creating and verifying data acquisition requirements and strategy
  • Develops, constructs, tests, and maintains architectures
  • Aligns architecture with business requirements using programming languages and tools
  • Identifies ways to improve data reliability, efficiency, and quality
  • Conducts research for industry and business questions
  • Deploys sophisticated analytics programs, machine learning, and statistical methods to efficiently implement solutions
  • Prepares data for predictive and prescriptive modeling and finds hidden patterns using data


Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...
Location: United States, Chicago
Salary: 130000.00 USD / Year
Company: AKUNA CAPITAL
Expiration Date: Until further notice

Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including the ability to quickly and accurately respond and/or solve math and coding problems are essential functions of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure that data is processed and published in accordance with well-defined Service Level Agreements (SLA’s) pertaining to data quality, data availability and data correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits
Employment Type: Full-time

Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
Location: India, Hyderabad
Salary: Not provided
Company: NStarX
Expiration Date: Until further notice

Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM
Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams
What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles
Employment Type: Full-time

Data Engineer, Enterprise Data, Analytics and Innovation

Are you passionate about building robust data infrastructure and enabling innova...
Location: United States
Salary: 110000.00 - 125000.00 USD / Year
Company: Vaniam Group
Expiration Date: Until further notice

Requirements:
  • 5+ years of professional experience in data engineering, ETL, or related roles
  • Strong proficiency in Python and SQL for data engineering
  • Hands-on experience building and maintaining pipelines in a lakehouse or modern data platform
  • Practical understanding of Medallion architectures and layered data design
  • Familiarity with modern data stack tools, including: Spark or PySpark
  • Workflow orchestration (Airflow, dbt, or similar)
  • Testing and observability frameworks
  • Containers (Docker) and Git-based version control
  • Excellent communication skills, problem-solving mindset, and a collaborative approach
Job Responsibility:
  • Design, build, and operate reliable ETL and ELT pipelines in Python and SQL
  • Manage ingestion into Bronze, standardization and quality in Silver, and curated serving in Gold layers of our Medallion architecture
  • Maintain ingestion from transactional MySQL systems into Vaniam Core to keep production data flows seamless
  • Implement observability, data quality checks, and lineage tracking to ensure trust in all downstream datasets
  • Develop schemas, tables, and views optimized for analytics, APIs, and product use cases
  • Apply and enforce best practices for security, privacy, compliance, and access control, ensuring data integrity across sensitive healthcare domains
  • Maintain clear and consistent documentation for datasets, pipelines, and operating procedures
  • Lead the integration of third-party datasets, client-provided sources, and new product-generated data into Vaniam Core
  • Partner with product and innovation teams to build repeatable processes for onboarding new data streams
  • Ensure harmonization, normalization, and governance across varied data types (scientific, engagement, operational)
What we offer:
  • 100% remote environment with opportunities for local meet-ups
  • Positive, diverse, and supportive culture
  • Passionate about serving clients focused on Cancer and Blood diseases
  • Investment in you with opportunities for professional growth and personal development through Vaniam Group University
  • Health benefits – medical, dental, vision
  • Generous parental leave benefit
  • Focused on your financial future with a 401(k) Plan and company match
  • Work-Life Balance and Flexibility
  • Flexible Time Off policy for rest and relaxation
  • Volunteer Time Off for community involvement
Employment Type: Full-time

Data Engineer / Scientist

As a Data Engineer/Scientist at Actica, you will have the opportunity to design,...
Location: United Kingdom, London; Guildford; Bristol; M4 corridor
Salary: Not provided
Company: Actica Consulting
Expiration Date: Until further notice

Requirements:
  • Coding expertise in Python or R
  • Collaborative, team-based development
  • Cloud analytics platforms e.g. relevant AWS and Azure platform services
  • Hands-on experience with Palantir (essential)
  • Data science approaches and tooling e.g. Hadoop, Spark
  • Data engineering approaches
  • Database management, e.g. MySQL, Postgres
  • Software development methods and techniques e.g. Agile methods such as SCRUM
  • Software change management, notably familiarity with git
  • Public sector best practice guidance, e.g. ITIL, OGC toolkit
Job Responsibility:
  • Design, implement, and maintain scalable data pipelines and ETL processes
  • Develop and maintain data warehouses and data lakes
  • Implement data quality monitoring and validation systems
  • Create and maintain data documentation and cataloguing systems
  • Optimize data storage and retrieval systems
  • Implement data security and governance frameworks
  • Build and maintain data APIs and services
  • Translate business problems into data queries and solutions
  • Develop and deploy machine learning models
  • Create advanced analytics solutions
What we offer:
  • 25 days of paid leave per annum plus 8 UK bank holidays
  • Discretionary, Performance-Based Bonus Scheme
  • Enrolment in Stakeholder Pension Scheme
  • Cycle To Work Scheme
  • Employee Assistance Programme
  • Electric Vehicle Leasing Scheme
  • Private Medical Insurance
Employment Type: Full-time