
Senior Data & Automation Engineer

Fluent, Inc


Location:
Canada, Toronto


Contract Type:
Not provided


Salary:

90000.00 - 100000.00 CAD / Year

Job Description:

Fluent is building the next-generation advertising network, Partner Monetize & Advertiser Acquisition. Our vision is an ML/AI-first network of advertisers and publishers working toward a common objective: elevating relevancy in e-commerce for everyday shoppers. As a Senior Data & Automation Engineer, you will apply your Databricks and Spark expertise to build enterprise-grade data products that power Fluent’s business lines. These products serve as the foundation for sophisticated representations of customer journeys and marketplace activity across our ecosystem. You will partner with Data Architects, Data Scientists, and Product Managers to transform enterprise data models into optimized physical data models and real-time pipelines. You will raise the team’s standards in code quality, observability, and architecture design while actively contributing as a hands-on engineer. This role is fully remote in Ontario, with occasional travel to the NYC or Toronto offices.

Job Responsibility:

  • Design, build, and support scalable real-time and batch data pipelines using PySpark and Spark Structured Streaming on Databricks
  • Implement process automation and end-to-end workflows following Bronze → Silver → Gold architecture using Delta Lake best practices
  • Handle event-driven ingestion with Kafka and integrate into automated pipelines
  • Orchestrate workflows using Databricks Workflows/Jobs and CI/CD automation
  • Implement strong monitoring, observability, and alerting for reliability and performance (Databricks metrics, dashboards)
  • Collaborate cross-functionally in agile sprints with Product, Analytics, and Data Science teams
  • Translate enterprise logical data models into optimized physical and performance-tuned implementations
  • Write modular, version-controlled code in Git
  • Contribute to code reviews and enforce quality standards
  • Implement robust logging, error handling, and data quality validation across automation layers
  • Utilize relevant AWS services (S3, IAM, Secrets Manager) and DevOps practices
  • Promote best practices through documentation, knowledge sharing, tech talks, and training
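The Bronze → Silver → Gold (medallion) flow named in the responsibilities above is, in essence, a staged refinement of raw events. As a rough conceptual illustration only — plain Python rather than the PySpark/Delta Lake stack this role actually uses, with all field names hypothetical:

```python
# Conceptual sketch of the medallion (Bronze -> Silver -> Gold) flow.
# A real pipeline would use Spark Structured Streaming + Delta Lake tables.

from collections import defaultdict

# Bronze: raw events as ingested, unvalidated (e.g. straight off Kafka)
bronze = [
    {"user_id": "u1", "amount": "19.99", "event": "purchase"},
    {"user_id": "u2", "amount": "bad",   "event": "purchase"},  # malformed row
    {"user_id": "u1", "amount": "5.00",  "event": "purchase"},
]

def to_silver(rows):
    """Silver: validate and type-cast, dropping rows that fail checks."""
    out = []
    for r in rows:
        try:
            out.append({"user_id": r["user_id"], "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # in production: route to a quarantine table and alert
    return out

def to_gold(rows):
    """Gold: business-level aggregate, e.g. total spend per user."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["user_id"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)   # 2 clean rows; the malformed one is dropped
gold = to_gold(silver)       # {"u1": ~24.99}
print(gold)
```

Each layer is persisted in practice (as Delta tables), so downstream consumers can read from whichever refinement level fits their use case.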

Requirements:

  • 5+ years of professional experience in data engineering, including Spark (PySpark) and SQL
  • 3+ years of hands-on experience building pipelines on Databricks (Workflows, Notebooks, Delta Lake)
  • Deep understanding of Apache Spark distributed processing concepts and optimization
  • Strong experience with streaming architectures and Kafka
  • Familiarity with Databricks monitoring and observability tooling
  • Understanding of Lakehouse architecture, Unity Catalog, and governance principles
  • Proven proficiency in Git-based CI/CD workflows and automated deployment
  • Strong troubleshooting, optimization, and performance tuning skills
  • Experience designing and building large-scale, automated data pipelines

Nice to have:

  • Experience with schema management (Schema Registry) and data validation frameworks (Great Expectations, Deequ)
  • Exposure to real-time ML systems and feature pipelines
  • Prior experience in startup or small agile teams
  • Familiarity with test-driven development in data engineering contexts

What we offer:
  • Competitive compensation
  • Ample career and professional growth opportunities
  • New Headquarters with an open floor plan to drive collaboration
  • Health, dental, and vision insurance
  • Pre-tax savings plans and transit/parking programs
  • 401K with competitive employer match
  • Volunteer and philanthropic activities throughout the year
  • Educational and social events
  • Fully stocked kitchen
  • Catered lunch
  • Activity-filled events
  • Quarterly outings

Additional Information:

Job Posted:
January 20, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Similar Jobs for Senior Data & Automation Engineer

Senior Data Engineer

We are looking for a Data Engineer to join our team and support with designing, ...
Location:
Salary:
Not provided
Foundever
Expiration Date
Until further notice
Requirements
  • Minimum of 7 years' experience in data engineering
  • Track record of deploying and maintaining complex data systems at an enterprise level within regulated environments
  • Expertise in implementing robust data security measures, access controls, and monitoring systems
  • Proficiency in data modeling and database management
  • Strong programming skills in Python and SQL
  • Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases
  • Deep experience with ETL processes and data pipeline development
  • Strong understanding of data warehousing concepts and best practices
  • Experience with cloud platforms such as AWS and Azure
  • Excellent problem-solving skills and attention to detail
Job Responsibility
  • Design and optimize complex data storage solutions, including data warehouses and data lakes
  • Develop, automate, and maintain data pipelines for efficient and scalable ETL processes
  • Ensure data quality and integrity through data validation, cleansing, and error handling
  • Collaborate with data analysts, machine learning engineers, and software engineers to deliver relevant datasets or data APIs for downstream applications
  • Implement data security measures and access controls to protect sensitive information
  • Monitor data infrastructure for performance and reliability, addressing issues promptly
  • Stay abreast of industry trends and emerging technologies in data engineering
  • Document data pipelines, processes, and best practices for knowledge sharing
  • Lead data governance and compliance efforts to meet regulatory requirements
  • Collaborate with cross-functional teams to drive data-driven decision-making within the organization
What we offer
  • Impactful work
  • Professional growth
  • Competitive compensation
  • Collaborative environment
  • Attractive salary and benefits package
  • Continuous learning and development opportunities
  • A supportive team culture with opportunities for occasional travel for training and industry events

Senior ML Data Engineer

As a Senior Data Engineer, you will play a pivotal role in our AI/ML workstream,...
Location:
Poland, Warsaw
Salary:
Not provided
Awin Global
Expiration Date
Until further notice
Requirements
  • Bachelor's or Master's degree in Data Science, Data Engineering, or Computer Science with a focus on math and statistics; Master's degree preferred
  • At least 5 years' experience as an AI/ML data engineer undertaking the above tasks and accountabilities
  • Strong foundation in computer science principles and statistical methods
  • Strong experience with cloud technology (AWS or Azure)
  • Strong experience with creating data ingestion pipelines and ETL processes
  • Strong knowledge of big data tools such as Spark, Databricks, and Python
  • Strong understanding of common machine learning techniques and frameworks (e.g. MLflow)
  • Strong knowledge of natural language processing (NLP) concepts
  • Strong knowledge of Scrum practices and an agile mindset
  • Strong analytical and problem-solving skills with attention to data quality and accuracy
Job Responsibility
  • Design and maintain scalable data pipelines and storage systems for both agentic and traditional ML workloads
  • Productionise LLM- and agent-based workflows, ensuring reliability, observability, and performance
  • Build and maintain feature stores, vector/embedding stores, and core data assets for ML
  • Develop and manage end-to-end traditional ML pipelines: data prep, training, validation, deployment, and monitoring
  • Implement data quality checks, drift detection, and automated retraining processes
  • Optimise cost, latency, and performance across all AI/ML infrastructure
  • Collaborate with data scientists and engineers to deliver production-ready ML and AI systems
  • Ensure AI/ML systems meet governance, security, and compliance requirements
  • Mentor teams and drive innovation across both agentic and classical ML engineering practices
  • Participate in team meetings and contribute to project planning and strategy discussions
What we offer
  • Flexi-Week and Work-Life Balance: We prioritise your mental health and well-being, offering you a flexible four-day Flexi-Week at full pay and with no reduction to your annual holiday allowance. We also offer a variety of different paid special leaves as well as volunteer days
  • Remote Working Allowance: You will receive a monthly allowance to cover part of your running costs. In addition, we will support you in setting up your remote workspace appropriately
  • Pension: Awin offers access to an additional pension insurance to all employees in Germany
  • Flexi-Office: We offer an international culture and flexibility through our Flexi-Office and hybrid/remote work possibilities to work across Awin regions
  • Development: We’ve built our extensive training suite Awin Academy to cover a wide range of skills that nurture you professionally and personally, with trainings conveniently packaged together to support your overall development
  • Appreciation: Thank and reward colleagues by sending them a voucher through our peer-to-peer program

Senior Data Engineer

As a senior member of our engineering team, you will take ownership of critical ...
Location:
Poland
Salary:
Not provided
Userlane GmbH
Expiration Date
Until further notice
Requirements
  • Minimum of 5 years of hands-on experience in designing and developing data processing systems
  • Experience being part of a team of software engineers and helping establish processes from scratch
  • Familiarity with DBMS like ClickHouse or a different SQL-based OLAP database
  • Experience with various data engineering tools like Airflow, Kafka, dbt
  • Experience building and maintaining applications with the following languages: Python, Golang, Typescript
  • Knowledge of container technologies like Docker and Kubernetes
  • Experience with CI/CD pipelines and automated testing
  • Ability to solve problems and balance structure with creativity
  • Ability to operate independently and apply strategic thinking with technical depth
  • Willingness to share information and skills with the team
Job Responsibility
  • Shape and maintain our various data and backend components - DBs, APIs and services
  • Understand business requirements and analyze their impact on the design of our software services and tools
  • Identify architectural changes needed in our infrastructure to support a smooth process of adding new features
  • Research, propose, and deliver changes to our software architecture to address our engineering and product requirements
  • Design, develop, and maintain a solid and stable RESTful API based on industry standards and best practices
  • Collaborate with internal and external teams to deliver software that fits the overall ecosystem of our products
  • Stay up to date with the new trends and technologies that enable us to work smarter, not harder
What we offer
  • Team & Culture: A high-performance culture with great leadership and a fun, engaged, motivated, and diverse team with people from over 20 countries
  • Market: Userlane is among the global leaders in the rapidly growing Digital Adoption industry
  • Growth: We take you and your development seriously. You can expect weekly 121s, a personalised skills assessment and development plan, on the job coaching and a budget for events and training
  • Compensation: Significant financial upside with an attractive and incentivising package on B2B basis
  • Fulltime

Senior Data Engineer - Platform Enablement

SoundCloud empowers artists and fans to connect and share through music. Founded...
Location:
United States, New York; Atlanta; East Coast
Salary:
160000.00 - 210000.00 USD / Year
SoundCloud
Expiration Date
Until further notice
Requirements
  • 7+ years of experience in data engineering, analytics engineering, or similar roles
  • Expert-level SQL skills, including performance tuning, advanced joins, CTEs, window functions, and analytical query design
  • Proven experience with Apache Airflow (designing DAGs, scheduling, task dependencies, monitoring, Python)
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience deploying and managing services in cloud environments, preferably GCP, AWS
  • Excellent communication skills and a collaborative mindset
Job Responsibility
  • Develop and optimize SQL data models and queries for analytics, reporting, and operational use cases
  • Design and maintain ETL/ELT workflows using Apache Airflow, ensuring reliability, scalability, and data integrity
  • Collaborate with analysts and business teams to translate data needs into efficient, automated data pipelines and datasets
  • Own and enhance data quality and validation processes, ensuring accuracy and completeness of business-critical metrics
  • Build and maintain reporting layers, supporting dashboards and analytics tools (e.g. Looker, or similar)
  • Troubleshoot and tune SQL performance, optimizing queries and data structures for speed and scalability
  • Contribute to data architecture decisions, including schema design, partitioning strategies, and workflow scheduling
  • Mentor junior engineers, advocate for best practices and promote a positive team culture
What we offer
  • Comprehensive health benefits including medical, dental, and vision plans, as well as mental health resources
  • Robust 401k program
  • Employee Equity Plan
  • Generous professional development allowance
  • Creativity and Wellness benefit
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Various snacks, goodies, and 2 free lunches weekly when at the office
  • Fulltime

Senior Data Engineer

SoundCloud is looking for a Senior Data Engineer to join our growing Content Pla...
Location:
Germany; United Kingdom, Berlin; London
Salary:
Not provided
SoundCloud
Expiration Date
Until further notice
Requirements
  • Proven experience in backend engineering (Scala/Go/Python) with strong design and data modeling skills
  • Hands-on experience building ETL/ELT pipelines and streaming solutions on cloud platforms (GCP preferred)
  • Proficient in SQL and experienced with relational and NoSQL databases
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience with containerization and orchestration (Docker, Kubernetes)
  • Experience deploying and managing services in cloud environments, preferably GCP, AWS
  • Strong collaboration skills and ability to work across backend, data, and product teams
Job Responsibility
  • Design, build, and maintain high-performance services for content modeling, serving, and integration
  • Develop data pipelines (batch & streaming) with cloud native tools
  • Collaborate on rearchitecting the content model to support rich metadata
  • Implement APIs and data services that power internal products, external integrations, and real-time features
  • Ensure data quality, governance, and validation across ingestion, storage, and serving layers
  • Optimize system performance, scalability, and cost efficiency for both backend services and data workflows
  • Work with infrastructure-as-code (Terraform) and CI/CD pipelines for deployment and automation
  • Monitor, debug, and improve reliability using various observability tools (logging, tracing, metrics)
  • Collaborate with product leadership, music industry experts, and engineering teams across SoundCloud
What we offer
  • Extensive relocation support including allowances, one way flights, temporary accommodation and on the ground support on arrival
  • Creativity and Wellness benefit
  • Employee Equity Plan
  • Generous professional development allowance
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Free German courses at beginner, intermediate, and advanced levels
  • Various snacks, goodies, and 2 free lunches weekly when at the office
  • Fulltime

Senior Data Engineer

Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are see...
Location:
Ireland, Dublin
Salary:
Not provided
Solas IT Recruitment
Expiration Date
Until further notice
Requirements
  • 7+ years of experience as a Data Engineer working with distributed data systems
  • 4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
  • Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
  • Oozie, Airflow, or similar orchestration tools
  • Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
  • Experience with Databricks and Delta Lake (or equivalent big-data platforms)
  • Strong programming background in Python, Scala, or Java
  • Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
Job Responsibility
  • Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
  • Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
  • Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
  • Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
  • Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
  • Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
  • Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
  • Integrate metadata management, lineage tracking, and governance standards across the platform
  • Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines
  • Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable

Senior Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
United States, Menlo Park
Salary:
146000.00 - 198000.00 USD / Year
Robinhood
Expiration Date
Until further notice
Requirements
  • 5+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer
  • Market competitive and pay equity-focused compensation structure
  • 100% paid health insurance for employees with 90% coverage for dependents
  • Annual lifestyle wallet for personal wellness, learning and development, and more
  • Lifetime maximum benefit for family forming and fertility benefits
  • Dedicated mental health support for employees and eligible dependents
  • Generous time away including company holidays, paid time off, sick time, parental leave, and more
  • Lively office environment with catered meals, fully stocked kitchens, and geo-specific commuter benefits
  • Bonus opportunities
  • Equity
  • Fulltime

Senior Data Engineer

Location:
United States, Flowood
Salary:
Not provided
PhasorSoft Group
Expiration Date
Until further notice
Requirements
  • Experience with Snowflake or Azure Cloud Data Engineering, including setting up and managing data pipelines
  • Proficiency in designing and implementing ETL processes for data integration
  • Knowledge of data warehousing concepts and best practices
  • Strong SQL skills for querying and manipulating data in Snowflake or Azure databases
  • Experience with data modeling techniques and tools to design efficient data structures
  • Understanding of data governance principles and experience implementing them in cloud environments
  • Proficiency in Tableau or Power BI for creating visualizations and interactive dashboards
  • Ability to write scripts (e.g., Python, PowerShell) for automation and orchestration of data pipelines
  • Skills to monitor and optimize data pipelines for performance and cost efficiency
  • Knowledge of cloud data security practices and tools to ensure data protection
Job Responsibility
  • Design, implement, and maintain data pipelines and architectures on Snowflake or Azure Cloud platforms
  • Develop ETL processes to extract, transform, and load data from various sources into data warehouses
  • Optimize data storage, retrieval, and processing for performance and cost-efficiency in cloud environments
  • Collaborate with stakeholders to understand data requirements and translate them into technical solutions
  • Implement data security and governance best practices to ensure data integrity and compliance
  • Work with reporting tools such as Tableau or Power BI to create interactive dashboards and visualizations
  • Monitor and troubleshoot data pipelines, ensuring reliability and scalability
  • Automate data workflows and processes using cloud-native services and scripting languages
  • Provide technical expertise and support to data analysts, scientists, and business users
  • Fulltime