CrawlJobs

Senior Data Collection Engineer


Fever


Location:
Argentina


Contract Type:
Employment contract


Salary:
Not provided

Job Description:

We are looking for a Senior Data Collection Engineer to help design, build, and operate Fever's tracking and measurement infrastructure at scale. This role sits at the intersection of engineering, data architecture, and product, with a strong focus on server-side tracking, first-party data collection, and DWH-first measurement systems. You will not be implementing ad-hoc pixels for the sake of it; your work will shape how data is reliably collected, governed, and activated across the entire company and its partners. You'll be working on Dataverse, Fever's internal tracking layer, used across web, app, marketplace, whitelabels, and multiple companies within the group.

Job Responsibility:

  • Design and evolve server-side tracking architectures (GTM Server-Side, middleware, APIs, event pipelines)
  • Implement and maintain first-party data collection systems across web and app
  • Own complex JavaScript-based tracking logic, both client-side and server-side
  • Define and enforce tracking standards (events, schemas, identifiers, consent-aware logic)
  • Ensure tracking data flows reliably into the Data Warehouse (Snowflake) as the source of truth
  • Build scalable solutions to support marketing, experimentation, SEO, product and partners
  • Collaborate with Growth, Product, Data Science and Engineering teams on measurement needs
  • Help evaluate trade-offs between vendors (GA4, Mixpanel, Meta, etc.) and internal systems
  • Participate in incident analysis, performance tuning and load testing of tracking infrastructure
  • Contribute to internal documentation and external-facing capability documents for partners
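As a rough sketch of the consent-aware, schema-enforcing collection logic described above, a server-side endpoint might validate incoming events before forwarding them to the warehouse. The field names, consent shape, and function below are illustrative assumptions, not Fever's actual Dataverse contract:

```javascript
// Hypothetical minimal schema for an incoming tracking event.
const REQUIRED_FIELDS = ["event_name", "timestamp", "anonymous_id"];

// Validate an event against consent state and the minimal schema.
// Returns { ok: true } or { ok: false, reason: "..." }.
function validateEvent(event, consent) {
  // Drop events outright when analytics consent is absent.
  if (!consent || !consent.analytics) {
    return { ok: false, reason: "no_consent" };
  }
  // Enforce the schema before the event reaches the warehouse.
  const missing = REQUIRED_FIELDS.filter((field) => !(field in event));
  if (missing.length > 0) {
    return { ok: false, reason: "missing_fields:" + missing.join(",") };
  }
  return { ok: true };
}
```

Keeping validation at the collection edge, rather than in downstream models, is one way to make the warehouse the source of truth for clean events.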

Requirements:

  • Strong experience with JavaScript (ES6+), including advanced concepts and browser internals
  • Hands-on experience with server-side tracking (GTM Server-Side, custom endpoints, or similar)
  • Solid understanding of web tracking fundamentals: cookies, localStorage, sessions, identity, consent, attribution
  • Experience working with event-based data models and structured schemas
  • Familiarity with cloud environments (GCP preferred, AWS acceptable)
  • Experience integrating data into a Data Warehouse (Snowflake, BigQuery, etc.)
  • Ability to reason about data quality, reliability and scalability, not just implementation
  • Comfortable working autonomously in a cross-functional, fast-moving environment
  • 5+ years of hands-on experience in data collection, tracking implementation, or measurement infrastructure roles
  • Proven track record designing and maintaining scalable tracking systems in production environments
  • Strong knowledge of best practices in event tracking, data quality, and testing methodologies
  • Experience designing scalable data collection systems and event-driven architectures
  • Ability to deliver code from development to production while ensuring high-quality engineering solutions
  • High autonomy and a proactive mindset for identifying and solving problems with a bias for action
  • Exceptional communication skills and ability to thrive in collaborative, cross-functional environments
  • Growth mindset, adaptability to change, and commitment to continuous improvement
  • Empathetic, inclusive, and curious attitude with a passion for making a positive impact through technology
  • Advanced English proficiency (written and spoken)
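As an illustration of the first-party identity fundamentals listed above (cookies, localStorage, sessions), a storage-agnostic helper for a stable anonymous ID might look like this; the function name and storage key are hypothetical:

```javascript
// Hypothetical key under which the first-party ID is persisted.
const ANON_ID_KEY = "fp_anonymous_id";

// Return a stable anonymous ID, creating one on first call.
// The storage argument lets the same logic be backed by localStorage
// in the browser or any key-value store server-side.
function getOrCreateAnonymousId(storage, generateId) {
  const existing = storage.getItem(ANON_ID_KEY);
  if (existing) {
    return existing; // reuse the previously issued ID across sessions
  }
  const fresh = generateId(); // e.g. crypto.randomUUID() in modern browsers
  storage.setItem(ANON_ID_KEY, fresh);
  return fresh;
}
```

Injecting the storage and ID generator keeps the identity logic testable and lets the same code run client-side and server-side.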

Nice to have:

  • Experience with mobile tracking (iOS / Android concepts, even if not native development)
  • Knowledge of privacy and consent frameworks (GDPR, CMPs, Consent Mode)
  • Experience with middleware or streaming systems
  • Background in analytics engineering or data engineering

What we offer:
  • 40% discount on all Fever events and experiences
  • OSDE 410 medical insurance
  • Home office friendly anywhere in Argentina
  • Responsibility from day one and professional and personal growth
  • Great work environment with a young, international team of talented people
  • English Lessons
  • Gympass
  • Attractive compensation package consisting of base salary and the potential to earn a significant bonus for top performance (including base, variable, and stock options)

Additional Information:

Job Posted:
February 17, 2026

Employment Type:
Full-time
Work Type:
Remote work



Similar Jobs for Senior Data Collection Engineer

Senior Data Engineer

As a senior data engineer, you will help our clients with building a variety of ...
Location:
Belgium, Brussels
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • At least 5 years of experience as a Data Engineer or in software engineering in a data context
  • Programming experience with one or more languages: Python, Scala, Java, C/C++
  • Knowledge of relational database technologies/concepts and SQL is required
  • Experience building, scheduling and maintaining data pipelines (Spark, Airflow, Data Factory)
  • Practical experience with at least one cloud provider (GCP, AWS, or Azure); certifications from any of these are considered a plus
  • Knowledge of Git and CI/CD
  • Able to work independently, prioritize multiple stakeholders and tasks, and manage work time effectively
  • You have a degree in Computer Engineering, Information Technology or related field
  • You are proficient in English; knowledge of Dutch and/or French is a plus
Job Responsibility:
  • Gather business requirements and translate them to technical specifications
  • Design, implement and orchestrate scalable and efficient data pipelines to collect, process, and serve large datasets
  • Apply DataOps best practices to automate testing, deployment and monitoring
  • Continuously follow & learn the latest trends in the data world.
What we offer:
  • A variety of perks, such as mobility options (including a company car), insurance coverage, meal vouchers, eco-cheques, and more
  • Continuous learning opportunities through the Sopra Steria Academy to support your career development
  • The opportunity to connect with fellow Sopra Steria colleagues at various team events.

Senior Data Engineer

Adswerve is looking for a Senior Data Engineer to join our Adobe Services team. ...
Location:
United States
Salary:
130000.00 - 155000.00 USD / Year
Adswerve, Inc.
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience)
  • 5+ years of experience in a data engineering, analytics, or marketing technology role
  • Hands-on expertise in Adobe Experience Platform (AEP), Real-Time CDP, Journey Optimizer, or similar tools is a big plus
  • Strong proficiency in SQL and hands-on experience with data transformation and modeling
  • Understanding of ETL/ELT workflows (e.g., dbt, Fivetran, Airflow, etc.) and cloud data platforms (e.g., GCP, Snowflake, AWS, Azure)
  • Experience with ingress/egress patterns and interacting with APIs to move data
  • Experience with Python, or JavaScript in a data or scripting context
  • Experience with customer data platforms (CDPs), event-based tracking, or customer identity management
  • Understanding of Adobe Experience Cloud integrations (e.g., Adobe Analytics, Target, Campaign) is a plus
  • Strong communication skills with the ability to lead technical conversations and present to both technical and non-technical audiences
Job Responsibility:
  • Lead the end-to-end architecture of data ingestion and transformation in Adobe Experience Platform (AEP) using Adobe Data Collection (Tags), Experience Data Model (XDM), and source connectors
  • Design and optimize data models, identity graphs, and segmentation strategies within Real-Time CDP to enable personalized customer experiences
  • Implement schema mapping, identity resolution, and data governance strategies
  • Collaborate with Data Architects to build scalable, reliable data pipelines across multiple systems
  • Conduct data quality assessments and support QA for new source integrations and activations
  • Write and maintain internal documentation and knowledge bases on AEP best practices and data workflows
  • Simplify complex technical concepts and educate team members and clients in a clear, approachable way
  • Contribute to internal knowledge sharing and mentor junior engineers in best practices around data modeling, pipeline development, and Adobe platform capabilities
  • Stay current on the latest Adobe Experience Platform features and data engineering trends to inform client strategies
What we offer:
  • Medical, dental and vision available for employees
  • Paid time off including vacation, sick leave & company holidays
  • Paid volunteer time
  • Flexible working hours
  • Summer Fridays
  • “Work From Home Light” days between Christmas and New Year’s Day
  • 401(k) Plan with 5% company match and no vesting period
  • Employer Paid Parental Leave
  • Health-care Spending Accounts
  • Dependent-care Spending Accounts
  • Full-time

Senior Data Engineer

Within a dynamic, high-level team, you will contribute to both R&D and client pr...
Location:
France, Paris
Salary:
Not provided
Artelys
Expiration Date:
Until further notice
Requirements:
  • Degree from a top engineering school or a high-level university program
  • At least 3 years of experience in designing and developing data-driven solutions with high business impact, particularly in industrial or large-scale environments
  • Excellent command of Python for both application development and data processing, with strong expertise in libraries such as Pandas, Polars, NumPy, and the broader Python Data ecosystem
  • Experience implementing data processing pipelines using tools like Apache Airflow, Databricks, Dask, or flow orchestrators integrated into production environments
  • Contributed to large-scale projects combining data analysis, workflow orchestration, back-end development (REST APIs and/or Messaging), and industrialisation, within a DevOps/DevSecOps-oriented framework
  • Proficient in using Docker for processing encapsulation and deployment
  • Experience with Kubernetes for orchestrating workloads in cloud-native architectures
  • Motivated by practical applications of data in socially valuable sectors such as energy, mobility, or health, and thrives in environments where autonomy, rigour, curiosity, and teamwork are valued
  • Fluency in English and French is required
Job Responsibility:
  • Design and develop innovative and high-performance software solutions addressing industrial challenges, primarily using the Python language and a microservices architecture
  • Gather user and business needs to design data collection and storage solutions best suited to the presented use cases
  • Develop technical solutions for data collection, cleaning, and processing, then industrialise and automate them
  • Contribute to setting up technical architectures based on Data or even Big Data environments
  • Carry out development work aimed at industrialising and orchestrating computations (statistical and optimisation models) and participate in software testing and qualification
What we offer:
  • Up to 2 days of remote work per week possible
  • Flexible working hours
  • Offices located in the city center of each city where we are located
  • Full-time

Senior Data Engineer

Provectus, a leading AI consultancy and solutions provider specializing in Data ...
Salary:
Not provided
Provectus
Expiration Date:
Until further notice
Requirements:
  • Experience handling real-time and batch data flow and data warehousing with tools and technologies like Airflow, Dagster, Kafka, Apache Druid, Spark, dbt, etc.
  • Experience in AWS
  • Proficiency in programming languages relevant to data engineering, such as Python and SQL
  • Proficiency with Infrastructure as Code (IaC) technologies like Terraform or AWS CloudFormation
  • Experience in building scalable APIs
  • Familiarity with Data Governance aspects like Quality, Discovery, Lineage, Security, Business Glossary, Modeling, Master Data, and Cost Optimization
  • Upper-Intermediate or higher English skills
  • Ability to take ownership, solve problems proactively, and collaborate effectively in dynamic settings
Job Responsibility:
  • Collaborate closely with clients to deeply understand their existing IT environments, applications, business requirements, and digital transformation goals
  • Collect and manage large volumes of varied data sets
  • Work directly with ML Engineers to create robust and resilient data pipelines that feed Data Products
  • Define data models that integrate disparate data across the organization
  • Design, implement, and maintain ETL/ELT data pipelines
  • Perform data transformations using tools such as Spark, Trino, and AWS Athena to handle large volumes of data efficiently
  • Develop, continuously test, and deploy Data API Products with Python and frameworks like Flask or FastAPI
What we offer:
  • Participate in internal training programs (Leadership, Public Speaking, etc.) with full support for AWS and other professional certifications
  • Work with the latest AI tools, premium subscriptions, and the freedom to use them in your daily work
  • Collaboration with an international, cross-functional team
  • Comprehensive private medical insurance or budget for your medical needs
  • Paid sick leave, vacation, and public holidays
  • Equipment and all the tech you need for comfortable, productive work
  • Special gifts for weddings, childbirth, and other personal milestones

Senior Data Engineer

Adtalem is a data driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
85000.00 - 150000.00 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field (required)
  • Master's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field (preferred)
  • 2+ years of experience in Google Cloud with services like BigQuery, Composer, GCS, Datastream, and Dataflow (required)
  • 6+ years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics (required)
  • Expert knowledge on SQL and Python programming
  • Experience working with Airflow as a workflow management tool and building operators to connect, extract, and ingest data as needed
  • Experience in tuning queries for performance and scalability
  • Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar
  • Excellent organizational, prioritization and analytical abilities
  • Proven experience working in incremental execution through successful launches
Job Responsibility:
  • Work closely with various business, IT, Analyst and Data Science groups to collect business requirements
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound
  • Model data platform by applying the business logic and building objects in the semantic layer of the data platform
  • Optimize data pipelines for performance, scalability, and reliability
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products
  • Ensure quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
  • Troubleshoot and resolve data engineering issues as they arise
  • Develop REST APIs to expose data to other teams within the company
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Full-time

Senior ML Data Engineer

As a Senior Data Engineer, you will play a pivotal role in our AI/ML workstream,...
Location:
Poland, Warsaw
Salary:
Not provided
Awin Global
Expiration Date:
Until further notice
Requirements:
  • Bachelor's or Master's degree in Data Science, Data Engineering, or Computer Science with a focus on math and statistics; a Master's degree is preferred
  • At least 5 years of experience as an AI/ML data engineer undertaking the above tasks and accountabilities
  • Strong foundation in computer science principles and statistical methods
  • Strong experience with cloud technology (AWS or Azure)
  • Strong experience with creating data ingestion pipelines and ETL processes
  • Strong knowledge of big data tools such as Spark, Databricks, and Python
  • Strong understanding of common machine learning techniques and frameworks (e.g., MLflow)
  • Strong knowledge of natural language processing (NLP) concepts
  • Strong knowledge of Scrum practices and an agile mindset
  • Strong analytical and problem-solving skills with attention to data quality and accuracy
Job Responsibility:
  • Design and maintain scalable data pipelines and storage systems for both agentic and traditional ML workloads
  • Productionise LLM- and agent-based workflows, ensuring reliability, observability, and performance
  • Build and maintain feature stores, vector/embedding stores, and core data assets for ML
  • Develop and manage end-to-end traditional ML pipelines: data prep, training, validation, deployment, and monitoring
  • Implement data quality checks, drift detection, and automated retraining processes
  • Optimise cost, latency, and performance across all AI/ML infrastructure
  • Collaborate with data scientists and engineers to deliver production-ready ML and AI systems
  • Ensure AI/ML systems meet governance, security, and compliance requirements
  • Mentor teams and drive innovation across both agentic and classical ML engineering practices
  • Participate in team meetings and contribute to project planning and strategy discussions
What we offer:
  • Flexi-Week and Work-Life Balance: We prioritise your mental health and well-being, offering you a flexible four-day Flexi-Week at full pay and with no reduction to your annual holiday allowance. We also offer a variety of different paid special leaves as well as volunteer days
  • Remote Working Allowance: You will receive a monthly allowance to cover part of your running costs. In addition, we will support you in setting up your remote workspace appropriately
  • Pension: Awin offers access to an additional pension insurance to all employees in Germany
  • Flexi-Office: We offer an international culture and flexibility through our Flexi-Office and hybrid/remote work possibilities to work across Awin regions
  • Development: We’ve built our extensive training suite Awin Academy to cover a wide range of skills that nurture you professionally and personally, with trainings conveniently packaged together to support your overall development
  • Appreciation: Thank and reward colleagues by sending them a voucher through our peer-to-peer program

Senior Data Engineer

The Data Engineer will build scalable pipelines and data models, implement ETL w...
Location:
United States, Fort Bragg
Salary:
Not provided
Barbaricum
Expiration Date:
Until further notice
Requirements:
  • Active DoD TS/SCI clearance (required or pending verification)
  • Bachelor’s degree in Computer Science, Data Science, Engineering, or related field (or equivalent experience) OR CSSLP / CISSP-ISSAP
  • Strong programming skills in Python, Java, or Scala
  • Strong SQL skills
  • Familiarity with analytics languages/tools such as R
  • Experience with data processing frameworks (e.g., Apache Spark, Hadoop) and orchestration tools (e.g., Airflow)
  • Familiarity with cloud-based data services (e.g., AWS Redshift, Google BigQuery, Azure Data Factory)
  • Experience with data modeling, database design, and data architecture concepts
  • Strong analytical and problem-solving skills with attention to detail
  • Strong written and verbal communication skills
Job Responsibility:
  • Build and maintain scalable, reliable data pipelines to collect, process, and store data from multiple sources
  • Design and implement ETL processes to support analytics, reporting, and operational needs
  • Develop and maintain data models, schemas, and standards to support enterprise data usage
  • Collaborate with data scientists, analysts, and stakeholders to understand requirements and deliver solutions
  • Analyze large datasets to identify trends, patterns, and actionable insights
  • Present findings and recommendations through dashboards, reports, and visualizations
  • Optimize database and pipeline performance for scalability and reliability across large datasets
  • Monitor and troubleshoot pipeline issues to minimize downtime and improve system resilience
  • Implement data quality checks, validation routines, and integrity controls
  • Implement security measures to protect data and systems from unauthorized access

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location:
Germany
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver to SDKs and connectors
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites