
Data Integration Engineer

Zion & Zion

Location: United States

Category: IT - Software Development

Contract Type: Not provided

Salary: Not provided

Job Description:

Zion & Zion is looking for a passionate, motivated Data Integration Engineer to join our agency. This engineer will be responsible for integrating marketing solutions with our clients’ Customer Data Platforms (CDPs). In this role, you will serve as a trusted technical advisor, supporting clients’ CDP implementations. You will also be responsible for architecting data collection, authoring technical documentation, writing JavaScript code within tag management systems, and writing custom app code for capturing individual data points.
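As a rough illustration of the tag-management work described above — capturing an individual data point by pushing a structured event onto a data layer — consider the sketch below. The names (`dataLayer`, `trackEvent`, the event fields) are hypothetical, not tied to any specific client setup; in a browser, the array would typically be the global `window.dataLayer`.

```javascript
// Hypothetical sketch: a plain array stands in for a tag manager's
// global data layer so the snippet is self-contained.
const dataLayer = [];

function trackEvent(eventName, attributes) {
  // Tag management systems consume plain objects pushed onto the data
  // layer; each push can trigger downstream tags (analytics, CDP, etc.).
  dataLayer.push({ event: eventName, ...attributes });
}

trackEvent("product_view", { sku: "SKU-123", price: 49.99 });
```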

Job Responsibility:

  • Work closely with marketing technology leads to assess the client’s technical architecture and integrate 3rd party tools with CDP technologies (e.g., Web analytics, CRM, Marketing automation tools, etc.)
  • Support tagging and data diagnostics, and troubleshoot data collection issues
  • Stay up to date on industry changes, evolution, and innovation and communicate key applicable implications and opportunities to the team leaders
  • Contribute to the company’s knowledge base by creating and sharing case studies, POVs and seminar/conference summaries
  • Build integrations between a CDP and multiple marketing tools (paid media, social, marketing automation, etc.) to ensure data flows into and out of the CDP for campaign delivery
  • Support growth of agency’s marketing technology practice with a heavy focus on agency’s CDP practice
  • Provide expertise in data governance and data privacy
  • Design marketing stack architecture to support each CDP implementation

Requirements:

  • 4+ years of web analytics and digital marketing experience
  • Expert-level experience with a tag management platform such as Tealium, GTM, or Adobe Launch
  • Strong knowledge of web languages such as HTML and CSS
  • Expert at writing custom JavaScript to implement data collection platforms
  • Experience with Customer Data Platforms (CDP) like Tealium AudienceStream, Segment, mParticle or other industry leading CDP technology is required
  • Experience setting up integrations between a CDP and 3rd party marketing tools is preferred
  • Familiarity with REST, SOAP, and other API formats
  • Experience with ETL and cloud architecture and technologies (AWS in particular) preferred
  • Basic database knowledge
  • Ability to manage multiple tasks to completion within established deadlines
  • Previous agency or consulting experience preferred
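The API requirements above can be illustrated with a minimal sketch of delivering an event to a CDP’s HTTP collection endpoint. The endpoint URL, auth scheme, and payload fields are assumptions for illustration only — real CDPs (Segment, Tealium, mParticle) each define their own collection APIs.

```javascript
// Hypothetical sketch: building a POST request that delivers one event
// to a CDP collection endpoint. URL, headers, and payload shape are
// illustrative placeholders, not any vendor's actual API.
function buildEventRequest(endpoint, apiKey, event) {
  return {
    url: endpoint,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Assumed auth scheme: basic auth with the write key as username
        Authorization: "Basic " + Buffer.from(apiKey + ":").toString("base64"),
      },
      body: JSON.stringify(event),
    },
  };
}

const req = buildEventRequest(
  "https://api.example-cdp.com/v1/track", // placeholder endpoint
  "demo-write-key",
  { userId: "u-42", event: "signup" }
);
// The request could then be sent with fetch(req.url, req.options)
```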


Additional Information:

Job Posted: December 14, 2025
Work Type: Remote work

Similar Jobs for Data Integration Engineer


Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location: India (Chennai, Madurai, Coimbatore)
Salary: Not provided
Company: OptiSol Business Solutions
Expiration Date: Until further notice

Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility

Work Type: Fulltime

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location: Germany
Salary: Not provided
Company: ClickHouse
Expiration Date: Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver to SDKs and connectors
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location: United Kingdom
Salary: Not provided
Company: ClickHouse
Expiration Date: Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Serve as a core contributor, owning and maintaining critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Work Type: Fulltime

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location: Canada
Salary: Not provided
Company: ClickHouse
Expiration Date: Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver to SDKs and connectors
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location: Israel
Salary: Not provided
Company: ClickHouse
Expiration Date: Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location: Netherlands (Amsterdam)
Salary: Not provided
Company: ClickHouse
Expiration Date: Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location: United States
Salary: 125,600 - 185,500 USD / Year
Company: ClickHouse
Expiration Date: Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Craft tools that enable Data Engineers to harness ClickHouse's incredible speed and scale
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Work Type: Fulltime

Integration Developer / Data Engineer

We are looking for an experienced Integration Developer / Data Engineer to desig...
Location: Poland (Wroclaw)
Salary: Not provided
Company: Eviden
Expiration Date: Until further notice

Requirements:
  • Proven experience with MS SQL (T-SQL, query optimization)
  • Knowledge of Microsoft Dataverse and Dynamics 365 Sales CRM
  • Hands-on experience with Azure Data Factory
  • Strong understanding of ETL process design
  • English – professional working proficiency
Job Responsibility:
  • Design and implement integration processes (MS SQL → Dataverse / D365 Sales CRM)
  • Develop and optimize pipelines in Azure Data Factory
  • Analyze business and technical requirements
  • Monitor and maintain ETL processes
  • Collaborate with development and analytics teams

Work Type: Parttime