
Data Engineer – Java Focused


Amaris Consulting


Location:
Veldhoven, Netherlands



Contract Type:
Not provided


Salary:
Not provided

Job Description:

We’re looking for a Data Engineer with strong Java skills who loves building systems that move, process, and transform data at scale. This isn’t just about managing data - it’s about writing elegant, high-performance Java code that powers our data infrastructure. If you enjoy combining software engineering best practices with data pipeline development, you’ll thrive here. You’ll work on ETL pipelines, microservices, and cloud-native solutions that directly support business decisions and product development.

Job Responsibility:

  • Build and maintain ETL pipelines using Java 11
  • Develop RESTful microservices with Spring Boot to integrate data systems
  • Write clean, efficient SQL and participate in database design and modeling
  • Collaborate across teams to turn raw data into actionable insights
  • Manage code using Git/GitHub, Maven, Gradle, and TFS
  • Apply software design patterns and testing frameworks to ensure reliability
  • Deploy and maintain applications in AWS cloud environments
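
To give a flavor of the first responsibility above (building ETL pipelines in Java 11), here is a minimal, self-contained sketch of an extract-transform-load step using only the JDK. The class and method names (`SalesEtl`, `extract`, `transform`, `loadToConsole`) and the sample records are hypothetical illustrations, not part of the role.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch only: a tiny ETL step in plain Java 11 that
// aggregates raw "region,amount" records into per-region totals.
public class SalesEtl {

    // Extract: a real pipeline would read from a source system;
    // an in-memory list keeps the sketch self-contained.
    static List<String> extract() {
        return List.of("EU,120.50", "US,99.99", "EU,30.00", "APAC,45.25");
    }

    // Transform: parse each record and sum amounts per region.
    static Map<String, Double> transform(List<String> rawRecords) {
        return rawRecords.stream()
                .map(line -> line.split(","))
                .collect(Collectors.groupingBy(
                        parts -> parts[0],
                        Collectors.summingDouble(parts -> Double.parseDouble(parts[1]))));
    }

    // Load: print the aggregates; a real job would write to a warehouse.
    static void loadToConsole(Map<String, Double> totals) {
        totals.forEach((region, total) -> System.out.println(region + "=" + total));
    }

    public static void main(String[] args) {
        loadToConsole(transform(extract()));
    }
}
```

In a production pipeline the extract and load stages would typically be backed by JDBC sources and a warehouse sink, but the stream-based transform shape stays the same.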

Requirements:

  • Minimum 4 years of experience
  • Java 11 & Spring Boot – Develop scalable backend and data systems
  • ETL & Data Pipeline Development – Build and maintain high-performance pipelines
  • SQL & Data Modeling – Efficient querying and structuring of databases
  • Software Engineering Practices – Unit testing (Mockito), CI/CD, version control
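
The last requirement above mentions unit testing with Mockito. Mockito itself needs an external dependency, so the sketch below shows the underlying idea (isolating a collaborator behind an interface and substituting a test double) using only Java 11; all names (`RecordSource`, `countValidRecords`) are hypothetical.

```java
import java.util.List;

// Illustrative sketch only: hand-rolled test double standing in for
// Mockito's when(...).thenReturn(...) stubbing.
public class PipelineTestSketch {

    // Collaborator the unit under test depends on (what Mockito would mock).
    interface RecordSource {
        List<String> fetch();
    }

    // Unit under test: counts non-blank records from the source.
    static int countValidRecords(RecordSource source) {
        return (int) source.fetch().stream()
                .filter(r -> !r.isBlank())
                .count();
    }

    public static void main(String[] args) {
        // Stub the collaborator with a lambda instead of a real data source.
        RecordSource stub = () -> List.of("a", "", "b", "  ");
        System.out.println("valid=" + countValidRecords(stub));
    }
}
```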

Nice to have:

  • Knowledge of Vertica or other analytical databases
  • Experience with big data tools (e.g., Apache Spark)
  • Familiarity with Docker/Kubernetes
  • Experience in data warehousing or streaming systems

What we offer:
  • An international community bringing together 110+ different nationalities
  • An environment where trust has a central place: 70% of our key leaders began their careers with us in entry-level roles
  • A robust training system with our internal Academy and 250+ available modules
  • A vibrant workplace that frequently gathers for internal events (afterworks, team buildings, etc.)
  • Opportunity to turn your ideas into action and make a tangible impact through ESG commitments
  • Empowered to design and lead projects that create real social or environmental impact through the WeCare Together program

Additional Information:

Job Posted:
March 20, 2026


Similar Jobs for Data Engineer – Java Focused

Senior Data Engineer

As a Senior Data Engineer in the Finance-DE team, you will have the opportunity ...
Location:
Bengaluru, India
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow, and other streaming technologies to process high volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Fulltime

Lead Data Engineer

Sparteo is an independent suite of AI-powered advertising technologies built on ...
Salary:
Not provided
Sparteo
Expiration Date:
Until further notice
Requirements:
  • Proficiency in distributed data systems
  • Proficient in clustering, various table types, and data types
  • Strong understanding of materialized views concepts
  • Skilled in designing table sorting keys
  • Solid programming skills in Python, Java, or Scala
  • Expertise in database technologies (SQL, NoSQL)
  • You are comfortable using AI-assisted development tools (e.g., GitHub Copilot, Tabnine)
  • Proven experience leading data teams in fast-paced environments
  • Ability to mentor junior engineers and foster a culture of growth and collaboration
  • Data-driven decision-making abilities aligned with Sparteo's focus on results and improvement
Job Responsibility:
  • Data Infrastructure Design and Optimization
  • Lead the design, implementation, and optimization of data architectures to support massive data pipelines
  • Ensure the scalability, security, and performance of the data infrastructure
  • Collaborate with software and data scientists to integrate AI-driven models into data workflows
  • Leadership and Team Management
  • Manage and mentor a team of 2 data engineers, fostering a culture of continuous improvement
  • Oversee project execution and delegate responsibilities within the team
  • Guide technical decisions and promote best practices in data engineering
  • Collaboration and Cross-Functional Engagement
  • Work closely with product managers, developers, and analytics teams to define data needs and ensure alignment with business objectives
What we offer:
  • A convivial and flexible working environment, with our telecommuting culture integrated into the company's organization
  • A friendly and small-sized team that you can find in our offices near Lille or in Paris
  • Social gatherings and company events organized throughout the year
  • Sparteo is experiencing significant growth both in terms of business and workforce, especially internationally
  • Additional benefits include an advantageous compensation system with non-taxable and non-mandatory overtime hours, as well as a Swile restaurant ticket card
  • Fulltime

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location:
Germany
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver to SDKs and connectors
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location:
United Kingdom
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Serve as a core contributor, owning and maintaining critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites
  • Fulltime

Data Engineer

The Finance Data Engineering team focuses on empowering data-informed decisions ...
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Software Engineer or Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms
  • Experience with Databricks, Spark, Hive, Airflow, and other streaming technologies to process high volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies.
Job Responsibility:
  • Partner with Product Managers, Engineering, analytics, and business teams to review and gather the data/reporting/analytics requirements and build trusted and scalable data models, data extraction processes, and data applications to help answer complex questions.
  • Design and implement data pipelines to ETL data from multiple sources into a central data warehouse.
  • Improve data quality by using and improving internal tools/frameworks to automatically detect DQ issues.
  • Develop and implement data governance procedures to ensure data security, privacy, and compliance.
  • Implement new technologies to improve data processing and analysis.
  • Coach junior data engineers to improve their skills.
What we offer:
  • Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more.

Senior Software Engineer - Java Full Stack - Futures Engineering

As a Developer, you will be enhancing and maintaining an enterprise Cleared Deri...
Location:
Chicago, United States
Salary:
185000.00 - 215000.00 USD / Year
Clear Street
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience in back-end development with Java
  • 3+ years of experience within a financial institution, preferably in FCM (Futures Commission Merchant) or Broker-Dealer environments
  • Ability to work under pressure and meet deadlines
  • Experience building microservices
  • Strong understanding of design patterns, multithreading, and performance optimization
  • Strong problem-solving skills and ability to debug complex systems
  • Hands-on experience with Apache Kafka for event streaming and messaging
  • Proficiency in MongoDB or AWS DocumentDB for NoSQL database design and querying
  • Familiarity with Apache Solr for search and indexing, Apache ZooKeeper for distributed system coordination, and HashiCorp Vault for secrets management
  • Experience with Kubernetes for container orchestration and deployment
Job Responsibility:
  • Working in a project team alongside other developers to architect, develop, and optimize server-side applications, RESTful APIs, and microservices using Java
  • Implement event-driven architectures with Apache Kafka for real-time data processing
  • Contribute to front-end development using ReactJS, focusing on integrating UI components with back-end services
  • Optimize application performance, security, and reliability
  • Deploy and manage applications in Kubernetes clusters, ensuring high availability and scalability
  • Provide technical support for applications
  • Collaborate with cross-functional teams across the organization to architect solutions and deliver robust features
  • Participate in code reviews, unit testing, and CI/CD pipeline maintenance
What we offer:
  • competitive compensation packages
  • company equity
  • 401k matching
  • gender neutral parental leave
  • full medical, dental and vision insurance
  • lunch stipends
  • fully stocked kitchens
  • happy hours
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Ecosystem Data Engin...
Location:
United States (San Francisco, Seattle, or Austin)
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow, and other streaming technologies to process high volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Find ways to make our data pipelines more efficient
  • Come up with ideas to help drive self-serve data engineering within the company
  • Build microservices, architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering tea...
Location:
Bengaluru, India
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 7+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms
  • Experience with Databricks, Spark, Hive, Airflow, and other streaming technologies to process high volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Find ways to make our data pipelines more efficient
  • Come up with ideas to help drive self-serve data engineering within the company
  • Apply your strong technical experience building highly reliable services to manage and orchestrate a multi-petabyte-scale data lake
  • Take vague requirements and transform them into solid solutions
  • Solve challenging problems, where creativity is as crucial as your ability to write code and test cases
What we offer:
  • Health and wellbeing resources
  • Paid volunteer days
  • Fulltime