Data Engineer - Java focused

Amaris Consulting

Location:
Netherlands, Veldhoven

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Are you a passionate Java developer with a strong interest in data and building robust, high-performance systems? We’re looking for a Data Engineer with deep Java expertise to design, develop, and maintain scalable ETL pipelines and data solutions in a fast-paced, innovative environment. This is not just a data role — it’s for a software engineer who loves data. If you thrive on turning complex data challenges into elegant, maintainable code using Java 11, Spring Boot, and cloud-native tools, this is your next big step. You’ll be part of a collaborative, agile team shaping the future of data infrastructure — with real impact on business decisions, product innovation, and system scalability.

Job Responsibility:

  • Design, develop, and deploy high-performance ETL pipelines using Java 11 and modern frameworks (see the illustrative sketch after this list)
  • Build and maintain RESTful microservices using Spring Boot, ensuring scalability and reliability
  • Write clean, efficient, and well-documented SQL scripts and contribute to data modeling and schema design
  • Collaborate across teams to integrate data from diverse sources into unified, actionable systems
  • Use Gradle, Maven, Git (GitHub), and TFS to manage code, CI/CD pipelines, and version control
  • Apply design patterns and best practices to create modular, testable, and maintainable code
  • Work in cloud environments (AWS) to deploy and manage data infrastructure
  • Participate in all phases of the SDLC: analysis, coding, testing (including Mockito), debugging, and release
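
A minimal, purely illustrative sketch of the kind of component these responsibilities describe: a Spring Boot ETL step that uses JdbcTemplate (both named in this posting) to extract staged rows, apply a small transformation, and load them into a reporting table. This is not part of the posting; all class, table, and column names are hypothetical.

```java
// Illustrative sketch only, not part of the job posting.
// A minimal Spring Boot ETL step built on JdbcTemplate; the table, column,
// and class names are hypothetical.
import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderEtlService {

    private final JdbcTemplate jdbc;

    public OrderEtlService(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    /** Extracts unprocessed rows, normalizes them, and loads them into a reporting table. */
    @Transactional
    public int runDailyLoad() {
        // Extract: read unprocessed rows from the (hypothetical) staging table.
        List<Map<String, Object>> rows = jdbc.queryForList(
                "SELECT id, customer_email FROM raw_orders WHERE processed = false");

        for (Map<String, Object> row : rows) {
            // Transform: normalize the email address.
            String email = ((String) row.get("customer_email")).trim().toLowerCase();

            // Load: write the cleaned row to the reporting table, then mark the
            // source row as processed so the next run skips it.
            jdbc.update("INSERT INTO reporting_orders (source_id, customer_email) VALUES (?, ?)",
                    row.get("id"), email);
            jdbc.update("UPDATE raw_orders SET processed = true WHERE id = ?", row.get("id"));
        }
        return rows.size();
    }
}
```

In a real pipeline a step like this would typically be scheduled (for example with Spring's @Scheduled or an external orchestrator) and exposed or monitored through the RESTful microservices the posting mentions.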

Requirements:

  • Strong hands-on experience with Java 11, Spring Boot, RESTful APIs, and JdbcTemplate
  • Proven experience in ETL development and data pipeline engineering using Java
  • Solid understanding of SQL scripting, database design, and data modeling
  • Experience with IDEs like IntelliJ IDEA or Eclipse, and build tools like Maven and Gradle
  • Familiarity with testing frameworks (e.g., Mockito; see the test sketch after this list) and version control (Git)
  • Exposure to cloud platforms (AWS) and containerization concepts (Docker/Kubernetes — a plus)
  • A problem-solving mindset, strong analytical skills, and a passion for clean, efficient code
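
As a companion to the sketch above, and because the requirements call out Mockito and JdbcTemplate, here is an equally hypothetical unit test that stubs JdbcTemplate so the ETL logic can be verified without a database. Again, this is an illustration under assumed names, not part of the posting.

```java
// Illustrative sketch only: a Mockito unit test for the hypothetical
// OrderEtlService above, using a stubbed JdbcTemplate (no real database).
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import java.util.Map;

import org.junit.jupiter.api.Test;
import org.springframework.jdbc.core.JdbcTemplate;

class OrderEtlServiceTest {

    @Test
    void runDailyLoadCleansAndLoadsEachUnprocessedRow() {
        JdbcTemplate jdbc = mock(JdbcTemplate.class);

        // Stub the extract query with a single unprocessed source row.
        when(jdbc.queryForList(anyString()))
                .thenReturn(List.of(Map.of("id", 42L, "customer_email", "  Jane@Example.COM ")));

        OrderEtlService service = new OrderEtlService(jdbc);
        int loaded = service.runDailyLoad();

        assertEquals(1, loaded);
        // The row should be inserted with a normalized email and then marked as processed.
        verify(jdbc).update("INSERT INTO reporting_orders (source_id, customer_email) VALUES (?, ?)",
                42L, "jane@example.com");
        verify(jdbc).update("UPDATE raw_orders SET processed = true WHERE id = ?", 42L);
    }
}
```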

Nice to have:

  • Experience with Vertica or other analytical databases
  • Knowledge of data warehousing, streaming data, or big data tools (e.g., Apache Spark)

What we offer:

  • Performance bonuses
  • Mobility options including lease cars

Additional Information:

Job Posted:
January 29, 2026

Work Type:
Hybrid work

Similar Jobs for Data Engineer - Java focused

Senior Data Engineer

As a Senior Data Engineer in the Finance-DE team, you will have the opportunity ...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • At least 5 years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow and other streaming technologies to process incredible volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Fulltime

Lead Data Engineer

Sparteo is an independent suite of AI-powered advertising technologies built on ...
Location:
Not provided
Salary:
Not provided
Sparteo
Expiration Date:
Until further notice
Requirements:
  • Proficiency in distributed data systems
  • Proficient in clustering, various table types, and data types
  • Strong understanding of materialized views concepts
  • Skilled in designing table sorting keys
  • Solid programming skills in Python, Java, or Scala
  • Expertise in database technologies (SQL, NoSQL)
  • Comfortable using AI-assisted development tools (e.g., GitHub Copilot, Tabnine)
  • Proven experience leading data teams in fast-paced environments
  • Ability to mentor junior engineers and foster a culture of growth and collaboration
  • Data-driven decision-making abilities aligned with Sparteo's focus on results and improvement
Job Responsibility:
  • Data Infrastructure Design and Optimization
  • Lead the design, implementation, and optimization of data architectures to support massive data pipelines
  • Ensure the scalability, security, and performance of the data infrastructure
  • Collaborate with software and data scientists to integrate AI-driven models into data workflows
  • Leadership and Team Management
  • Manage and mentor a team of 2 data engineers, fostering a culture of continuous improvement
  • Oversee project execution and delegate responsibilities within the team
  • Guide technical decisions and promote best practices in data engineering
  • Collaboration and Cross-Functional Engagement
  • Work closely with product managers, developers, and analytics teams to define data needs and ensure alignment with business objectives
What we offer:
  • A convivial and flexible working environment, with our telecommuting culture integrated into the company's organization
  • A friendly and small-sized team that you can find in our offices near Lille or in Paris
  • Social gatherings and company events organized throughout the year
  • Sparteo is experiencing significant growth both in terms of business and workforce, especially internationally
  • Additional benefits include an advantageous compensation system with non-taxable and non-mandatory overtime hours, as well as a Swile restaurant ticket card
  • Fulltime

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location:
Germany
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver to SDKs and connectors
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location:
United Kingdom
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Serve as a core contributor, owning and maintaining critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites
  • Fulltime

Data Engineer

The Finance Data Engineering team focuses on empowering data-informed decisions ...
Location:
Not provided
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • At least 5 years of professional experience as a Software Engineer or Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms
  • Experience with Databricks, Spark, Hive, Airflow and other streaming technologies to process incredible volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies.
Job Responsibility:
  • Partner with Product Managers, Engineering, analytics, and business teams to review and gather the data/reporting/analytics requirements and build trusted and scalable data models, data extraction processes, and data applications to help answer complex questions.
  • Design and implement data pipelines to ETL data from multiple sources into a central data warehouse.
  • Improve data quality by using and improving internal tools/frameworks to automatically detect DQ issues.
  • Develop and implement data governance procedures to ensure data security, privacy, and compliance.
  • Implement new technologies to improve data processing and analysis.
  • Coach junior data engineers to improve their skills.
What we offer:
  • Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more.

Senior Software Engineer - Java Full Stack - Futures Engineering

As a Developer, you will be enhancing and maintaining an enterprise Cleared Deri...
Location:
United States, Chicago
Salary:
185000.00 - 215000.00 USD / Year
Clear Street
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience in back-end development with Java
  • 3+ years of experience within a financial institution, preferably in FCM (Futures Commission Merchant) or Broker-Dealer environments
  • Ability to work under pressure and meet deadlines
  • Experience building microservices
  • Strong understanding of design patterns, multithreading, and performance optimization
  • Strong problem-solving skills and ability to debug complex systems
  • Hands-on experience with Apache Kafka for event streaming and messaging
  • Proficiency in MongoDB or AWS DocumentDB for NoSQL database design and querying
  • Familiarity with Apache Solr for search and indexing, Apache ZooKeeper for distributed system coordination, and HashiCorp Vault for secrets management
  • Experience with Kubernetes for container orchestration and deployment
Job Responsibility:
  • Work in a project team alongside other developers to architect, develop, and optimize server-side applications, RESTful APIs, and microservices using Java
  • Implement event-driven architectures with Apache Kafka for real-time data processing
  • Contribute to front-end development using ReactJS, focusing on integrating UI components with back-end services
  • Optimize application performance, security, and reliability
  • Deploy and manage applications in Kubernetes clusters, ensuring high availability and scalability
  • Provide technical support for the application
  • Collaborate with cross-functional teams across the organization to architect solutions and deliver robust features
  • Participate in code reviews, unit testing, and CI/CD pipeline maintenance
What we offer:
  • competitive compensation packages
  • company equity
  • 401k matching
  • gender neutral parental leave
  • full medical, dental and vision insurance
  • lunch stipends
  • fully stocked kitchens
  • happy hours
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Ecosystem Data Engin...
Location:
United States, San Francisco; Seattle; Austin
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • At least 5 years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow and other streaming technologies to process incredible volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Find ways to make our data pipelines more efficient
  • Come up with ideas to help instigate self-serve data engineering within the company
  • Build micro-services, architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering tea...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • At least 7 years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms
  • Experience with Databricks, Spark, Hive, Airflow and other streaming technologies to process incredible volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Find ways to make our data pipelines more efficient
  • Come up with ideas to help instigate self-serve data engineering within the company
  • Apply your strong technical experience building highly reliable services to manage and orchestrate a multi-petabyte-scale data lake
  • Take vague requirements and transform them into solid solutions
  • Solve challenging problems, where creativity is as crucial as your ability to write code and test cases
What we offer:
  • Health and wellbeing resources
  • Paid volunteer days
  • Fulltime