Streaming Data Platform Engineer

Spyrosoft

Location:
Poland, Wroclaw

Contract Type:
B2B

Salary:

150.00 - 190.00 PLN / Hour

Job Description:

Our customer is a leading German producer of customized solutions for the self-supply of solar-powered electricity, including photovoltaics, energy storage systems, and cloud technology that helps individuals become energy independent. Please note that this position involves occasional on-call duty to resolve potential customer-critical incidents.

Job Responsibility:

  • Configure and operate our time-series database infrastructure to store and process telemetry data
  • Continuously develop and optimize our stream-processing pipelines (a minimal sketch of this kind of pipeline step follows this list)
  • Continuously develop and optimize our Python- and Java-based applications that sit close to the TSDB
  • Automate build and deployment processes in hybrid environments (e.g., public cloud and self-hosted)
  • Work closely with development teams and architects to enhance performance and features
  • Python development to adapt telemetry database components, API design for scalable infrastructure, and API development for interfaces from software development projects
  • Manage, scale, and handle access management for telemetry databases
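
For illustration only, a minimal Python sketch of the kind of pipeline step these responsibilities describe: consuming telemetry events from Kafka and writing them to InfluxDB. The topic, bucket, credentials, and message shape are hypothetical (not taken from this posting), and the sketch assumes the confluent-kafka and influxdb-client packages.

import json

from confluent_kafka import Consumer
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Hypothetical connection settings -- illustrative values only.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "telemetry-writer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["device-telemetry"])  # hypothetical topic name

influx = InfluxDBClient(url="http://localhost:8086", token="TOKEN", org="my-org")
write_api = influx.write_api(write_options=SYNCHRONOUS)

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        # Assumed message shape: {"device": "...", "power_w": 123.4}
        event = json.loads(msg.value())
        point = (
            Point("telemetry")
            .tag("device", event["device"])
            .field("power_w", float(event["power_w"]))
        )
        write_api.write(bucket="telemetry", record=point)
finally:
    consumer.close()
    influx.close()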

Requirements:

  • At least 3 years of experience working with InfluxDB
  • At least 3 years of experience with programming languages such as Python and Java
  • Familiarity with Ansible, Terraform or Terragrunt
  • Knowledge of Apache Kafka or Confluent Cloud
  • Experience with containers in scalable environments, using technologies such as Docker, Kubernetes, ArgoCD, and AKS
  • Experience with source code management and CI/CD systems such as GitLab CI
  • Knowledge of cloud environments and services such as Azure

Additional Information:

Job Posted:
January 29, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Similar Jobs for Streaming Data Platform Engineer

Senior Back End Engineer for Streaming Data Platform

Do you want to build a high-quality data platform that will innovate financial m...
Salary:
Not provided
KOR Financial
Expiration Date:
Until further notice
Requirements:
  • A minimum of 8 years of experience as a Back End Engineer
  • Experience with Java and Spring Boot Framework
  • Experience with building and running applications on public cloud vendors like AWS
  • Working experience with Kafka, Databricks, and streaming data solutions
  • Experience profiling, debugging, and performance tuning complex distributed systems
  • A firm reliance on unit testing and mocking frameworks with a TDD (Test Driven Development) mindset
  • Knowledge of OOP principles and modern development practices
Job Responsibility:
  • Designing and implementing the streaming data platform engine and SDK
  • Implementing new features for our range of web and streaming applications and data reporting capabilities
  • Be an active voice in the platform's build-out with regard to technical choices and implementations
  • Working closely with the broader team to embrace new challenges and adapt requirements as we continue to grow and adjust priorities
  • Pair programming with a growing team of Back-end, Data, and Front-end Engineers
What we offer:
  • Culture of trust, empowerment, and constructive feedback
  • Competitive salary, great IT equipment, and expense allowance
  • Flexible working times
  • A span of control that matches your ambitions and skills
  • Commitment to a genuine, balanced relationship

Principal Data Engineer

PointClickCare is searching for a Principal Data Engineer who will contribute to...
Location:
United States
Salary:
183200.00 - 203500.00 USD / Year
PointClickCare
Expiration Date:
Until further notice
Requirements:
  • Principal Data Engineer with at least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems
  • Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor
  • Deep expertise in streaming and real-time data technologies, including frameworks such as Apache Kafka, Flink, and Spark Streaming
  • Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines
  • Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads
  • Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations)
  • Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools
  • Solid foundation in data governance and performance optimization, ensuring reliability and scalability across batch and streaming environments
  • Experience with Lakehouse architectures and related technologies, including Databricks, Azure ADLS Gen2, and Apache Hudi
  • Strong collaboration and communication skills, with the ability to influence stakeholders and evangelize modern data practices within your team and across the organization
Job Responsibility:
  • Lead and guide the design and implementation of scalable streaming data pipelines
  • Engineer and optimize real-time data solutions using frameworks like Apache Kafka, Flink, Spark Streaming
  • Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset
  • Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies
  • Drive adoption of best practices in data governance, observability, and performance tuning for streaming workloads
  • Embed data quality in processing pipelines by defining schema contracts, implementing transformation tests and data assertions, enforcing backward-compatible schema evolution, and automating checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment (a minimal sketch of such a check follows this list)
  • Establish robust observability for data pipelines by implementing metrics, logging, and distributed tracing for streaming jobs, defining SLAs and SLOs for latency and throughput, and integrating alerting and dashboards to enable proactive monitoring and rapid incident response
  • Foster a culture of quality through peer reviews, providing constructive feedback and seeking input on your own work
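
For illustration only, a minimal plain-Python sketch of the kind of automated freshness and completeness check described above. The thresholds are hypothetical (not taken from this posting); a production pipeline would more likely express these checks in a framework such as dbt or Great Expectations, both named in the requirements.

from datetime import datetime, timedelta, timezone

# Hypothetical thresholds -- illustrative values only.
FRESHNESS_WINDOW = timedelta(minutes=15)
MIN_EXPECTED_ROWS = 1_000

def check_freshness(latest_event_time: datetime) -> None:
    # Fail if the newest event is older than the allowed freshness window.
    age = datetime.now(timezone.utc) - latest_event_time
    if age > FRESHNESS_WINDOW:
        raise AssertionError(f"stale data: newest event is {age} old")

def check_completeness(row_count: int) -> None:
    # Fail if the batch carries fewer rows than the expected minimum.
    if row_count < MIN_EXPECTED_ROWS:
        raise AssertionError(f"incomplete batch: {row_count} < {MIN_EXPECTED_ROWS}")

# Gate a deployment on both assertions before publishing the batch.
check_freshness(datetime.now(timezone.utc) - timedelta(minutes=5))
check_completeness(12_345)
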
What we offer:
  • Benefits starting from Day 1!
  • Retirement Plan Matching
  • Flexible Paid Time Off
  • Wellness Support Programs and Resources
  • Parental & Caregiver Leaves
  • Fertility & Adoption Support
  • Continuous Development Support Program
  • Employee Assistance Program
  • Allyship and Inclusion Communities
  • Employee Recognition … and more!

Data Platform Engineer

Data Platform Engineers at Adyen build the foundational layer of tooling and pro...
Location:
Netherlands, Amsterdam
Salary:
Not provided
Adyen
Expiration Date:
Until further notice
Requirements:
  • Fluency in Python
  • Experience developing and maintaining distributed data and compute systems such as Spark, Trino, Druid, etc.
  • Experience developing and maintaining DevOps pipelines and development ecosystems
  • Experience developing and maintaining real-time and batch data pipelines (via Kafka, Spark Streaming)
  • Experience with Kubernetes ecosystem (k8s, docker), and/or Hadoop ecosystems (Hive, Yarn, HDFS, Kerberos)
  • Team player with strong communication skills
  • Ability to work closely with diverse stakeholders
Job Responsibility:
  • Develop and maintain scalable, high-performance big data platforms
  • Work with distributed systems in all shapes and flavors (databases, filesystems, compute, etc.)
  • Identify opportunities to improve continuous release and deployment environments
  • Build or deploy tools to enhance data discoverability through the collection and presentation of metadata
  • Introduce and extend tools to enhance the quality of our data, platform-wide
  • Explore and introduce technologies and practices to reduce the time to insight for analysts and data scientists
  • Develop streaming processing applications and frameworks
  • Build the foundational layer of tooling and processes for on-premise Big Data Platforms
  • Collaborate with data engineers and ML scientists and engineers to build and roll-out tools
  • Develop and operate multiple big data platforms

Senior Principal Data Platform Software Engineer

We’re looking for a Sr Principal Data Platform Software Engineer (P70) to be a k...
Salary:
239400.00 - 312550.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 15+ years in Data Engineering, Software Engineering, or related roles, with substantial exposure to big data ecosystems
  • Demonstrated experience building and operating data platforms or large‑scale data services in production
  • Proven track record of building services from the ground up (requirements → design → implementation → deployment → ongoing ownership)
  • Hands‑on experience with AWS, GCP (e.g., compute, storage, data, and streaming services) and cloud‑native architectures
  • Practical experience with big data technologies, such as Databricks, Apache Spark, AWS EMR, Apache Flink, or StarRocks
  • Strong programming skills in one or more of: Kotlin, Scala, Java, Python
  • Experience leading cross‑team technical initiatives and influencing senior stakeholders
  • Experience mentoring Staff/Principal engineers and lifting the technical bar for a team or org
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience
Job Responsibility:
  • Design, develop, and own delivery of high-quality big data and analytical platform solutions that meet Atlassian’s need to support millions of users with optimal cost, minimal latency, and maximum reliability
  • Improve and operate large‑scale distributed data systems in the cloud (primarily AWS, with increasing integration with GCP and Kubernetes‑based microservices)
  • Drive the evolution of our high-performance analytical databases and their integrations with products, cloud infrastructures (AWS and GCP), and isolated cloud environments
  • Help define and uplift engineering and operational standards for petabyte scale data platforms, with sub‑second analytic queries and multi‑region availability (coding guidelines, code review practices, observability, incident response, SLIs/SLOs)
  • Partner across multiple product and platform teams (including Analytics, Marketplace/Ecosystem, Core Data Platform, ML Platform, Search, and Oasis/FedRAMP) to deliver company‑wide initiatives that depend on reliable, high‑quality data
  • Act as a technical mentor and multiplier, raising the bar on design quality, code quality, and operational excellence across the broader team
  • Design and implement self‑healing, resilient data platforms with strong observability, fault tolerance, and recovery characteristics
  • Own the long‑term architecture and technical direction of Atlassian’s product data platform with projects that are directly tied to Atlassian’s company-level OKRs
  • Be accountable for the reliability, cost efficiency, and strategic direction of Atlassian’s product analytical data platform
  • Partner with executives and influence senior leaders to align engineering efforts with Atlassian’s long-term business objectives
What we offer:
  • health and wellbeing resources
  • paid volunteer days

Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...
Location:
United States, Chicago
Salary:
130000.00 USD / Year
AKUNA CAPITAL
Expiration Date:
Until further notice
Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including quickly and accurately responding to and/or solving math and coding problems, is an essential function of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure data is processed and published in accordance with well-defined Service Level Agreements (SLAs) pertaining to data quality, data availability, and data correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits

Senior Backend Engineer - Data Ingestion (ClickPipes Platform)

The ClickPipes Platform plays a critical role in driving the growth of our compa...
Location:
United States
Salary:
133450.00 - 197200.00 USD / Year
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 5+ years of relevant software development industry experience building data-intensive software solutions
  • Strong knowledge of Golang and experience with its ecosystem
  • Experience with distributed systems and microservices architecture
  • The ability to design and build robust ETL data pipelines that can handle large volumes of data reliably and efficiently
  • Understanding of data replication methodologies such as CDC (change data capture)
  • Good knowledge of cloud-native architecture and practical experience with at least one major cloud service provider
  • You have excellent communication skills and the ability to work well within a team and across engineering teams
  • You are a strong problem solver and have solid production debugging skills
Job Responsibility:
  • Develop and enhance integrations with various data sources including streaming platforms, databases, data lakes, and object stores
  • Continuously improve our systems based on operational metrics, customer feedback, and evolving business requirements
  • Drive technical discussions and contribute to architectural decisions that impact our platform's scalability and resilience
  • Participate in on-call rotations to ensure system reliability and respond to production incidents
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Data Engineer, Enterprise Data, Analytics and Innovation

Are you passionate about building robust data infrastructure and enabling innova...
Location:
United States
Salary:
110000.00 - 125000.00 USD / Year
Vaniam Group
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience in data engineering, ETL, or related roles
  • Strong proficiency in Python and SQL for data engineering
  • Hands-on experience building and maintaining pipelines in a lakehouse or modern data platform
  • Practical understanding of Medallion architectures and layered data design
  • Familiarity with modern data stack tools, including:
      • Spark or PySpark
      • Workflow orchestration (Airflow, dbt, or similar)
      • Testing and observability frameworks
      • Containers (Docker) and Git-based version control
  • Excellent communication skills, problem-solving mindset, and a collaborative approach
Job Responsibility:
  • Design, build, and operate reliable ETL and ELT pipelines in Python and SQL
  • Manage ingestion into Bronze, standardization and quality in Silver, and curated serving in Gold layers of our Medallion architecture (a minimal sketch of this layering follows this list)
  • Maintain ingestion from transactional MySQL systems into Vaniam Core to keep production data flows seamless
  • Implement observability, data quality checks, and lineage tracking to ensure trust in all downstream datasets
  • Develop schemas, tables, and views optimized for analytics, APIs, and product use cases
  • Apply and enforce best practices for security, privacy, compliance, and access control, ensuring data integrity across sensitive healthcare domains
  • Maintain clear and consistent documentation for datasets, pipelines, and operating procedures
  • Lead the integration of third-party datasets, client-provided sources, and new product-generated data into Vaniam Core
  • Partner with product and innovation teams to build repeatable processes for onboarding new data streams
  • Ensure harmonization, normalization, and governance across varied data types (scientific, engagement, operational)
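
For illustration only, a minimal PySpark sketch of the Bronze-to-Silver standardization step in a Medallion architecture, as described above. The table paths, columns, and use of Delta Lake are hypothetical (not taken from this posting).

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw ingested records, stored as-is (hypothetical Delta Lake path).
bronze = spark.read.format("delta").load("/lake/bronze/events")

# Silver: deduplicated, type-normalized, quality-filtered records.
silver = (
    bronze
    .dropDuplicates(["event_id"])                         # drop replayed events
    .withColumn("event_ts", F.to_timestamp("event_ts"))   # normalize timestamps
    .filter(F.col("event_id").isNotNull())                # basic quality gate
)

silver.write.format("delta").mode("append").save("/lake/silver/events")
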
What we offer:
  • 100% remote environment with opportunities for local meet-ups
  • Positive, diverse, and supportive culture
  • Passionate about serving clients focused on Cancer and Blood diseases
  • Investment in you with opportunities for professional growth and personal development through Vaniam Group University
  • Health benefits – medical, dental, vision
  • Generous parental leave benefit
  • Focused on your financial future with a 401(k) Plan and company match
  • Work-Life Balance and Flexibility
  • Flexible Time Off policy for rest and relaxation
  • Volunteer Time Off for community involvement

Staff Data Engineer

As a Staff Data Engineer, you will be leading the architecture, design and devel...
Location:
United States; Canada, Remote
Salary:
Not provided
1Password
Expiration Date:
Until further notice
Requirements:
  • Minimum of 8 years of professional software engineering experience
  • Minimum of 7 years of technical engineering experience building batch and streaming data processing applications, coding in languages such as Java, Scala, or Python
  • In-depth, hands-on experience with extensible data modeling and query optimization, working in Java, Scala, Python, and related technologies
  • Experience in data modeling across external facing product insights and business processes, such as revenue/sales operations, finance, and marketing
  • Experience with Big Data query engines such as Hive, Presto, Trino, Spark
  • Experience with data stores such as Redshift, MySQL, Postgres, Snowflake, etc.
  • Experience using Realtime technologies like Apache Kafka, Kinesis, Flink, etc.
  • Experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP with extensive use of datastores like RDBMS, key-value stores, etc.
  • Experience leveraging distributed systems at scale, with systems knowledge spanning infrastructure from bare-metal hosts to containers to networking
Job Responsibility:
  • Design, develop, and automate large-scale, high-performance batch and streaming data processing systems to drive business growth and enhance product experience
  • Build data engineering strategy that supports a rapidly growing tech company and aligns with the priorities across our product strategy and internal business organizations’ desire to leverage data for more competitive advantages
  • Build scalable data pipelines using best-in-class software engineering practices
  • Develop optimal data models for storage and retrieval, meeting critical product and business requirements
  • Establish and execute short and long-term architectural roadmaps in collaboration with Analytics, Data Platform, Business Systems, Engineering, Privacy and Security
  • Lead efforts on continuous improvement to the efficiency and flexibility of the data, platform, and services
  • Mentor Analytics & Data Engineers on best practices, standards and forward-looking approaches on building robust, extensible and reusable data solutions
  • Influence and evangelize a high standard of code quality, system reliability, and performance
What we offer:
  • Maternity and parental leave top-up programs
  • Generous PTO policy
  • Four company-wide wellness days
  • Company equity for all full-time employees
  • Retirement matching program
  • Free 1Password account
  • Paid volunteer days
  • Employee-led inclusion and belonging programs and ERGs
  • Peer-to-peer recognition through Bonusly