Software Engineer, Analytics

Aiven Deutschland GmbH

Location:
Helsinki, Finland

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We’re a global team of over 400 people, working together to push the boundaries of open-source technology and multi-cloud solutions. Our vision is to help developers, builders, and creators bring their ideas to life with speed and simplicity, by providing a cloud data platform that makes open-source databases, search, streaming, and application infrastructure easily accessible to everyone.

The Role: We are looking for a Software Engineer to work on our cloud operations platform, turning the best open-source technologies into frustration-free cloud services. You’ll be part of the team focused on developing the PostgreSQL services and improving their scalability and reliability. Our philosophy is to automate everything and to avoid repetitive manual work in everything we do. Our backend systems are mostly implemented in Python, with bits of Java, Go, and C.

Job Responsibility:

  • Write high-quality, maintainable code and release quality features
  • Participate in technical discussions and perform PR reviews
  • Contribute to technical planning and backlog management
  • Help investigate and resolve customer issues
  • Mentor and empower other engineers
  • Automate processes to eliminate repetitive manual work
  • Develop and improve the scalability and reliability of the PostgreSQL services

Requirements:

  • Solid development skills in Python
  • Experience working with one of these databases: PostgreSQL, MySQL, MariaDB, SQL Server, OracleDB
  • Experience with backup solutions and backup strategies
  • Advanced understanding of Linux OS
  • Experience with automated testing
  • Distributed systems knowledge
  • Experience with performance improvements, bug fixes, and security vulnerability resolution
  • Fluency in English, verbal and written

Nice to have:

  • Good understanding of security (software, networking)
  • Experience in infrastructure as code
  • Experience in cloud DBaaS production environments
  • Experience in production software environments

What we offer:
  • Participate in Aiven’s equity plan
  • Balance work and life with our hybrid work policy
  • Choose the equipment you need to set yourself up for success
  • Use your Professional Development Plan budget for learning opportunities
  • Receive holistic wellbeing support through our global Employee Assistance Program
  • Inquire about our Global Time Off Commitment (Parental and Sick Leave, as well as Personal Time)
  • Enjoy country-specific benefits for our global cast

Additional Information:

Job Posted:
May 14, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Software Engineer, Analytics

Sr. Staff Software Engineer - Advanced Analytics Platform

At DISQO, we’re redefining how companies turn data into decisions. Our mission i...
Location:
Glendale, Los Angeles, United States
Salary:
200000.00 - 240000.00 USD / Year
DISQO
Expiration Date:
Until further notice
Requirements:
  • 12+ years of professional software engineering experience
  • 5+ years architecting or building high-performance data systems or analytics platforms
  • 3+ years of production Rust experience
  • Deep expertise in Rust and strong experience in Java
  • Proven track record building large-scale data analytics or OLAP systems from the ground up
  • Deep understanding of columnar data engines, vectorized execution, and query/dataframe optimization
  • Hands-on experience with performance engineering, profiling, and hardware-aware optimization
  • Strong expertise with AWS - designing, deploying, and optimizing large-scale data and compute systems in the cloud
  • A systems-thinking mindset
  • Thrives in a fast-moving, startup environment
Job Responsibility:
  • Architect and deliver a high-performance Advanced Analytics Engine
  • Design and build an Agentic AI system that leverages this Advanced Analytics Engine
  • Partner with product, engineering and data teams to power agentic AI analytics systems
  • Profile, benchmark, and optimize Rust components
  • Leverage AWS cloud services to architect scalable, reliable, and cost-efficient analytics infrastructure
  • Shape the evolution of DISQO’s broader data platform and its integration across our product ecosystem
  • Mentor and guide engineers
  • Contribute to open-source or internal frameworks that advance analytical systems and distributed computation
What we offer:
  • 100% covered Medical/Dental/Vision for employee
  • Equity
  • 401K
  • Generous PTO policy
  • Flexible workplace policy
  • Team offsites, social events & happy hours
  • Life Insurance
  • Health FSA
  • Commuter FSA (for hybrid employees)
  • Catered lunch and fully stocked kitchen
Employment Type: Full-time

Software Engineer - Platform Software, Device Drivers, System Bring-Up

Our team is responsible for driving technology leadership in the Juniper routing...
Location:
Bangalore, India
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice
Requirements:
  • BTech / MTech in CS/CE or related field with proven experience of 10+ years
  • Good understanding of hardware-level details for Optics, PCIe, SPI, I2C, Retimers, FPGA, CPLD, MDIO, Flash Driver
  • Proficiency with device drivers, system bring-up, FreeBSD/Linux internals
  • Understanding of Ethernet, OTN, SONET, etc. technologies
  • Strong technical, analytical, and problem-solving skills
  • Strong in C, C++ programming, OO analysis & design, data structures, and system debugging skills
  • Prior software development experience on networking products
Job Responsibility:
  • Board bring-up, and platform software for 10G, 40G, 100G, 400G, and 800G interfaces (e.g. interface drivers)
  • Platform infrastructure-related software like Routing Engine Redundancy/High Availability, Chassis/line card, fabric, Optics, etc.
  • Timing software in PTP, SYNCE & Grand Master
  • You will be responsible for these product developments in the platform area in either JunOS or Junos evolved software architecture
  • In addition to the development activity, you are required to work closely with system and solution test teams to ensure products/solutions delivered are of the highest quality
  • You will be required to work closely with Juniper Technical Assistance Team for providing engineering assistance in supporting critical customer escalations for customer deployments
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Full-time

Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...
Location:
Chicago, United States
Salary:
130000.00 USD / Year
AKUNA CAPITAL
Expiration Date:
Until further notice
Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including the ability to quickly and accurately respond and/or solve math and coding problems are essential functions of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure data is processed and published in accordance with well-defined Service Level Agreements (SLAs) for data quality, availability, and correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits
Employment Type: Full-time

Senior Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
Menlo Park, United States
Salary:
146000.00 - 198000.00 USD / Year
Robinhood
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer:
  • Market competitive and pay equity-focused compensation structure
  • 100% paid health insurance for employees with 90% coverage for dependents
  • Annual lifestyle wallet for personal wellness, learning and development, and more
  • Lifetime maximum benefit for family forming and fertility benefits
  • Dedicated mental health support for employees and eligible dependents
  • Generous time away including company holidays, paid time off, sick time, parental leave, and more
  • Lively office environment with catered meals, fully stocked kitchens, and geo-specific commuter benefits
  • Bonus opportunities
  • Equity
Employment Type: Full-time

Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
Toronto, Canada
Salary:
124000.00 - 145000.00 CAD / Year
Robinhood
Expiration Date:
Until further notice
Requirements:
  • 3+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer:
  • Bonus opportunities
  • Equity
  • Benefits
Employment Type: Full-time

Software Engineer - Growth & Insights Team

Intermediate Software Engineer role on the Growth and Insights Engineering team ...
Location:
Atlanta, United States
Salary:
Not provided
PagerDuty
Expiration Date:
Until further notice
Requirements:
  • 2+ years of experience designing, building, and operating large systems with scalability, availability, and performance requirements
  • Development experience working on customer-facing and web-based systems
  • Experience with monitoring, observability and logging platforms (e.g. DataDog, New Relic, SumoLogic, Splunk, Segment)
  • Proficiency in at least one programming language (e.g. Python, Java, Ruby, Elixir etc.)
  • Have operational experience with modern data stack patterns & tools (e.g. ELT, Kafka, applying software engineering principles to data problems, etc.)
  • Strong verbal and written communication skills
Job Responsibility:
  • Designing, coding, testing and shipping backend applications or micro-services, APIs or front-end interfaces
  • Build and develop the core infrastructure and tooling
  • Help drive and define team standards by participating in code reviews
  • Collaborate with product and UX to deliver the highest quality customer experiences
  • Lead post incident reviews and drive systematic improvements to prevent recurring issues
  • Collaborate with other engineering teams globally to define and implement development standards
  • Champion observability and monitoring best practices across the organization
  • Participate in a 24/7 on-call rotation
What we offer:
  • Comprehensive benefits package from day one
  • Flexible work arrangements
  • Company equity
  • ESPP (Employee Stock Purchase Program)
  • Retirement or pension plan
  • Generous paid vacation time
  • Paid holidays and sick leave
  • Dutonian Wellness Days & HibernationDuty - companywide paid days off in addition to PTO
  • Paid parental leave: 22 weeks for pregnant parent, 12 weeks for non-pregnant parent
  • Paid volunteer time off: 20 hours per year
Employment Type: Full-time

Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
Location:
Hyderabad, India
Salary:
Not provided
NStarX
Expiration Date:
Until further notice
Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM
Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams
What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles
Employment Type: Full-time