
Senior Software Engineer - Transactional Data Platform


Atlassian

Location:
Australia, Sydney

Contract Type:
Not provided

Salary:
Not provided

Job Description:

As a Senior Software Engineer, you will play a critical role in designing, building, and optimizing high-performance, scalable, and resilient backend storage solutions on AWS cloud infrastructure. You will be responsible for developing distributed storage systems, APIs, and backend services that power mission-critical applications, ensuring low-latency, high-throughput, and fault-tolerant data storage. Your work will directly impact system reliability, scalability, and cost efficiency.

You will collaborate closely with principal engineers, architects, SREs, and product teams to define technical roadmaps, improve storage efficiency, and optimize access patterns. You will drive performance tuning, data modeling, caching strategies, and cost optimization across AWS storage services like S3, DynamoDB, EBS, EFS, FSx, and Glacier. Additionally, you will contribute to infrastructure automation, security best practices, and monitoring strategies using tools like Terraform, CloudWatch, Prometheus, and OpenTelemetry.

In this role, you will also be responsible for troubleshooting and resolving production incidents related to data integrity, latency spikes, and storage failures, ensuring high availability and disaster recovery preparedness. You will mentor junior engineers, participate in design reviews and architectural discussions, and advocate for engineering best practices such as CI/CD automation, infrastructure as code, and observability-driven development. Your contributions will directly impact the organization's ability to scale its storage infrastructure efficiently while maintaining security, reliability, and compliance with industry standards.

Job Responsibility:

  • Designing, building, and optimizing high-performance, scalable, and resilient backend storage solutions on AWS cloud infrastructure
  • Developing distributed storage systems, APIs, and backend services that power mission-critical applications, ensuring low-latency, high-throughput, and fault-tolerant data storage
  • Collaborating closely with principal engineers, architects, SREs, and product teams to define technical roadmaps, improve storage efficiency, and optimize access patterns
  • Driving performance tuning, data modeling, caching strategies, and cost optimization across AWS storage services like S3, DynamoDB, EBS, EFS, FSx, and Glacier
  • Contributing to infrastructure automation, security best practices, and monitoring strategies using tools like Terraform, CloudWatch, Prometheus, and OpenTelemetry
  • Troubleshooting and resolving production incidents related to data integrity, latency spikes, and storage failures, ensuring high availability and disaster recovery preparedness
  • Mentoring junior engineers, participating in design reviews and architectural discussions, and advocating for engineering best practices such as CI/CD automation, infrastructure as code, and observability-driven development
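
The caching-strategy work listed above can be sketched minimally. The example below is a toy in-process read-through cache with TTL expiry; all names are illustrative assumptions, and on this stack a production setup would more likely put ElastiCache or DynamoDB DAX in front of the backing store rather than an in-process dict.

```python
import time

class ReadThroughCache:
    """Toy read-through cache with TTL expiry (illustrative only)."""

    def __init__(self, loader, ttl_seconds=60.0, clock=time.monotonic):
        self._loader = loader      # called to fetch from the backing store on a miss
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}         # key -> (value, expires_at)

    def get(self, key):
        entry = self._entries.get(key)
        now = self._clock()
        if entry is not None and now < entry[1]:
            return entry[0]        # fresh hit: no trip to the backing store
        value = self._loader(key)  # miss or stale: load and cache
        self._entries[key] = (value, now + self._ttl)
        return value

# Usage: the second read is absorbed by the cache
loads = []
def loader(key):
    loads.append(key)              # record each backing-store fetch
    return key.upper()

cache = ReadThroughCache(loader, ttl_seconds=60)
first = cache.get("item")          # miss -> one backing-store load
second = cache.get("item")         # hit -> no additional load
```

The same read-through shape applies whether the backing store is DynamoDB, S3, or an internal service; the TTL trades staleness against load on the store.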

Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical field
  • 5+ years of experience in backend software development
  • 3+ years of hands-on experience with AWS cloud services, particularly AWS storage technologies (S3, DynamoDB, EBS, EFS, FSx, or Glacier)
  • 3+ years of experience designing and developing distributed systems or high-scale backend services
  • Strong programming skills in Kotlin
  • Experience working in agile environments following DevOps and CI/CD best practices
  • Strong backend development skills:
  • Proficiency in Kotlin and Java for backend development
  • Experience building high-performance, scalable microservices and APIs
  • Strong understanding of RESTful APIs, gRPC, and event-driven architectures
  • Experience with AWS storage technologies:
  • Hands-on experience with AWS S3, DynamoDB, EBS, EFS, FSx, and Glacier
  • Knowledge of AWS IAM, KMS, and data access policies for secure storage solutions
  • Understanding of AWS networking (VPC, PrivateLink, Route 53) for optimizing storage performance
  • Distributed systems and scalability:
  • Solid understanding of distributed databases, storage consistency models, and caching mechanisms
  • Experience with sharding, partitioning, and load balancing to scale storage-heavy applications
  • Familiarity with event-driven architectures using AWS SNS, SQS, Kinesis, or Kafka
  • Performance optimization and cost efficiency:
  • Ability to profile and optimize storage performance, indexing strategies, and data retrieval latencies
  • Experience building cost-efficient storage solutions through tiering, lifecycle policies, and data deduplication
  • Knowledge of benchmarking and monitoring tools (CloudWatch, OpenTelemetry, Prometheus, Grafana)
  • Security and reliability:
  • Experience implementing data encryption at rest and in transit using AWS KMS or TLS
  • Understanding of access control mechanisms (IAM roles, STS, fine-grained permissions)
  • Experience ensuring high availability and disaster recovery using AWS backup strategies and multi-region replication
  • Hands-on infrastructure as code (IaC) and DevOps:
  • Experience using Terraform, AWS CloudFormation, or CDK to manage infrastructure
  • Familiarity with CI/CD pipelines for backend deployments using GitHub Actions, CodePipeline, or Jenkins
  • Experience with containerized deployments using Docker, Kubernetes (EKS), and serverless solutions (Lambda, Fargate)
  • Troubleshooting and production support:
  • Strong debugging skills for investigating storage failures, high-latency issues, and API bottlenecks
  • Experience using observability and tracing tools to monitor storage workloads
  • Ability to triage and resolve production incidents in large-scale backend systems
  • Collaboration and engineering best practices:
  • Strong experience with code reviews, unit testing, and API contract enforcement
  • Ability to work cross-functionally with SREs, data engineers, and infrastructure teams
  • Good documentation habits, ensuring architecture decisions and design patterns are well documented
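
The tiering and lifecycle-policy bullet above has a concrete shape in S3. The dict below follows the S3 `PutBucketLifecycleConfiguration` request shape; the rule ID, key prefix, day counts, and bucket name are illustrative assumptions, not anything specified in this posting.

```python
# Transition objects to cheaper storage classes as they age, then expire them.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "tier-then-expire-logs",        # hypothetical rule name
            "Filter": {"Prefix": "logs/"},        # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent-access tier
                {"Days": 90, "StorageClass": "GLACIER"},      # archive tier
            ],
            "Expiration": {"Days": 365},          # delete after a year
        }
    ]
}

# With boto3 this would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",
#       LifecycleConfiguration=lifecycle_configuration,
#   )
```
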

What we offer:
  • Atlassians can choose where they work – whether in an office, from home, or a combination of the two
  • Flexibility for eligible candidates to work remotely across the West US

Additional Information:

Job Posted:
April 23, 2025

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Senior Software Engineer - Transactional Data Platform

Senior Software Engineer - ClickPipes (Database Integration)

About the Team: The ClickPipes - Database Integrations team builds the platform ...
Location:
India
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 5+ years of industry experience building data-intensive software solutions
  • Proficient in Go, or experienced in systems programming with willingness to ramp up quickly in Go
  • Cloud-native experience deploying and operating services on at least one major cloud platform (AWS/GCP/Azure)
  • Practical experience with Kubernetes
  • Strong problem solver and solid production debugging skills
  • Clear communication in writing (design docs, code review) and verbally (technical discussions, customer calls, incident response)
Job Responsibility:
  • Build data-intensive systems
  • Design and develop high-throughput integrations with databases (Postgres, MySQL, MongoDB), data lakes (Iceberg, Delta Lake), and data warehouses (BigQuery, Snowflake)
  • Handle edge cases in real-world production scenarios: unconventional database setups, internals of data types, database upgrades/failovers, large transactions, etc
  • Design integration solutions to enable users to fully harness ClickHouse's performance and throughput
  • Own end-to-end reliability
  • Debug complex issues in production leveraging runtime diagnostics (e.g. pprof, parca) and observability tools (e.g. metrics, logging, tracing)
  • Build and improve infrastructure and tools to increase system reliability, reduce incident response time, and simplify/automate operations
  • Write clear documentation, both publicly and internally
  • Participate in on-call rotation
  • Drive product innovation
What we offer:
  • Flexible work environment
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - ClickPipes (Database Integration)

About the Team: The ClickPipes - Database Integrations team builds the platform ...
Location:
Germany
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 5+ years of industry experience building data-intensive software solutions
  • Proficient in Go, or experienced in systems programming with willingness to ramp up quickly in Go
  • Cloud-native experience deploying and operating services on at least one major cloud platform (AWS/GCP/Azure)
  • Practical experience with Kubernetes
  • Strong problem solver and solid production debugging skills
  • Clear communication in writing (design docs, code review) and verbally (technical discussions, customer calls, incident response)
Job Responsibility:
  • Build data-intensive systems
  • Design and develop high-throughput integrations with databases (Postgres, MySQL, MongoDB), data lakes (Iceberg, Delta Lake), and data warehouses (BigQuery, Snowflake)
  • Handle edge cases in real-world production scenarios: unconventional database setups, internals of data types, database upgrades/failovers, large transactions, etc
  • Design integration solutions to enable users to fully harness ClickHouse's performance and throughput
  • Own end-to-end reliability
  • Debug complex issues in production leveraging runtime diagnostics (e.g. pprof, parca) and observability tools (e.g. metrics, logging, tracing)
  • Build and improve infrastructure and tools to increase system reliability, reduce incident response time, and simplify/automate operations
  • Write clear documentation, both publicly and internally
  • Participate in on-call rotation
  • Drive product innovation
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Axway B2B Senior Software Engineer

Senior Software Engineer role focusing on EDI integration platforms, mapping var...
Location:
India, Noida
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • Experience with different EDI integration platforms such as IBM Sterling Integrator, Axway B2Bi, OpenText, Seeburger, etc.
  • Strong mapping experience with formats such as X12, EDIFACT, XML, IDocs, flat files, etc.
  • Knowledge of EDI transactions such as Purchase Order, Invoice, ASN, Warehouse Order and Response, and Claim and Response
  • Functional knowledge of supply chain, the automotive industry, HIPAA, and retail
  • Working knowledge of AS2, SFTP, FTP, FTPS, and HTTP/S data communication protocols, with expertise using Seeburger
  • Experience working with trading partner profile management and configurations
  • Understanding of databases and experience with SQL
  • Experience in Unix scripting
  • Excellent written and verbal communication skills
  • Ready to work in a fast-paced and dynamic environment
Job Responsibility:
  • Work as part of a team or as an individual consultant
  • Working in 24x5 shifts across EMEA/APAC/US business hours
  • Weekend on-call/standby
What we offer:
  • Inclusive and respectful work environment
  • Positions open to people with disabilities
  • Full-time

Axway B2B Senior Software Engineer

Senior Software Engineer role focusing on Axway B2B integration platforms, worki...
Location:
India, Noida
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • Experience with different EDI integration platforms such as IBM Sterling Integrator, Axway B2Bi, Axway TSIM, OpenText, Seeburger, etc.
  • Strong mapping experience with formats such as X12, EDIFACT, XML, IDocs, flat files, etc.
  • Knowledge of EDI transactions such as Purchase Order, Invoice, ASN, Warehouse Order and Response, and Claim and Response
  • Functional knowledge of supply chain, the automotive industry, HIPAA, and retail
  • Working knowledge of AS2, SFTP, FTP, FTPS, and HTTP/S data communication protocols, with expertise using Seeburger
  • Experience working with trading partner profile management and configurations
  • Experience with process flow and integration configurations
  • Understanding of databases and experience with SQL
  • Experience in Unix scripting
  • BTech (Bachelor of Technology)
Job Responsibility:
  • Supporting global customers on Axway B2B suite
  • Working in 24x5 shifts across EMEA/APAC/US business hours
  • Weekend on-call/standby
What we offer:
  • Inclusive and respectful work environment
  • Positions open to people with disabilities
  • Creative environment with initiative support
  • Full-time

Mapping (EDI) Senior Software Engineer

Sopra Steria is looking for a Mapping (EDI) Senior Software Engineer to work on ...
Location:
India, Noida
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • 3-5 years of work experience with different EDI integration platforms such as IBM Sterling Integrator, Axway B2Bi, OpenText, Seeburger, etc.
  • Strong mapping experience with formats such as X12, EDIFACT, XML, IDocs, flat files, etc.
  • Knowledge of EDI transactions such as Purchase Order, Invoice, ASN, Warehouse Order and Response, and Claim and Response
  • Functional knowledge of supply chain, the automotive industry, HIPAA, and retail
  • Working knowledge of AS2, SFTP, FTP, FTPS, and HTTP/S data communication protocols, with expertise using Seeburger
  • Experience working with trading partner profile management and configurations
  • Experience with process flow and integration configurations
  • Understanding of databases and experience with SQL
  • Experience in Unix scripting
Job Responsibility:
  • Be part of a team or work as an individual consultant
  • Work in a fast-paced and dynamic environment
  • Customer handling experience
  • Ready to work in UK/US/APAC shifts
  • Ready to work in either a development, managed services, or support environment
What we offer:
  • Inclusive and respectful work environment
  • Open to people with disabilities
  • Full-time

Senior Billing Data Engineer

The Billing Platform team at GEICO oversees the tools, infrastructure, data, rep...
Location:
United States, Palo Alto; Richardson; Chevy Chase
Salary:
100000.00 - 215000.00 USD / Year
Geico
Expiration Date:
Until further notice
Requirements:
  • 4+ years of professional, hands-on data engineering experience
  • Strong experience in architecting and designing large-scale, complex data systems
  • Proficient coding skills in languages like Python, Java, or Scala, with a focus on building high-performance, production-quality data applications
  • Experience with a wide range of data technologies, including transactional databases (SQL), data warehousing/lakehouse solutions (e.g., Apache Iceberg), and data processing frameworks (e.g., Apache Flink, Apache Spark)
  • Experience with workflow orchestration tools such as Airflow
  • Proficient in using cloud computing tools throughout the software development lifecycle, with deep expertise in DataOps, observability, and automated testing
  • Skilled in collaborating across engineering teams and other functions to build alignment, drive decision-making, and communicate transparently
Job Responsibility:
  • Oversee the high-level and low-level designs of one or more data sub-systems of the billing platform we are building
  • Be responsible and accountable for the quality, reliability, accessibility, and performance of our data solutions
  • Lead the design and development of complex data processing systems, ensuring they are scalable, maintainable, and meet high-quality standards
  • Develop robust data pipelines for transporting data from transactional databases to analytical stores, utilizing technologies such as Change Data Capture (CDC), streaming platforms, and workflow orchestration
  • Architect data models and transform raw transactional data into simplified, aggregated billing entities suitable for large-scale analytics and reporting
  • Provide technical leadership and guidelines for building next-generation agentic billing systems that leverage our rich datasets
  • Work closely with various departments, including product management, analytics, and software engineering, to ensure cohesive and successful project delivery
  • Mentor and guide other engineers, fostering a culture of continuous learning and data excellence
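
The "transform raw transactional data into simplified, aggregated billing entities" responsibility above can be illustrated with a toy roll-up. Field names and the in-process loop are illustrative assumptions; the posting's actual pipelines would run over CDC streams in Flink or Spark rather than over a Python list.

```python
from collections import defaultdict

def aggregate_billing(transactions):
    """Roll raw transaction rows up into per-account billing entities.

    Toy sketch of the raw-transactions -> billing-entities step;
    the `account_id` / `type` / `amount` field names are assumptions.
    """
    totals = defaultdict(lambda: {"charges": 0.0, "payments": 0.0})
    for tx in transactions:
        bucket = "payments" if tx["type"] == "payment" else "charges"
        totals[tx["account_id"]][bucket] += tx["amount"]
    # Derive a balance per account, rounded to cents
    return {
        account: {**t, "balance": round(t["charges"] - t["payments"], 2)}
        for account, t in totals.items()
    }

# Usage
rows = [
    {"account_id": "A1", "type": "charge", "amount": 120.0},
    {"account_id": "A1", "type": "payment", "amount": 50.0},
    {"account_id": "B2", "type": "charge", "amount": 80.0},
]
summary = aggregate_billing(rows)
```
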
What we offer:
  • Comprehensive Total Rewards program that offers personalized coverage tailor-made for you and your family’s overall well-being
  • Financial benefits including market-competitive compensation, a 401(k) savings plan vested from day one that offers a 6% match, performance and recognition-based incentives, and tuition assistance
  • Access to additional benefits like mental healthcare as well as fertility and adoption assistance
  • Flexibility - workplace flexibility as well as the GEICO Flex program, which offers the ability to work from anywhere in the US for up to four weeks per year
  • Full-time

Senior Staff Software Engineer: Data & Storage Platform

Uber’s Data Platform is the heart of the company’s critical decision-making and ...
Location:
United States, Seattle; San Francisco; Sunnyvale
Salary:
267000.00 - 297000.00 USD / Year
Uber
Expiration Date:
Until further notice
Requirements:
  • 14+ years of engineering excellence: proven experience designing and operating world-class distributed data and storage systems
  • Mastery of storage internals: extensive storage experience is a must, with deep expertise in the areas below
  • Batch and object storage: HDFS, cloud object storage (S3/GCS/OCI), and Blobstore metadata management
  • Storage optimization: practical experience with Apache Hudi or Apache Iceberg for lakehouse architectures
  • Transactional systems: experience with distributed transactional storage (e.g., Docstore, Google Spanner, TiDB)
  • NoSQL and cache: Cassandra, Redis, and high-throughput key-value stores
  • Data + AI convergence: deep understanding of how compute fabrics (Spark, Flink, Ray) integrate with vector databases and model-serving platforms
  • Query engine proficiency: architect-level knowledge of Presto, Trino, or Hive for large-scale analytical processing
  • Systems programming: expert-level command of Java, Go, Scala, or C++ with a focus on performance tuning and distributed consensus
Job Responsibility:
  • Architect the Multi-Modal Fabric: Unify batch, streaming, and AI compute into one intelligent fabric, enabling real-time insights and trustworthy AI agents at a global scale
  • Revolutionize Storage & Catalog: Drive the architecture for a unified catalog and metadata management service for unstructured data, leveraging native cloud object store capabilities
  • Operationalize AI Intelligence: Partner with teams like QueryCopilot and DataIQ to bridge human validation with autonomous reasoning through agentic workflows
  • Lead Storage Modernization: Evolve our massive-scale persistence layers—including Docstore (Transactional Distributed Storage) and Distributed MySQL—to increase resiliency and reduce operational overhead
  • Open source: act as a force multiplier by contributing to the community (Hudi, Iceberg, Presto)
What we offer:
  • Eligible to participate in Uber's bonus program
  • May be offered an equity award and other types of compensation
  • All full-time employees are eligible to participate in a 401(k) plan
  • Eligible for various benefits
  • Full-time

Senior Software Engineer

We are seeking a Senior Software Engineer to design, build, and evolve core comp...
Location:
India, Pune; Kolkata
Salary:
Not provided
Bentley Systems
Expiration Date:
Until further notice
Requirements:
  • 7+ years of professional experience in software engineering
  • Proficiency in .NET and C#
  • Exposure to distributed or cloud-based systems
  • Strong experience with Azure, microservices, containers, and Kubernetes
  • Hands-on experience building ETL pipelines, workflow-based systems, or event-driven architectures
  • Solid understanding of observability, CI/CD, reliability, and cloud operations
  • Strong problem-solving skills and the ability to deliver production-quality software
  • Experience working with large-scale engineering or infrastructure data
  • Familiarity with schema evolution, metadata-driven pipelines, or data governance
  • Exposure to graph databases or time-series data systems
Job Responsibility:
  • Design cloud-native services to transform engineering data into iModels
  • Build event-driven, containerized microservices for global scale
  • Develop state-driven workflows for long-running tasks and retries
  • Deploy via Azure, Kubernetes, and CI/CD with zero downtime
  • Implement concurrency control, idempotency, and conflict resolution
  • Maintain structured logging, metrics, and alerting
  • Build fault-tolerant pipelines for data validation and mapping
  • Manage schema versioning and ensure transactional consistency
  • Apply circuit breakers, rate limiting, and backoff strategies
  • Lead code reviews and root-cause analysis for production issues
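
The backoff-strategy responsibility above has a standard shape: exponential backoff with full jitter. The sketch below only computes the retry delays (the sleep-and-retry loop around it is omitted); the base, cap, and attempt count are illustrative assumptions.

```python
import random

def backoff_delays(attempts, base=0.5, cap=30.0, rng=random.random):
    """Exponential backoff with full jitter.

    The retry window doubles per attempt up to `cap`, and the actual
    delay is drawn uniformly from [0, window) so that concurrent
    clients do not retry in lockstep after a shared failure.
    """
    delays = []
    for attempt in range(attempts):
        window = min(cap, base * (2 ** attempt))  # capped exponential window
        delays.append(rng() * window)             # full jitter within the window
    return delays

# Usage: a deterministic rng exposes the growing windows themselves
windows = backoff_delays(5, rng=lambda: 1.0)
```

A caller would sleep for each delay between attempts, typically alongside a circuit breaker so that a persistently failing dependency is skipped entirely rather than retried forever.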
What we offer:
  • A great Team and culture
  • An exciting career as an integral part of a world-leading software company
  • An attractive salary and benefits package
  • A commitment to inclusion, belonging and colleague wellbeing through global initiatives and resource groups
  • A company committed to making a real difference by advancing the world’s infrastructure for better quality of life