At Seismic, we're proud of our engineering culture, where technical excellence and innovation drive everything we do. We're a remote-first data engineering team responsible for the critical data pipeline that powers insights for over 2,300 customers worldwide. Our team manages all data ingestion processes, leveraging technologies like Apache Kafka, Spark, various C# microservices, and a shift-left data mesh architecture to transform diverse data streams into the valuable reporting models that our customers rely on daily to make data-driven decisions. Additionally, we're evolving our analytics platform to include AI-powered agentic workflows.
Job Responsibilities:
Collaborating with experienced software engineers, data scientists and product managers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience
Building large scale platform infrastructure and REST APIs serving machine learning driven content recommendations to Seismic products
Leveraging the power of context in third-party applications such as CRMs to drive machine learning algorithms and models
Helping build next-gen Agentic tooling for reporting and insights
Processing large amounts of internal and external system data for analytics, caching, modeling and more
Identifying performance bottlenecks and implementing solutions for them
Participating in code reviews, system design reviews, agile ceremonies, bug triage and on-call rotations
Requirements:
BS or MS in Computer Science, similar technical field of study, or equivalent practical experience
3+ years of software development experience within a SaaS business
Familiarity with .NET Core, C#, and related frameworks
Experience in data engineering: building and managing data pipelines and ETL processes, with familiarity with the technologies that drive them, such as Kafka and, optionally, FiveTran or Spark/Scala
Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.)
Familiarity with RESTful, microservice-based APIs
Experience with modern CI/CD pipelines and infrastructure (Jenkins, GitHub Actions, Terraform, Kubernetes, or equivalent) a big plus
Experience with Scrum and Agile development processes
Familiarity with developing in cloud-based environments
Nice to have:
Experience with third-party integrations
Familiarity with meeting systems like Zoom, WebEx, or MS Teams
Familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, or HubSpot
Working knowledge of an object-oriented language, preferably C#, though we won’t hold your Java expertise against you (you’re the type of person who’s interested in learning and becoming an expert at new things)
Additionally, we’ve been using Python more and more, and you earn bonus points if you’re familiar with Scala