Radix is building the most trusted data and analytics platform in multifamily. Joining now means stepping into a place where curiosity wins, ideas move quickly, and your impact shapes an industry. As a Senior Data Engineer at Radix, you'll design and operate the data pipelines that fuel our AI-powered analytics, reporting engines, and transformation systems. You'll work across ingestion, normalization, cleaning, enrichment, and model-ready output — ensuring reliability, quality, and scalability across our multifamily data ecosystem. You'll take ownership of end-to-end outcomes for production systems, mentor junior engineers, and shape technical direction for the team.
Job Responsibilities:
Build scalable ETL/ELT pipelines for ingesting structured and unstructured data (Excel, JSON, PDFs, APIs)
Design and maintain data pipelines using SQL, Python, Node.js, or TypeScript
Work with distributed compute systems (Spark, Kubernetes), message queues, and streaming data
Manage and optimize data storage in MongoDB, PostgreSQL, Redis, Snowflake, and S3
Develop clean, standardized data schemas and event-driven transformations
Integrate AI-assisted parsers, mappers, and LLM-supported transformations
Collaborate with backend, analytics, product, and AI teams to break down requirements into well-defined engineering problems
Implement monitoring, data validation, and reliability checks (DQ rules, freshness, duplication)
Own production readiness, including on-call responsibilities and incident follow-ups
Mentor engineers, conduct thorough code reviews, and introduce patterns that raise the team's technical bar
Communicate technical designs and their business implications clearly to both technical and non-technical stakeholders
Requirements:
5+ years of experience in data engineering or backend systems, with 2+ years collaborating closely with analytical or scientific practitioners
Experience designing at least one end-to-end data application (visualization, model pipeline, etc.) for production use
Strong understanding of data modeling, batch processing, and streaming systems
Hands-on experience with SQL/NoSQL databases, cloud storage, file-based datasets, and infrastructure-as-code
Experience building or operating data pipelines on AWS cloud services (Lambda, S3, RDS, ECS)
Understanding of AI/LLM integration and prompt engineering fundamentals
Proficiency with Git/GitHub and familiarity with Spark and Kubernetes
Strong problem-solving skills, ownership, and ability to identify root causes of technical debt and recurring problems
Demonstrated ability to translate Product/Science requirements into technical plans and hold teams accountable to delivery timelines
Undergraduate degree in computer science, computer engineering, software engineering, or equivalent
What we offer:
Medical, dental, and vision coverage designed to support your well-being