Spark/Scala Engineer Jobs

1 Job Offer

Java Spark/Scala Engineer
Join Citi's FX Data Analytics & AI team in Pune as a Java Spark/Scala Engineer. You will design and develop high-performance, data-driven applications using Java, React, and modern web technologies. This role involves close collaboration with trading desks, full-stack development, and mentoring j...
Location
India, Pune
Salary
Not provided
Company
Citi (https://www.citi.com/)
Expiration Date
Until further notice
Spark/Scala Engineer jobs represent a specialized and in-demand career path at the intersection of big data engineering and high-performance software development. Professionals in this role are primarily responsible for designing, building, and maintaining large-scale data processing systems using Apache Spark, with Scala as the core programming language. They are the architects of data pipelines that transform vast amounts of raw, unstructured data into clean, reliable, and actionable datasets that drive business intelligence, machine learning models, and analytical applications. The profession is fundamental to modern data-driven organizations, enabling real-time analytics, complex event processing, and the management of petabytes of information.

A typical day for a Spark/Scala Engineer involves a blend of development, optimization, and collaboration. Common responsibilities include writing efficient, scalable, and fault-tolerant Spark applications using Scala's functional programming paradigms. They design and implement ETL (Extract, Transform, Load) or ELT processes, often working with distributed data storage systems like Hadoop HDFS, cloud data lakes (AWS S3, Azure Data Lake), and data warehouses. Performance tuning is a critical aspect of the role, requiring engineers to deeply understand Spark's internals, such as partitioning, shuffling, and caching, to optimize job execution and reduce cluster resource costs. They also ensure data quality, implement data governance practices, and build robust monitoring and alerting for data pipelines. Collaboration with data scientists, analysts, and other engineering teams to understand requirements and deliver reliable data products is a constant.

The typical skill set for these jobs is both deep and broad. Proficiency in Scala is paramount, including a strong grasp of its object-oriented and functional programming features, concurrency models, and type system. Expert-level knowledge of Apache Spark, including its Core, SQL, Streaming (Structured Streaming), and MLlib modules, is the defining technical requirement. Familiarity with the broader big data ecosystem (e.g., Kafka, Hive, HBase) and cloud platforms (AWS EMR, Databricks, Google Cloud Dataproc) is highly valuable. Soft skills are equally important; successful engineers possess strong problem-solving abilities to debug complex distributed systems, clear communication to explain technical concepts, and a collaborative mindset to work within agile teams.

Common requirements for Spark/Scala Engineer jobs often include a degree in computer science or a related field, coupled with proven hands-on experience in building and deploying large-scale data processing solutions. For those passionate about big data challenges, these roles offer a rewarding opportunity to build the foundational infrastructure that powers insight and innovation.
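
To make the day-to-day work described above concrete, here is a minimal sketch of the kind of batch ETL job such engineers write, in Scala against Spark's DataFrame API. It is illustrative only: the object name (TradeEtlJob), the s3a://example-bucket/ paths, and the column names are hypothetical placeholders, not taken from any actual posting.

// A minimal batch ETL sketch: extract raw JSON, transform, cache, and
// write partitioned Parquet. All paths and column names are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object TradeEtlJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-etl")
      .getOrCreate()

    // Extract: read raw JSON events from a (hypothetical) data lake path.
    val raw: DataFrame = spark.read.json("s3a://example-bucket/raw/trades/")

    // Transform: clean and enrich using the functional column API.
    val cleaned = raw
      .filter(col("trade_id").isNotNull)
      .withColumn("notional", col("price") * col("quantity"))
      .withColumn("trade_date", to_date(col("timestamp")))

    // Cache because two actions below reuse this intermediate; without
    // caching, Spark would recompute the full lineage for each action.
    cleaned.cache()

    // Aggregate: daily notional per currency pair.
    val daily = cleaned
      .groupBy(col("trade_date"), col("ccy_pair"))
      .agg(sum("notional").as("total_notional"))

    // Load: repartition by date to align the shuffle with the output
    // layout, then write date-partitioned Parquet to the curated zone.
    daily
      .repartition(col("trade_date"))
      .write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3a://example-bucket/curated/daily_notional/")

    // Second action reusing the cached intermediate.
    println(s"Cleaned trade count: ${cleaned.count()}")

    spark.stop()
  }
}

And because the skill set above explicitly calls out Structured Streaming, here is a comparably minimal streaming sketch. The Kafka broker address, topic name, and checkpoint location are equally hypothetical, and it assumes the spark-sql-kafka connector is on the classpath.

// A minimal Structured Streaming sketch: Kafka source, Parquet sink,
// with exactly-once bookkeeping via the checkpoint directory.
import org.apache.spark.sql.SparkSession

object TradeStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-stream")
      .getOrCreate()

    // Source: a Kafka topic of trade events (names are illustrative).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "trades")
      .load()
      .selectExpr("CAST(value AS STRING) AS json")

    // Sink: append raw payloads to the data lake; the checkpoint
    // directory lets the query resume without duplicating output.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/raw/trades_stream/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/trades_stream/")
      .start()

    query.awaitTermination()
  }
}

The two sketches mirror the tuning concerns named above: the cache() call avoids recomputing lineage across multiple actions, and repartitioning by the output partition column keeps the shuffle aligned with the file layout, which is the kind of partitioning and caching judgment the role demands.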
