We are seeking a highly motivated and enthusiastic Intermediate Software Developer to join our growing engineering team. This role is ideal for a developer with 3-5 years of experience who is eager to learn and grow in a fast-paced environment. You will work on projects involving large-scale data processing, analytics, and software development, leveraging technologies such as Java, Apache Spark, Python, and Apache Iceberg. This position offers a unique opportunity to gain hands-on experience with modern data lake technologies and contribute to critical data infrastructure.
Job Responsibilities:
Collaborate with senior developers and data engineers to design, develop, test, and deploy scalable data processing pipelines and applications
Write clean, efficient, and well-documented code in Java and Python for various data ingestion, transformation, and analysis tasks
Utilize Apache Spark for distributed data processing, focusing on performance optimization and resource management
Work with Apache Iceberg tables for managing large, evolving datasets in our data lake, ensuring data consistency and reliability (a brief illustrative sketch of this kind of Spark and Iceberg work follows this list)
Assist in troubleshooting, debugging, and resolving issues in existing data pipelines and applications
Participate in code reviews, contributing to a high standard of code quality and best practices
Learn and adapt to new technologies and methodologies as the project requirements evolve
Contribute to the documentation of technical designs, processes, and operational procedures
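To give a concrete sense of the Spark and Iceberg work described above, here is a minimal, hypothetical sketch of an ingestion step: reading raw data, applying a simple transformation, and appending to an Iceberg table. The catalog name, warehouse path, input path, table name, and column names are illustrative assumptions, not details from this posting, and running it requires the Iceberg Spark runtime on the classpath.

```python
from pyspark.sql import SparkSession

# Assumes an Iceberg catalog named "lake" backed by a Hadoop warehouse;
# all names and paths below are placeholders for illustration only.
spark = (
    SparkSession.builder
    .appName("events-ingest")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "/tmp/warehouse")
    .getOrCreate()
)

# Read raw events, drop duplicates, filter invalid rows (hypothetical schema).
raw = spark.read.json("/data/raw/events/")
cleaned = raw.dropDuplicates(["event_id"]).where("event_ts IS NOT NULL")

# Append the cleaned rows to an Iceberg table in the configured catalog.
cleaned.writeTo("lake.analytics.events").append()
```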
Requirements:
2-5 years of relevant experience
Bachelor’s degree in Computer Science, Software Engineering, Data Science, or a related technical field
Strong foundational knowledge of object-oriented programming principles
Proficiency in at least one of the following programming languages: Java or Python
Basic understanding of data structures, algorithms, and software development lifecycles
Familiarity with version control systems (e.g., Git)
Eagerness to learn and a strong passion for software development and data technologies
Excellent problem-solving skills and attention to detail
Good communication and teamwork abilities
Nice to have:
Familiarity with distributed computing concepts
Basic understanding of Apache Spark or experience with data processing frameworks
Exposure to cloud platforms (AWS, Azure, GCP)
Knowledge of SQL and database concepts
Any experience or coursework related to data lakes, data warehousing, or Apache Iceberg