Intellectsoft is a software development company delivering innovative solutions since 2007. We operate across North America, Latin America, the Nordic region, the UK, and Europe. We specialize in industries like Fintech, Healthcare, EdTech, Construction, Hospitality, and more, partnering with startups, mid-sized businesses, and Fortune 500 companies to drive innovation and scalability. Our clients include Jaguar Motors, Universal Pictures, Harley-Davidson, and many more where our teams make a daily impact. Together, we deliver solutions that make a difference. Learn more at www.intellectsoft.net.
Our customer's product is an AI-powered platform that helps businesses make better decisions and work more efficiently. It uses advanced analytics and machine learning to analyze large amounts of data and provide useful insights and predictions. The platform is widely used across industries, including healthcare, to optimize processes, improve customer experiences, and support innovation. It integrates easily with existing systems, enabling teams to make quick, data-driven decisions and deliver cutting-edge solutions.
Job Responsibilities:
Design and build highly reliable and scalable data pipelines using PySpark and big data technologies
Collaborate with the data science team to develop new features that enhance model accuracy and performance
Create standardized data models to improve consistency across various deployments
Troubleshoot and resolve issues in existing ETL pipelines and optimize workflows
Conduct POCs to evaluate new technologies and integrate additional data sources
Follow and promote best practices for software development, ensuring high-quality solutions that meet requirements and deadlines
Document development updates and maintain clear technical documentation
Requirements:
4+ years of professional experience, including 2+ years of data engineering with Apache Spark and SQL
Proficiency in Python for data processing and automation
Knowledge of PySpark, distributed computing, analytical databases, and other big data technologies
Expertise in designing and managing ETL pipelines and distributed data processing frameworks
Strong knowledge of database systems, data modeling, and analytical databases
Hands-on experience with workflow orchestration tools such as Apache Airflow
Familiarity with cloud platforms like AWS, GCP, or Azure
Solid understanding of software development lifecycles, including coding standards, version control, and testing
Nice to have:
Bachelor's or Master's degree in Computer Science or a related field
Familiarity with the data science and machine learning development process
Understanding of Machine Learning pipelines or frameworks
What we offer:
Awesome projects with an impact
Udemy courses of your choice
Team-buildings, events, marathons & charity activities to connect and recharge
Workshops, training sessions, and expert knowledge-sharing that keep you growing
Clear career path
Absence days for work-life balance
Flexible hours & work setup - work from anywhere and organize your day your way