Sr. Big Data Cloud Engineer

Vodafone

Location:
Romania, Bucharest

Contract Type:
Not provided

Salary:
Not provided

Job Description:

At Vodafone, we’re not just shaping the future of connectivity for our customers – we’re shaping the future for everyone who joins our team. When you work with us, you’re part of a global mission to connect people, solve complex challenges, and create a sustainable and more inclusive world. If you want to grow your career whilst finding the perfect balance between work and life, Vodafone offers the opportunities to help you belong and make a real impact.

Job Responsibility:

  • Drive the definition and delivery of IT solutions by gathering requirements, assisting clients in defining functional specifications, and translating them into technical solutions
  • Lead the evaluation and implementation of complex projects, ensuring technical solutions align with business needs and architectural standards
  • Validate objectives, detailed technical solutions, and effort estimates provided by development teams
  • Provide expert recommendations and solutions to project managers and management teams for system implementation and project execution
  • Ensure overall solutions are aligned with long-term architectural plans and resolve any discrepancies
  • Actively participate in solution design, software development/configuration, and provide support to other developers and testing teams
  • Maintain comprehensive project documentation and deliver technical presentations to internal teams and clients
  • Support and transfer knowledge throughout the system lifecycle, troubleshoot technical issues, and ensure smooth handover to support and maintenance teams
  • Monitor system performance, propose improvements, and contribute to the development strategy for managed systems
  • Proactively identify outdated technologies or capacity needs and recommend improvements in line with Vodafone Global standards
  • Adhere to internal policies, compliance, and participate in mandatory training and professional development programs

Requirements:

  • University degree (or ongoing studies) in IT or a technical field
  • 3–5 years of experience working in complex organizations and delivering large-scale or transformational projects
  • Advanced knowledge of SQL and experience with at least one database management system (Oracle, MySQL, SQL Server, PostgreSQL, etc.)
  • Strong skills in developing database structures and models using SAS Base, PL/SQL, Python, and BigQuery (on-premises and cloud)
  • Solid foundation in software development processes, best practices, and design patterns
  • Analytical, planning, and technical project coordination skills
  • Fluent in English (written and spoken)
  • Enthusiastic, creative, entrepreneurial, and eager to innovate and improve
  • Able to work under pressure, adapt to change, and make quick decisions with a problem-solving mindset
  • Strong relationship-building skills at all organizational levels, with a collaborative and trustworthy approach
  • Committed to continuous knowledge sharing and professional development

Nice to have:

  • Technical certifications (e.g., TMForum, TOGAF, SAS Base, SQL, Informatica, Dynamo, Python, BigQuery, Data Fusion, CDAP) are a plus
  • Familiarity with web standards and J2EE architectures is an advantage

What we offer:

  • Hybrid working regime: 2 days from the office, 3 days remote
  • Special discounts for Vodafone employees, Friends & Family offers
  • Demo telephone subscription - unlimited (voice and data)
  • Voucher for the purchase of a mobile phone
  • Medical subscription to a top private clinic & other medical benefits
  • Insurance for hospitalization and surgical interventions
  • Life insurance
  • Meal tickets
  • Bookster subscription
  • Participation in development programs and challenging projects in the leadership area
  • Access to internal Wellbeing & Recognition events
  • Extra vacation days (for seniority, special events, volunteering)
  • Specialization in your field of activity through programs based on modern training methods and systems

Additional Information:

Job Posted:
January 22, 2026

Work Type:
Hybrid work

Similar Jobs for Sr. Big Data Cloud Engineer

Sr Data Engineer

(Locals or nearby resources only). You will work with technologies that include ...
Location:
United States, Glendale

Salary:
Not provided
Enormous Enterprise

Expiration Date:
Until further notice
Requirements:
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs. OLAP environments
  • Candidates must work on W2; no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, TN-Visa
  • Airflow
Job Responsibility:
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer:
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Sr. Data Engineer

We are looking for a skilled Sr. Data Engineer to join our team in Oklahoma City...
Location:
United States, Oklahoma City

Salary:
Not provided
Robert Half

Expiration Date:
Until further notice
Requirements:
  • Proven experience with Snowflake data warehousing and schema design
  • Proficiency in ETL tools such as Matillion or similar platforms
  • Strong knowledge of Python and PowerShell for data automation
  • Experience working with Microsoft SQL Server and related technologies
  • Familiarity with cloud technologies, particularly AWS
  • Understanding of data visualization and analytics tools
  • Background in working with big data technologies such as Apache Kafka, Hadoop, Spark, or Pig
  • Ability to design and implement APIs for data integration and management
Job Responsibility:
  • Design, implement, and maintain Snowflake data warehousing solutions to support business needs
  • Assist in the migration of in-house data to Snowflake, ensuring a seamless transition
  • Develop data pipelines and workflows using tools such as Matillion or equivalent ETL solutions
  • Collaborate with teams to optimize and manage the existing data warehouse built on Microsoft SQL Server
  • Utilize Python and PowerShell to automate data processes and enhance system efficiency
  • Partner with the implementation team to shadow and learn best practices for Snowflake deployment
  • Ensure data integrity, scalability, and security across all data engineering processes
  • Provide insights into data visualization and analytics to support decision-making
  • Work with cloud technologies, including AWS, to enhance data storage and accessibility
  • Implement and manage APIs to enable seamless data integration and sharing
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in a 401(k) plan
  • Access to competitive compensation and free online training
  • Fulltime

Sr Data Engineer

The Sr Data Engineer is essential for designing and developing data architecture...
Location:
United States, Overland Park; Atlanta; Frisco

Salary:
105100.00 - 189600.00 USD / Year
T-Mobile

Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree plus 5 years of related work experience OR Advanced degree with 3 years of related experience
  • Acceptable areas of study include Computer Engineering, Computer Science, a related subject area
  • 4-7+ years developing cloud solutions using data series
  • Experience with cloud platforms (Azure Data Factory, Azure Databricks, and Snowflake)
  • 4-7+ years of hands-on development using and migrating data to cloud platforms
  • 4-7+ years of experience in SQL, NoSQL, and/or relational database design and development
  • 4-7+ years of advanced knowledge and experience in building complex data pipelines, with experience in languages such as Python, SQL, Scala, and Spark
  • Analytical approach to problem-solving
  • Ability to use technology to tackle business problems
  • Knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
Job Responsibility:
  • Develop data engineering solutions that enable data pipelines, data transformation, data privacy and analytical tools within the T-Mobile Customer Data Platform (CDP)
  • Collaborate on analysis, architecture, design, and development of data products within the T-Mobile Customer Data Platform (CDP)
  • Design and develop data architectures across on-premise, cloud, and hybrid platforms to ensure scalable data infrastructure
  • Perform data wrangling, exploration, and discovery of heterogeneous data to generate new business insights
  • Tackle the most complex and innovative tasks of the organization, usually under strict time constraints
  • Support white-boarding sessions, workshops, design sessions, and project meetings as needed
  • Contribute to team knowledge sharing and drive the advancement of new data engineering capabilities
  • Mentor team members to build and enhance their data engineering skillsets and professional growth
  • Assist management in project definition, including estimating, planning, and scoping work to meet objectives
  • Perform other duties/projects as assigned by business management as needed
What we offer:
  • Competitive base salary and compensation package
  • Annual stock grant
  • Employee stock purchase plan
  • 401(k)
  • Access to free, year-round money coaches
  • Medical, dental and vision insurance
  • Flexible spending account
  • Employee stock grants
  • Paid time off
  • Fulltime

Sr Data Engineer

As a Data Engineer, you will be responsible for designing, building, maintaining...
Location:
India, Hyderabad

Salary:
Not provided
Amgen

Expiration Date:
Until further notice
Requirements:
  • Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years of Computer Science, IT or related field experience
  • Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
  • Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
  • Proficient in SQL and Python for extracting, transforming, and analyzing complex datasets from relational data stores
  • Proficient in Python with strong experience in ETL tools such as Apache Spark and various data processing packages, supporting scalable data workflows and machine learning pipeline development
  • Strong understanding of data modeling, data warehousing, and data integration concepts
  • Proven ability to optimize query performance on big data platforms
  • Knowledge of data visualization and analytics tools like Spotfire and Power BI
Job Responsibility:
  • Design, develop, and maintain data solutions for data generation, collection, and processing
  • Be a key team member that assists in design and development of the data pipeline
  • Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
  • Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
  • Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
  • Implement data security and privacy measures to protect sensitive data
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
  • Collaborate and communicate effectively with product teams
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast paced business needs across geographic regions
  • Identify and resolve complex data-related challenges
What we offer:
  • Competitive and comprehensive Total Rewards Plans that are aligned with local industry standards

Director of Engineering

We are seeking a highly experienced and strategic Director of Engineering to tak...
Location:
India, Chennai

Salary:
Not provided
Arrcus

Expiration Date:
Until further notice
Requirements:
  • BS/MS/PhD in Computer Engineering/Computer Science or equivalent degree
  • Excellent communication, presentation, and interpersonal skills
  • 5+ years of experience leading and managing distributed engineering teams involved in creating complex software products
  • 10+ years of relevant experience in managing very senior technical talent in some of the following areas: Networking protocols such as OSPF, BGP, ISIS, MPLS, BFD, MLAG, EVPN, VxLAN, SR-MPLS, SRv6, L3VPN
  • Experience with test harnesses like Robot Framework and Jinja2
  • Familiarity with network merchant silicon chipsets and whitebox platforms
  • Software development of Network Data Path (Linux, virtual and ASIC)
  • Virtualization technologies like SR-IOV, Intel DPDK, FD.io, NSX, OVS
  • High Availability, ISSU, Linux networking
  • Debian Build/Packaging, Linux Kernel, Kernel Networking Stack
Job Responsibility:
  • Work with customer and product teams to understand and prioritise new requirements
  • Develop a holistic understanding of individual employee skill sets and drive resource allocation for customer requirements
  • Help drive the recruiting process both in terms of attracting new talent as well as defining and streamlining the recruitment process
  • Continuous Process Improvement: Solicit feedback, drive discussion and implement process and workflow improvements
  • Provide technical guidance and mitigation for engineering projects
  • Build 1:1 rapport with engineers, help identify and fulfil personal aspirations by aligning with larger team goals
  • Strong ability to plan, execute and deliver multiple projects across worldwide sites
  • Experience with rapidly growing engineering organizations in all aspects of people, resources, tools, and more
What we offer:
  • Generous compensation packages including equity
  • Medical Insurance
  • Parental Leave
  • Fulltime

Sr Big Data Engineer - Oozie and Pig (GCP)

We are seeking a Senior Big Data Engineer with deep expertise in distributed sys...
Location:
United States

Salary:
116100.00 - 198440.00 USD / Year
Rackspace

Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, software engineering or related field of study
  • Experience with managed cloud services and understanding of cloud-based batch processing systems
  • Proficiency in Oozie, Airflow, MapReduce, and Java
  • Strong programming skills with Java (specifically Spark), Python, Pig, and SQL
  • Expertise in public cloud services, particularly in GCP
  • Proficiency in the Apache Hadoop ecosystem with Oozie, Pig, Hive, and MapReduce
  • Familiarity with BigTable and Redis
  • Experience applying infrastructure and DevOps principles in daily work
  • Experience with tools for continuous integration and continuous deployment (CI/CD) and Infrastructure as Code (IaC), such as Terraform
  • Proven experience in engineering batch processing systems at scale
Job Responsibility:
  • Design and develop scalable batch processing systems using technologies like Hadoop, Oozie, Pig, Hive, MapReduce, and HBase, with hands-on coding in Java or Python (Java is a must)
  • Must be able to lead Jira Epics
  • Write clean, efficient, and production-ready code with a strong focus on data structures and algorithmic problem-solving applied to real-world data engineering tasks
  • Develop, manage, and optimize complex data workflows within the Apache Hadoop ecosystem, with a strong focus on Oozie orchestration and job scheduling
  • Leverage Google Cloud Platform (GCP) tools such as Dataproc, GCS, and Composer to build scalable and cloud-native big data solutions
  • Implement DevOps and automation best practices, including CI/CD pipelines, infrastructure as code (IaC), and performance tuning across distributed systems
  • Collaborate with cross-functional teams to ensure data pipeline reliability, code quality, and operational excellence in a remote-first environment
  • Fulltime

Sr. Data Engineer – Clinical Data Foundation

The Sr. Data Engineer is responsible for designing, building, maintaining, analy...
Location:
India, Hyderabad

Salary:
Not provided
Amgen

Expiration Date:
Until further notice
Requirements:
  • Master’s/Bachelor’s degree with 9-12 years of experience in Computer Science, IT, or a related field
  • Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing
  • Hands-on experience with various Python/R packages for data analysis, feature engineering, and machine learning model training
  • Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
  • Excellent problem-solving skills and the ability to work with large, complex datasets
  • Strong understanding of data governance frameworks, tools, and best practices
  • Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
Job Responsibility:
  • Design, develop, and maintain data solutions for data generation, collection, and processing
  • Be a key team member that assists in design and development of the data pipeline
  • Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
  • Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
  • Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
  • Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
  • Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
  • Implement data security and privacy measures to protect sensitive data
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast paced business needs across geographic regions

Sr Data Scientists

Sr Data Scientists is located in Frisco, TX and will support teams’ mission to p...
Location:
United States, Frisco

Salary:
141773.00 - 155000.00 USD / Year
T-Mobile

Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Mathematics, Statistics, Economics, Computer Science, Physics, Electronic Engineering, or related, and 5 years of relevant work experience
  • Or a Master’s degree in Mathematics, Statistics, Economics, Computer Science, Physics, Electronic Engineering, or related, and 3 years of relevant work experience
  • Experience in developing and deploying predictive models, advanced machine learning, deep learning, NLP, and generative AI solutions by applying a wide range of algorithms
  • Experience in developing solutions using Python, PySpark, SQL, and R, with libraries LangChain, LangGraph, Keras, Pandas, NumPy, SciPy, Matplotlib, and Scikit-Learn
  • Experience in working with data querying, wrangling, cleaning, and feature engineering across relational and non-relational databases: SQL, Snowflake, and Redshift in big data environments: Azure, AWS, and GCP, and leveraging Spark, Hadoop, Hive, and Kafka
  • Experience in building CI/CD pipelines, automating training and retraining workflows, deploying inference services, and monitoring ML algorithms in production environments in Databricks using tools: MLflow, and cloud-native services
  • Experience in articulating and reframing business problems, applying statistical and advanced analytics techniques in Python, R, and SQL, and leveraging SciPy, Scikit-Learn, and PySpark to generate actionable insights and recommendations
  • Experience in delivering impactful, data-driven presentations and effectively communicating machine learning and analytical concepts to technical teams, business stakeholders, and senior leadership, supported by visualizations created in Tableau, Power BI, Matplotlib, and Seaborn
  • At least 18 years of age
  • Legally authorized to work in the United States
Job Responsibility:
  • Support business partners and product owners to understand business challenges, develop business cases, capture requirements, and co-create solutions that drive business change, solve the challenges, and deliver impactful business outcomes
  • Provide senior-level guidance and mentorship to the data science team, including reviewing projects, models, and code for peers and junior team members
  • Design advanced analytics to solve business problems
  • Preprocess and perform exploratory data analysis on structured and unstructured data
  • Create features based on expertise in the domain
  • Use predictive modeling techniques and statistical analysis to predict outcomes and behaviors
  • Leverage the Agile methodology to ensure alignment of data science roadmap, features, and stories to business priorities and value streams
  • Collaborate with a cross-functional team of other data scientists, data engineers, ML engineers, and data analysts
  • Partner with other technology partners such as architects, engineers, product managers, scrum masters, release train engineers, and agile coaches to deliver on targeted business outcomes
What we offer:
  • Competitive base salary and compensation package
  • Annual stock grant
  • Employee stock purchase plan
  • 401(k)
  • Access to free, year-round money coaches
  • Annual bonus or periodic sales incentive or bonus
  • Medical, dental and vision insurance
  • Flexible spending account
  • Paid time off
  • Up to 12 paid holidays
  • Fulltime