AI & Data Science Engineering Expert

Vodafone

Location:
Portugal, Lisboa

Contract Type:
Not provided

Salary:
Not provided

Job Description:

At Vantage Towers, we’re on a mission to power Europe’s sustainable digital transformation. As a leading tower company, we’re ushering in an era of technology-driven advances to help connect people, businesses, and internet-enabled devices like never before. We combine the scale, stability and quality of our tower network with the agility, optimism and energy of a start-up. As part of our team, you’ll work in a dynamic and multicultural environment that embraces open communication, collaboration and teamwork. If you’re ready to take responsibility and shape the future of telco infrastructure with us, then let’s level up in your career and reach the top – together.

Job Responsibility:

  • Develop and operationalize machine learning and generative AI models for high-impact business use cases
  • Design and implement data science models, including feature engineering and rapid prototyping
  • Build and automate AI pipelines, integrating them into business processes and tools
  • Manage the full AI lifecycle, including MLOps, monitoring, retraining, and governance alignment
  • Design and deploy GenAI applications such as content summarization, intelligent document processing, and chatbot assistants
  • Collaborate with data engineers and architects to align AI components with enterprise data architecture
  • Translate complex AI models into actionable business insights through compelling storytelling

Requirements:

  • University degree (Bachelor’s or Master’s) in Data Science, Computer Science, Statistics, or a related field
  • Minimum of 3 years of hands-on experience in building and deploying machine learning models
  • At least 2 years working with cloud-based AI services, preferably on Google Cloud Platform (e.g., Vertex AI, BigQuery ML)
  • Strong proficiency in Python and experience with ML/GenAI libraries (scikit-learn, TensorFlow, Hugging Face, OpenAI SDK)
  • Proven experience with LLM-based applications, RAG patterns, and agentic workflows
  • Knowledge of MLOps and GenAIOps workflows, including model versioning and performance tracking
  • Fluency in Business English (C1 level written and spoken)

What we offer:
  • Diverse, multicultural setup based on values – Accountability, Respect, Teamwork, and Trust
  • Attractive salary package
  • Meal Allowance delivered on Pluxee card - €10.20/day
  • Pension Plan
  • Full Health Insurance for employees and co-payment for family members
  • Life Insurance
  • 7 extra vacation days: 4 flexible, plus 3 fixed — 1 on Carnival, 1 on Christmas, and half a day on Easter and New Year's
  • Parking Slot

Additional Information:

Job Posted:
April 23, 2026

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for AI & Data Science Engineering Expert

Data Science Lead, Guest Funnel Science

This tech lead will be at the core of using AI and causal inference to understan...
Location:
United States
Salary:
194,000.00 - 240,000.00 USD / Year
Airbnb
Expiration Date
Until further notice
Requirements
  • Causal inference expertise with marketplace experience
  • Preferred domain experience in search, UX discovery, personalized evidence systems
  • Advanced degree in Computer Science, Statistics, Econometrics or related field
  • 9+ years of industry experience with a PhD (or 12+ years with a Masters)
  • Strong communication with XFN partners in product, engineering, and design to enable data-driven product development with a focus on the user experience
  • Expert in at least one programming language for data analysis (Python or R) with familiarity in SQL
  • Comfort with developing proof-of-concept prototypes
  • Passionate about AI and possessing a learner’s mindset towards LLMs and dynamic systems
  • Proven ability to succeed in both collaborative and independent work environments
  • Demonstrated willingness and track record of engagement with the technical community
Job Responsibility
  • Learn: Develop deep understanding of how guests navigate and re-engage with our app via analysis, research, and by leveraging granular user action and sequence datasets
  • Partner: With product and engineering, drive technical frameworks and science leadership to explore innovative paradigms for detecting revealed preferences and quantifying online frictions
  • Build: Write code for prototypes to detect and quantify taxonomy of guest preferences via iterative development of data frameworks, models and artefacts derived from AI toolkits
  • Evaluate: Assess assumptions and efficacy of derived guest preferences via measurement and validating hypotheses linked to online guest action and engagement. Set up experiments and data feedback loops to maintain a high bar for continuous impact
  • Influence: Regularly present findings and recommendations to leadership audiences to inform strategy and cross-functional deliverables
What we offer
  • Bonus
  • Equity
  • Benefits
  • Employee Travel Credits
  • Full-time

Software Engineer - Data Scientist AI/ML

Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • BS/MS in Computer Science or Data Science, Electrical Engineering, Statistics, Applied Math or equivalent fields with strong mathematical background
  • General understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, Neural network, Graph ML, etc
  • Experience building data science-driven solutions including data collection, feature selection, model training, post-deployment validation
  • Strong hands-on coding skills (preferably in Python) processing large-scale data set and developing machine learning models
  • Familiar with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow
  • Works well in a team setting and is self-driven
Job Responsibility
  • Collaborate with the team to understand features, work with domain experts to identify relevant “signals” during feature engineering, and deliver generic and performant ML solutions
  • Keep up to date with newest technology trends
  • Communicate results and ideas to key decision makers
  • Implement new statistical or other mathematical methodologies as needed for specific models or analysis
  • Optimize joint development efforts through appropriate database use and project design
What we offer
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
  • Full-time

Data Scientist - AI

The Data Scientist builds from the ground up to meet the needs of mission-critic...
Location:
Puerto Rico, Aguadilla
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • Bachelor’s degree in computer science, engineering, information systems, or closely related quantitative discipline. Master’s desirable
  • Typically, 4-7 years’ experience
  • Strong background in statistical and machine learning techniques such as anomaly detection, clustering and ranking of events, time series analysis, event stream mining, hypothesis testing, causal inference, deep learning
  • Great at communicating concepts and results
  • Strong data visualization skills
  • Expert Python coder (PySpark, scikit-learn)
  • Experience with software engineering best practices
  • Relevant industry experience in data science, machine learning
  • Experience with online learning algorithms, reinforcement learning, semi-supervised learning, or mixed time-series/event streams
  • Experience with Agentic AI
Job Responsibility
  • Work with domain experts to identify and formalize machine learning problems for wireless and wired network diagnostics, root causing, problem remediation, and optimization
  • Discover new problem signatures in customer networks
  • Design, implement, and validate machine learning algorithms on big data
  • Guide and oversee deployment of implemented machine learning solutions and monitor their operation
  • Use Agentic AI to solve networking problems
  • Analyze the feature specifications and determine the required coding, testing, and integration activities
  • Design and develop moderate to complex cloud application modules per feature specifications, adhering to security policies
  • Identify, debug, and create solutions for issues with code and integration into the application architecture
  • Develop and execute comprehensive test plans for features, adhering to performance, scale, usability, and security requirements
  • Deploy cloud-based systems and application code using continuous integration/deployment (CI/CD) pipelines to automate the management, scaling, and deployment of cloud applications
What we offer
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
  • Full-time

Junior Data Scientist - AI

Hewlett Packard Enterprise (HPE) is seeking a Junior Data Scientist - AI to desi...
Location:
United States, San Juan
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • Bachelor's degree in computer science, engineering, information systems, or closely related quantitative discipline
  • Typically, 2-4 years’ experience
  • Strong background in statistical and machine learning techniques such as anomaly detection, clustering and ranking of events, time series analysis, event stream mining, hypothesis testing, causal inference, deep learning
  • Great at communicating concepts and results
  • Strong data visualization skills
  • Expert Python coder (PySpark, scikit-learn)
  • Experience with software engineering best practices
  • Relevant industry experience in data science, machine learning
  • Experience with online learning algorithms, reinforcement learning, semi-supervised learning, or mixed time-series/event streams
  • Experience with Agentic AI
Job Responsibility
  • Work with domain experts to identify and formalize machine learning problems for wireless and wired network diagnostics, root causing, problem remediation, and optimization
  • Discover new problem signatures in customer networks
  • Design, implement, and validate machine learning algorithms on big data
  • Guide and oversee deployment of implemented machine learning solutions and monitor their operation
  • Use Agentic AI to solve networking problems
  • Analyze the feature specifications and determine the required coding, testing, and integration activities
  • Design and develop moderate to complex cloud application modules per feature specifications, adhering to security policies
  • Identify, debug, and create solutions for issues with code and integration into the application architecture
  • Develop and execute comprehensive test plans for features, adhering to performance, scale, usability, and security requirements
  • Deploy cloud-based systems and application code using continuous integration/deployment (CI/CD) pipelines to automate the management, scaling, and deployment of cloud applications
What we offer
  • Comprehensive suite of benefits supporting physical, financial and emotional wellbeing
  • Specific programs catered to helping reach career goals
  • Unconditional Inclusion
  • Full-time

Senior Data Science Solution Architect

The consultant will need to have extensive experience working across Microsoft e...
Location:
United States, Irving
Salary:
Not provided
Hire IT People, Inc
Expiration Date
Until further notice
Requirements
  • Extensive experience working across Microsoft environments utilizing Azure Cloud Solutions
  • Broad skill set encompassing SQL, Python, Dataiku, Alation, Databricks, Snowflake, Power BI, Tableau, AWS and Azure Clouds
  • Strong experience creating design documentation, and knowledge-based artifacts
  • Prior experience working with Accounting and Finance Leaders, as well as Sales and Operations Leaders – with an understanding of financial data and operational data (job coding, vendor data, costs, etc)
  • Expertise as a Microsoft Certified Trainer, Data science professional and Amazon Subject matter expert
  • Prior expertise from the Center of Excellence for Data, AI and ML
  • Minimum qualification: a bachelor’s degree (or a foreign equivalent) in engineering or a closely related field, with relevant experience
Job Responsibility
  • Work in collaboration with the business operations leaders to gain insight to how the company utilizes their data, uncover additional needs, and ultimately build an enhanced and scalable environment to meet business initiatives
  • Data Analysis and Data engineering: Utilizing tools and technologies like SQL and Python for data analysis and engineering tasks
  • Implementing data engineering activities using platforms like Databricks and Snowflake
  • Data Science Methodologies: Applying data science methodologies, using Python and tools like Dataiku for data science workflows
  • Apply Machine Learning strategies and models for a better Model deployment and pipelines
  • Data Governance: Serving as subject matter expert in data governance, leveraging Alation to manage and govern data effectively through data cataloging and stewardship
  • Team Leadership: Acting as a team or technical lead, responsible for overseeing and delivering the team’s work in data engineering and analysis
  • Dashboard Creation: Creating dashboards using visualization tools such as Power BI, Tableau, and the Microsoft suite
  • Cloud Services: Leveraging Amazon Web Services (S3, IAM, Redshift, CloudWatch, Glue, SageMaker, etc.) and Azure (Microsoft Fabric, Azure Data Factory, SQL Server, Azure SQL, SSAS, Synapse, Power BI) cloud services to implement and deliver business activities, along with compliance, security, disaster recovery, testing, and DevOps
  • Assist the team or firm with the expertise as a Microsoft Certified Trainer, Data science professional and Amazon Subject matter expert

Senior Data Engineer

Adtalem is a data driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
84,835.61 - 149,076.17 USD / Year
Adtalem Global Education
Expiration Date
Until further notice
Requirements
  • Bachelor's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field.
  • Master's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field.
  • Two (2) plus years of experience in Google Cloud with services like BigQuery, Composer, GCS, DataStream, Dataflow, BQML, Vertex AI.
  • Six (6) plus years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics.
  • Hands-on experience working with real-time, unstructured, and synthetic data.
  • Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar.
  • Expert knowledge of Python programming and SQL.
  • Experience with cloud platforms (AWS, GCP, Azure) and their data services
  • Experience working with Airflow as a workflow management tool, building operators to connect, extract, and ingest data as needed.
  • Familiarity with synthetic data generation and unstructured data processing
Job Responsibility
  • Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets
  • Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth.
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound.
  • Model the data platform by applying business logic and building objects in its semantic layer.
  • Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics
  • Optimize data pipelines for performance, scalability, and reliability.
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
  • Ensure the quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root.
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
What we offer
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Eligible to participate in an annual incentive program
  • Full-time

Presales AI & Data

As a Data & AI Presales Expert, you will play a critical role in driving busines...
Location:
Poland, Warsaw
Salary:
Not provided
Inetum
Expiration Date
Until further notice
Requirements
  • Degree in Computer Science, Data Science, Engineering, or related field
  • 3 years of presales/technical consulting experience in AI/Data
  • Ability to lead workshops, Proof of Concepts (POCs), and technical demonstrations
  • Strong understanding of AI/ML concepts and data technologies
  • Experience with cloud AI/data services (AWS, Azure, GCP)
  • Fluency in Polish and professional English
Job Responsibility
  • Understand customer needs and translate them into AI/data solutions
  • Develop and present technical proposals and demonstrations
  • Design solution architectures and collaborate with internal teams
  • Act as a technical expert and advocate for our AI/data offerings
  • Support sales efforts, including RFPs/RFIs
  • Manage technical aspects of PoCs and pilot projects
  • Stay updated on AI/data trends and competitor landscape
What we offer
  • Flexible working hours
  • Hybrid work model
  • Cafeteria system
  • Generous referral bonuses
  • Ongoing guidance from a dedicated Team Manager
  • Tailored technical mentoring
  • Dedicated team-building budget
  • Opportunities to participate in charitable initiatives and local sports programs
  • Supportive and inclusive work culture
  • Full-time

Senior Data Engineer

The Data Engineer is responsible for designing, building, and maintaining robust...
Location:
Germany, Berlin
Salary:
Not provided
ib vogt GmbH
Expiration Date
Until further notice
Requirements
  • Degree in Computer Science, Data Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Experience in renewable energy, engineering, or asset-heavy industries is a plus
  • Strong experience with modern data stack (e.g., PowerPlatform, Azure Data Factory, Databricks, Airflow, dbt, Synapse, Snowflake, BigQuery, etc.)
  • Proficiency in Python and SQL for data transformation and automation
  • Experience with APIs, message queues (Kafka, Event Hub), data streaming and knowledge of data lakehouse and data warehouse architectures
  • Familiarity with CI/CD pipelines, DevOps practices, and containerization (Docker, Kubernetes)
  • Understanding of cloud environments (preferably Microsoft Azure, PowerPlatform)
  • Strong analytical mindset and problem-solving attitude paired with a structured, detail-oriented, and documentation-driven work style
  • Team-oriented approach and excellent communication skills in English
Job Responsibility
  • Design, implement, and maintain efficient ETL/ELT data pipelines connecting internal systems (M365, Sharepoint, ERP, CRM, SCADA, O&M, etc.) and external data sources
  • Integrate structured and unstructured data from multiple sources into the central data lake / warehouse / Dataverse
  • Build data models and transformation workflows to support analytics, reporting, and AI/ML use cases
  • Implement data quality checks, validation rules, and metadata management according to the company’s data governance framework
  • Automate workflows, optimize performance, and ensure scalability of data pipelines and processing infrastructure
  • Work closely with Data Scientists, Software Engineers, and Domain Experts to deliver reliable datasets for Digital Twin and AI applications
  • Maintain clear documentation of data flows, schemas, and operational processes
What we offer
  • Competitive remuneration and motivating benefits
  • Opportunity to shape the data foundation of ib vogt’s digital transformation journey
  • Work on cutting-edge data platforms supporting real-world renewable energy assets
  • A truly international working environment with colleagues from all over the world
  • An open-minded, collaborative, dynamic, and highly motivated team
  • Full-time