Data Engineer Intern

Ericsson

Location:
Hungary, Budapest

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Here at Data & AI Foundation, we collect, ingest, and archive vast amounts of data from Ericsson's customers all over the world. Our unit provides an analytics platform with Data & AI frameworks and shared analytics functions for broad use within Ericsson, and supports the R&D digital transformation with diverse services connected to Data & AI. In this position you will belong to an agile, cross-functional team working on Ericsson Network Benchmarking, which uses anonymized customer data to calculate, visualize, and benchmark the current network performance of operators all over the world, driving sales of Ericsson's products.

Job Responsibility:

  • Write, test, and maintain high-quality code in Python
  • Assist in building data pipelines and creating new benchmarking graphs
  • Cooperate with data scientists and data engineers to understand their current needs
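To make the first two bullets concrete: a benchmarking step of the kind described might aggregate per-operator network metrics with Pandas. This is only an illustrative sketch; the column names, metric, and ranking logic are hypothetical, not Ericsson's actual pipeline.

```python
import pandas as pd

def benchmark_throughput(df: pd.DataFrame) -> pd.DataFrame:
    """Rank operators by average downlink throughput (hypothetical metric)."""
    summary = (
        df.groupby("operator", as_index=False)["throughput_mbps"]
        .mean()
        .rename(columns={"throughput_mbps": "avg_throughput_mbps"})
    )
    # Rank 1 = best average throughput across all operators.
    summary["rank"] = summary["avg_throughput_mbps"].rank(ascending=False).astype(int)
    return summary.sort_values("rank").reset_index(drop=True)

# Anonymized sample measurements (invented values).
samples = pd.DataFrame({
    "operator": ["A", "B", "A", "B", "C"],
    "throughput_mbps": [120.0, 95.0, 130.0, 105.0, 80.0],
})
print(benchmark_throughput(samples))
```

A real pipeline would read Spark tables rather than an in-memory frame, but the groupby-aggregate-rank shape is the same.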

Requirements:

  • Currently enrolled as a BSc or MSc student in Computer Science, Electrical Engineering, or an equivalent degree program
  • Experience in Python and scripting
  • Basic knowledge of Spark and Pandas
  • Familiarity with data warehousing, data pipelines/flows, microservices/cloud, Git, Linux, CI, and unit testing
  • An open and innovative mindset that adapts to changes easily
  • English proficiency, both written and spoken

Nice to have:

  • Knowledge of frameworks and applications: Spark, Hadoop, PowerBI, Kubernetes/Docker, AWS S3
  • Previous experience with data monitoring systems

Additional Information:

Job Posted:
March 18, 2026

Similar Jobs for Data Engineer Intern

Data Engineer, 2025/2026 Intern

Join Atlassian as an intern and spend your summer with us having an impact on ho...
Location: Australia, Sydney
Salary: Not provided
Atlassian
Expiration Date: Until further notice
Requirements:
  • Be currently enrolled in a Bachelor's or Master's program in Software Engineering, Computer Science, or another related technical field, completing your studies before January 2027
  • Experience programming with Python, or other related object-oriented programming languages
  • Knowledge of data structures, in particular how they are implemented and how to apply them to meet data challenges
  • Proficiency in SQL and relational databases experience
  • Demonstrated interest in the Data Engineering field through academic coursework, previous work or internship experience, or personal projects
Job Responsibility:
  • Influence product teams
  • Inform Data Science and Analytics Platform teams
  • Partner with data consumers and products to ensure quality and usefulness of data assets
  • Help strategize measurement, collect data, and generate insights
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Fulltime

Data Engineer Intern

We deliver sustainable, extraordinary growth by creating a new, unique, inspirin...
Location: China, Shanghai
Salary: Not provided
IKEA
Expiration Date: Until further notice
Requirements:
  • Good interpersonal skills with the ability to collaborate, network, and build strong relations with team members and stakeholders
  • Good knowledge of advanced data structures and distributed computing
  • Good knowledge of AI and machine learning concepts and algorithms
  • Broad knowledge of programming languages (e.g., Python, Java, Go, or Scala), including concepts from functional and object-oriented programming paradigms
  • Experience with AI/ML frameworks such as TensorFlow and PyTorch
  • Project experience with prompt tuning or fine-tuning of mainstream large language models such as ChatGPT (3.5, 4.0) and Meta Llama 2
  • Fluent in English
  • Availability to work in the office at least 3 days per week, with an internship duration of at least 6 months
Job Responsibility:
  • Drive the transformation of IKEA into a more data-driven company by building and operating modern platforms and systems aligned with our constantly evolving data and AI landscape
  • Build the AI ecosystem at the top retail company
  • Play with PB-level data from IKEA ecosystems (online channels, retail, customer fulfillment, etc.)
  • Work with top talents and get a jumpstart to your career
  • Parttime

Product Data Engineering Intern

Product Data Engineering Intern role at Hewlett Packard Enterprise. This is an o...
Location: Puerto Rico, Aguadilla
Salary: Not provided
Hewlett Packard Enterprise
Expiration Date: Until further notice
Requirements:
  • Currently pursuing a Bachelor's degree in Systems Engineering, Industrial Engineering or Computer Engineering
  • Familiarity with SAP
  • Basic programming or scripting knowledge (e.g., Python, Java, C++)
  • Strong interest in high-tech and passion for learning
  • Excellent communication and interpersonal skills
  • Strong problem-solving and analytical skills
  • Time management skills and working with strict deadlines
  • A collaborative, solution-focused mindset and overall sense of urgency
Job Responsibility:
  • Support senior team members on assigned technical projects
  • Help identify and troubleshoot technical issues, providing support and suggesting solutions
  • Assist with maintaining and updating hardware, software, and other technical systems
  • Participate in team activities by attending team meetings, learning about project methodologies, and collaborating effectively with colleagues
  • Actively engage in learning about new technologies and methodologies relevant to work
  • Fulfill tasks and responsibilities assigned by a supervisor in a timely and efficient manner
  • Participate in periodic reviews to share updates and incorporate feedback on assigned projects/initiatives
What we offer:
  • Health & Wellbeing benefits
  • Personal & Professional Development programs
  • Unconditional Inclusion environment
  • Comprehensive suite of benefits supporting physical, financial and emotional wellbeing
  • Fulltime

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location: India, Chennai, Madurai, Coimbatore
Salary: Not provided
OptiSol Business Solutions
Expiration Date: Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
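The "window functions and CTEs" requirement above can be shown in a few lines. The sketch below uses Python's built-in sqlite3 (whose SQL dialect supports window functions) against a made-up orders table; the schema and numbers are hypothetical, chosen only to illustrate the pattern.

```python
import sqlite3

# In-memory table with invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EU", 100.0), ("EU", 250.0), ("US", 300.0), ("US", 50.0)],
)

query = """
WITH regional AS (  -- CTE: each row annotated with its region's total
    SELECT region,
           amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM orders
)
SELECT region, amount,
       ROUND(100.0 * amount / region_total, 1) AS pct_of_region
FROM regional
ORDER BY region, amount
"""
for row in conn.execute(query):
    print(row)
```

The same shape (CTE feeding a windowed aggregate) carries over to Spark SQL or any warehouse engine; only the connection layer changes.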
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
  • Fulltime

Data and Analytics Engineer Intern

The Data and Analytics Engineer Intern will assist in designing, building, and t...
Location: United States, Irvine
Salary: 17.00 - 22.00 USD / Hour
Trace3
Expiration Date: Until further notice
Requirements:
  • Enrollment in the Junior or Senior year of an undergraduate program or master’s program at an accredited college or university
  • Candidates should be pursuing a field of study applicable to the Data Intelligence internship
  • Cumulative grade point average (GPA) of 3.0 or better
  • Ability to work independently on assigned tasks and accept direction on given assignments
  • Self-motivated individuals with a customer mindset and desire to help people
  • Enthusiasm for technical problem solving with attention to detail and strong communication skills
  • Ability to learn and research in a dynamic and engaging environment
  • Availability to work 40 hours per week throughout the internship
Job Responsibility:
  • Assist in designing, building, and testing data platforms and analytics solutions to generate actionable insights for our customers
  • Partner with our Data Intelligence Team to determine the best approach around data ingestion, structure, and storage, then work with the team to ensure these are implemented accurately
  • Contribute ideas on how to make our customers’ data more valuable and work with members of Trace3’s Engineering Team to implement solutions
What we offer:
  • Comprehensive medical, dental and vision plans for you and your dependents
  • 401(k) Retirement Plan with Employer Match, 529 College Savings Plan, Health Savings Account, Life Insurance, and Long-Term Disability
  • Competitive Compensation
  • Training and development programs
  • Major offices stocked with snacks and beverages
  • Collaborative and cool culture
  • Work-life balance and generous paid time off
  • Fulltime

Data Engineer, Enterprise Data, Analytics and Innovation

Are you passionate about building robust data infrastructure and enabling innova...
Location: United States
Salary: 110000.00 - 125000.00 USD / Year
Vaniam Group
Expiration Date: Until further notice
Requirements:
  • 5+ years of professional experience in data engineering, ETL, or related roles
  • Strong proficiency in Python and SQL for data engineering
  • Hands-on experience building and maintaining pipelines in a lakehouse or modern data platform
  • Practical understanding of Medallion architectures and layered data design
  • Familiarity with modern data stack tools, including Spark or PySpark; workflow orchestration (Airflow, dbt, or similar); testing and observability frameworks; and containers (Docker) with Git-based version control
  • Excellent communication skills, problem-solving mindset, and a collaborative approach
Job Responsibility:
  • Design, build, and operate reliable ETL and ELT pipelines in Python and SQL
  • Manage ingestion into Bronze, standardization and quality in Silver, and curated serving in Gold layers of our Medallion architecture
  • Maintain ingestion from transactional MySQL systems into Vaniam Core to keep production data flows seamless
  • Implement observability, data quality checks, and lineage tracking to ensure trust in all downstream datasets
  • Develop schemas, tables, and views optimized for analytics, APIs, and product use cases
  • Apply and enforce best practices for security, privacy, compliance, and access control, ensuring data integrity across sensitive healthcare domains
  • Maintain clear and consistent documentation for datasets, pipelines, and operating procedures
  • Lead the integration of third-party datasets, client-provided sources, and new product-generated data into Vaniam Core
  • Partner with product and innovation teams to build repeatable processes for onboarding new data streams
  • Ensure harmonization, normalization, and governance across varied data types (scientific, engagement, operational)
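The Bronze/Silver/Gold responsibilities above follow the standard Medallion pattern: raw data lands as-is, is cleaned and typed, and is then served as curated aggregates. A minimal Pandas sketch of that flow, with entirely made-up columns and values (Vaniam Core itself is not modeled here):

```python
import pandas as pd

# Bronze: raw ingestion, kept exactly as received (illustrative values).
bronze = pd.DataFrame({
    "patient_id": ["p1", "p1", "p2", None],
    "visit_date": ["2024-01-05", "2024-01-05", "2024-02-10", "2024-03-01"],
    "score": ["7", "7", "9", "4"],
})

# Silver: standardize types, drop duplicates and rows failing quality checks.
silver = (
    bronze.dropna(subset=["patient_id"])
    .drop_duplicates()
    .assign(
        visit_date=lambda d: pd.to_datetime(d["visit_date"]),
        score=lambda d: d["score"].astype(int),
    )
)

# Gold: a curated, analytics-ready aggregate for downstream consumers.
gold = silver.groupby("patient_id", as_index=False)["score"].mean()
print(gold)
```

In a lakehouse each layer would be a persisted table with lineage and quality checks between hops; the frame-to-frame steps here only show the layering logic.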
What we offer:
  • 100% remote environment with opportunities for local meet-ups
  • Positive, diverse, and supportive culture
  • Passionate about serving clients focused on Cancer and Blood diseases
  • Investment in you with opportunities for professional growth and personal development through Vaniam Group University
  • Health benefits – medical, dental, vision
  • Generous parental leave benefit
  • Focused on your financial future with a 401(k) Plan and company match
  • Work-Life Balance and Flexibility
  • Flexible Time Off policy for rest and relaxation
  • Volunteer Time Off for community involvement
  • Fulltime

Data Engineer

Become a player in our data engineering team, grow on a personal level and help ...
Location: Serbia, Novi Beograd
Salary: Not provided
MDPI
Expiration Date: Until further notice
Requirements:
  • A university degree, ideally in Computer Science or related science, technology or engineering field
  • 2+ years of relevant work experience in data engineering roles
  • Experience in data acquisition, data lakes, warehousing, modeling, and orchestration
  • Proficiency in SQL (including window functions and CTE)
  • Proficiency in RDBMS (e.g., MySQL, PostgreSQL)
  • Strong programming skills in Python (with libraries like Polars, optionally Arrow / PyArrow API)
  • First exposure to OLAP query engines (e.g., Clickhouse, DuckDB, Apache Spark)
  • Familiarity with Apache Airflow (or similar tools like Dagster or Prefect)
  • Strong teamwork and communication skills
  • Ability to work independently and manage your time effectively
Job Responsibility:
  • Assist in designing, building, and maintaining efficient data pipelines
  • Work on data modeling tasks to support the creation and maintenance of data warehouses
  • Integrate data from multiple sources, ensuring data consistency and reliability
  • Collaborate in implementing and managing data orchestration processes and tools
  • Help establish monitoring systems to maintain high standards of data quality and availability
  • Work closely with the Data Architect, Senior Data Engineers, and other members across the organization on various data infrastructure projects
  • Participate in the optimization of data processes, seeking opportunities to enhance system performance
What we offer:
  • Competitive salary and benefits package

Senior Data Engineer

As a Senior Software Engineer, you will play a key role in designing and buildin...
Location: United States
Salary: 156000.00 - 195000.00 USD / Year
Apollo.io
Expiration Date: Until further notice
Requirements:
  • 5+ years of experience in platform engineering, data engineering, or a data-facing role
  • Experience in building data applications
  • Deep knowledge of the data ecosystem with an ability to collaborate cross-functionally
  • Bachelor's degree in a quantitative field (Physical / Computer Science, Engineering or Mathematics / Statistics)
  • Excellent communication skills
  • Self-motivated and self-directed
  • Inquisitive, able to ask questions and dig deeper
  • Organized, diligent, and great attention to detail
  • Acts with the utmost integrity
  • Genuinely curious and open
Job Responsibility:
  • Architect and build robust, scalable data pipelines (batch and streaming) to support a variety of internal and external use cases
  • Develop and maintain high-performance APIs using FastAPI to expose data services and automate data workflows
  • Design and manage cloud-based data infrastructure, optimizing for cost, performance, and reliability
  • Collaborate closely with software engineers, data scientists, analysts, and product teams to translate requirements into engineering solutions
  • Monitor and ensure the health, quality, and reliability of data flows and platform services
  • Implement observability and alerting for data services and APIs (think logs, metrics, dashboards)
  • Continuously evaluate and integrate new tools and technologies to improve platform capabilities
  • Contribute to architectural discussions, code reviews, and cross-functional projects
  • Document your work, champion best practices, and help level up the team through knowledge sharing
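Observability for a data service, as in the logs/metrics/dashboards bullet above, often starts with structured logs plus a handful of counters. A stdlib-only sketch; the record shape and metric names are invented for illustration, and a real service would export counters to a metrics backend rather than hold them in a dict:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

# In-process counters standing in for a real metrics registry.
metrics = {"rows_in": 0, "rows_out": 0, "errors": 0}

def process(records):
    """Normalize records, counting rows in/out and malformed drops."""
    start = time.perf_counter()
    out = []
    for rec in records:
        metrics["rows_in"] += 1
        try:
            out.append({"id": int(rec["id"]), "value": float(rec["value"])})
            metrics["rows_out"] += 1
        except (KeyError, ValueError):
            metrics["errors"] += 1
            log.warning("dropped malformed record: %r", rec)
    log.info("processed %d rows in %.3fs",
             metrics["rows_in"], time.perf_counter() - start)
    return out

rows = process([{"id": "1", "value": "2.5"}, {"id": "x", "value": "3"}])
```

Alerting then becomes a rule over the exported counters (e.g. error rate above a threshold), which is what dashboards and pagers consume.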
What we offer:
  • Equity
  • Company bonus or sales commissions/bonuses
  • 401(k) plan
  • At least 10 paid holidays per year
  • Flex PTO
  • Parental leave
  • Employee assistance program and wellbeing benefits
  • Global travel coverage
  • Life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits
  • Fulltime