Data Engineer 3

Comcast

Location:
India, Chennai

Contract Type:
Not provided

Salary:

Not provided

Job Description:

Responsible for designing, building and overseeing the deployment and operation of technology architecture, solutions and software to capture, manage, store and utilize structured and unstructured data from internal and external sources. Establishes and builds processes and structures based on business and technical requirements to channel data from multiple inputs, route appropriately and store using any combination of distributed (cloud) structures, local databases, and other applicable storage forms as required.

Job Responsibility:

  • Design, build and oversee deployment and operation of technology architecture, solutions and software to capture, manage, store and utilize structured and unstructured data
  • Establish and build processes and structures to channel data from multiple inputs
  • Develop technical tools and programming that leverage AI, machine learning and big-data techniques to cleanse, organize and transform data
  • Create and establish design standards and assurance processes for software, systems and applications development
  • Review internal and external business and product requirements for data operations
  • Work with data modelers/analysts to understand business problems and create or augment data assets
  • Develop data structures and pipelines to organize, collect, standardize and transform data
  • Ensure data quality during ingest, processing and final load
  • Create standard ingestion frameworks for structured and unstructured data
  • Create standard methods for end users to consume data (database views, extracts, APIs)
  • Develop and maintain information systems (data warehouses, data lakes)
  • Participate in implementing solutions via data architecture on on-prem and cloud platforms
  • Determine appropriate storage platform based on privacy, access and sensitivity
  • Understand data lineage and transformation rules
  • Collaborate with technology partners to optimize data sourcing and processing
  • Develop strategies for data acquisition, archive recovery, and database implementation
  • Manage data migrations/conversions and troubleshoot data processing issues
  • Apply data privacy rules consistently
  • Identify and react to system notifications to ensure quality standards
  • Solve critical issues and share knowledge
  • Work nights and weekends, variable schedule(s) as necessary

Requirements:

  • Bachelor's Degree
  • 5-7 Years Relevant Work Experience
  • In-depth experience, knowledge and skills in own discipline
  • Experience in designing, building and overseeing deployment and operation of technology architecture, solutions and software for data
  • Experience in developing data structures and pipelines
  • Experience with data acquisition, archive recovery, and database implementation
  • Experience with on-prem platforms like Kubernetes and Teradata
  • Experience with Cloud platforms like Databricks, AWS S3, Redshift
  • Understanding of data lineage and transformation rules
  • Understanding of data sensitivity and customer data privacy rules and regulations

What we offer:
  • Paid Time off
  • Physical Wellbeing benefits
  • Financial Wellbeing benefits
  • Emotional Wellbeing benefits
  • Life Events + Family Support benefits

Additional Information:

Job Posted:
January 16, 2026

Employment Type:
Full-time

Similar Jobs for Data Engineer 3

Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
Canada, Toronto
Salary:
124000.00 - 145000.00 CAD / Year
Robinhood
Expiration Date:
Until further notice

Requirements:
  • 3+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc.)
  • Strong SQL (Presto, Spark SQL, etc.) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer:
  • bonus opportunities
  • equity
  • benefits

Employment Type:
Full-time

Data Architect / Engineer

The Data Architect – Engineer will shape the future of the telecommunications in...
Location:
France
Salary:
Not provided
NETWORK SOLUTIONS FACTORY
Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience as a Data Architect, Data Engineer, or similar role in large-scale data environments
  • Expertise in data modeling, metadata management, and data governance
  • Experience with ETL/ELT frameworks and data pipeline orchestration
  • Solid SQL skills and experience with databases and distributed architectures
  • Professional working English
Job Responsibility:
  • Define data standards, governance policies, naming conventions, and documentation practices
  • Build scalable data pipelines and lakehouse solutions
  • Develop and optimize data models for analytics, reporting, and artificial intelligence / machine learning.
  • Optimize data architectures for performance, cost-efficiency, and reliability
  • Evaluate and recommend data platforms, storage technologies, and integration patterns
  • Collaborate with analytics and business teams to understand data needs and translate them into architectural designs
  • Provide technical leadership on data projects and mentor junior team members
What we offer:
  • Remote work (with a compensatory allowance) up to 3 days a week
  • Profit-sharing and employee savings plan
  • Annual vacation bonus
  • Meal vouchers subsidized at 59.89% by NetSF, up to €11
  • Family health insurance, subsidized at 75% by NetSF
  • Provision of additional health insurance
  • A friendly work atmosphere: monthly breakfast, internal newsletter, fruit basket, free coffee and tea, etc.
  • Transport allowance and sustainable mobility package
  • International work environment

Employment Type:
Full-time

Data & Analytics Engineer

As a Data & Analytics Engineer with MojoTech you will work with our clients to s...
Location:
United States
Salary:
90000.00 - 150000.00 USD / Year
MojoTech
Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience in Data Engineering, Data Science, Data Warehousing
  • Strong experience in Python
  • Experience building and maintaining ETL/ELT pipelines, data warehouses, or real-time analytics systems
  • BA/BS in Computer Science, Data Science, Engineering, or a related field or equivalent experience in data engineering or analytics
  • Track record of developing and optimizing scalable data solutions and larger-scale data initiatives
  • Strong understanding of best practices in data management, including sustainment, governance, and compliance with data quality and security standards
  • Commitment to continuous learning and sharing knowledge with the team
Job Responsibility:
  • Work with our clients to solve complex problems and to deliver high quality solutions as part of a team
  • Collaborating with product managers, designers, and clients, you will lead discussions to define data requirements and deliver actionable insights and data pipelines to support client analytics needs
What we offer:
  • Performance based end of year bonus
  • Medical, Dental, FSA
  • 401k with 4% match
  • Trust-based time off
  • Catered lunches when in office
  • 5 hours per week dedicated to self-directed learning, innovation projects, or skill development
  • Dog Friendly Offices
  • Paid conference attendance/yearly education stipend
  • Custom workstation
  • 6 weeks parental leave

Employment Type:
Full-time

Senior Data Engineer

Kiddom is redefining how technology powers learning. We combine world-class curr...
Location:
United States, San Francisco
Salary:
150000.00 - 220000.00 USD / Year
Kiddom
Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience as a data engineer
  • 8+ years of software engineering experience (including data engineering)
  • Proven experience as a Data Engineer or in a similar role with strong data modeling, architecture, and design skills
  • Strong understanding of data engineering principles including infrastructure deployment, governance and security
  • Experience with MySQL, Snowflake, and Cassandra, and familiarity with graph databases (Neptune or Neo4j)
  • Proficiency in SQL and Python (Golang a plus)
  • Proficient with AWS offerings such as AWS Glue, EKS, ECS and Lambda
  • Excellent communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders
  • Strong understanding of PII compliance and best practices in data handling and storage
  • Strong problem-solving skills, with a knack for optimizing performance and ensuring data integrity and accuracy
Job Responsibility:
  • Design, implement, and maintain the organization’s data infrastructure, ensuring it meets business requirements and technical standards
  • Deploy data pipelines to AWS infrastructure such as EKS, ECS, Lambdas and AWS Glue
  • Develop and deploy data pipelines to clean and transform data to support other engineering teams, analytics and AI applications
  • Extract and deploy reusable features to Feature stores such as Feast or equivalent
  • Evaluate and select appropriate database technologies, tools, and platforms, both on-premises and in the cloud
  • Monitor data systems and troubleshoot issues related to data quality, performance, and integrity
  • Work closely with other departments, including Product, Engineering, and Analytics, to understand and cater to their data needs
  • Define and document data workflows, pipelines, and transformation processes for clear understanding and knowledge sharing
What we offer:
  • Meaningful equity
  • Health insurance benefits: medical (various PPO/HMO/HSA plans), dental, vision, disability and life insurance
  • One Medical membership (in participating locations)
  • Flexible vacation time policy (subject to internal approval); average usage is 4 weeks off per year
  • 10 paid sick days per year (pro rated depending on start date)
  • Paid holidays
  • Paid bereavement leave
  • Paid family leave after birth/adoption: a minimum of 16 paid weeks for birthing parents and 10 weeks for caretaker parents, meant to supplement state benefits
  • Commuter and FSA plans

Employment Type:
Full-time

Data Engineer

We are looking for a Data Engineer to scale our Data Operations team and help us...
Location:
Spain, Barcelona
Salary:
Not provided
Yokoy
Expiration Date:
Until further notice

Requirements:
  • 3+ years of relevant experience as a Data Engineer, Business Intelligence, Big-Data Engineer or similar role working with large-scale data systems
  • Excellent communication skills, both written and spoken, in English
  • Mastery of SQL, optimizing queries for performance, scalability, and ease of maintenance
  • Comfortable querying different types of databases (PostgreSQL, Redshift, Snowflake) and have knowledge of different AWS services
  • Expert at data modeling, accustomed to designing and implementing complex architectures with an eye on their future evolution and the needs of multiple users
  • Experience building data pipelines using Python
  • Experience integrating data from multiple sources including DBs, product tracking, and APIs
  • Instinct for automation
  • Desire to work in an international environment, with minimal direction, and with highly engaged individuals
Job Responsibility:
  • Ensure we are able to keep all data in our DWH updated and accessible for analysis
  • Partner with the Data analysts to support the requirements of the company in terms of analytics, reliability and efficiency
  • Develop and maintain data pipelines to extract data from different sources and integrate it in the DWH following data modeling best practices
  • Take charge of the required data processing while ensuring sustainable and organic growth of the data model and the infrastructure
  • Keep our data infrastructure up to date and running like clockwork
  • Integrate and model datasets from different sources
  • Support our Data Analysts & BI Developers to get the right data to build awesome dashboards and complex analytical models
  • Support Product and Analytics teams in defining the best approaches for data modeling
  • Use data to investigate and help resolve issues in our product or processes
  • Proactively suggest improvements to data reliability, efficiency and quality
What we offer:
  • Competitive compensation including equity in the company
  • Generous vacation days so you can rest and recharge
  • Health perks such as private healthcare or gym allowance depending on your location
  • Flexible compensation plan to help you diversify and increase the net salary
  • Unforgettable TravelPerk events, including travel to one of our hubs
  • A mental health support tool for your wellbeing
  • Exponential growth opportunities

Employment Type:
Full-time

AWS Data Engineer

AlgebraIT is hiring an AWS Data Engineer in Austin, Texas! If you have at least ...
Location:
United States, Austin
Salary:
Not provided
AlgebraIT
Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience in data engineering with AWS
  • Proficiency in Python, SQL, and big data tools
  • Experience with AWS services such as Lambda and EC2
  • Strong communication and teamwork skills
  • Bachelor’s in Computer Science or similar
Job Responsibility:
  • Develop and maintain data pipelines using AWS services
  • Automate data ingestion and processing workflows
  • Collaborate with cross-functional teams to ensure robust data solutions
  • Monitor and optimize data pipeline performance
  • Ensure data quality and implement security best practices
  • Integrate data from multiple sources for analytics
  • Implement data validation and error-handling processes
  • Write and maintain technical documentation for data workflows
  • Manage and configure cloud infrastructure related to data pipelines
  • Provide technical support and troubleshooting for data-related issues

Employment Type:
Full-time

Data Engineer

At Adyen, we treat data and data artifacts as first-class citizens. They form ou...
Location:
Netherlands, Amsterdam
Salary:
Not provided
Adyen
Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience working as a Data Engineer or in a similar role
  • Solid understanding of both Software and Data Engineering practices
  • Proficient in tools and languages such as Python, PySpark, Airflow, Hadoop, Spark, Kafka, SQL, and Git
  • Able to effectively communicate complex data-related concepts and outcomes to a diverse range of stakeholders
  • Capable of identifying opportunities, devising solutions, and handling projects independently
  • Experimental mindset with a ‘launch fast and iterate’ mentality
  • Skilled in promoting a data-centric culture within technical teams and advocating for setting standards and continuous improvement
Job Responsibility:
  • Collaborative Solution Development: Engage with a diverse range of stakeholders, including data scientists, analysts, software engineers, product managers, and customers, to understand their requirements and craft effective solutions
  • Quality Pipelines and Architecture: Design, develop, deploy and operate high-quality production ELT pipelines and data architectures. Integrate data from various sources and formats, ensuring compatibility, consistency, and reliability
  • Data Best Practices: Help establish and share best practices in performance, code quality, data validation, data governance, and discoverability in your team and in other teams. Participate in mentoring and knowledge sharing initiatives
  • High Quality Data and Code: Ensure data is accurate, complete, reliable, relevant, and timely. Implement testing, monitoring and validation protocols for your code and data, leveraging tools such as Pytest
  • Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and systems. Improve query performance and resource utilization to meet SLAs and performance requirements, using techniques such as Spark optimizations

Middle Data Engineer

At LeverX, we have had the privilege of delivering over 950 projects. With 20+ y...
Location:
Not provided
Salary:
Not provided
LeverX
Expiration Date:
Until further notice

Requirements:
  • 3–5 years of experience in data engineering
  • Strong SQL and solid Python for data processing
  • Hands-on experience with at least one cloud and a modern warehouse/lakehouse: Snowflake, Redshift, Databricks, or Apache Spark/Iceberg/Delta
  • Experience delivering on Data Warehouse or Lakehouse projects: star/snowflake modeling, ELT/ETL concepts
  • Familiarity with orchestration (Airflow, Prefect, or similar) and containerization fundamentals (Docker)
  • Understanding of data modeling, performance tuning, cost-aware architecture, and security/RBAC
  • English B1+
Job Responsibility:
  • Design, build, and maintain batch/streaming pipelines (ELT/ETL) from diverse sources into DWH/Lakehouse
  • Model data for analytics (star/snowflake, slowly changing dimensions, semantic/metrics layers)
  • Write production-grade SQL and Python; optimize queries, file layouts, and partitioning
  • Implement orchestration, monitoring, testing, and CI/CD for data workflows
  • Ensure data quality (validation, reconciliation, observability) and document lineage
  • Collaborate with BI/analytics to deliver trusted, performant datasets and dashboards
What we offer:
  • Projects in different domains: Healthcare, manufacturing, e-commerce, fintech, etc.
  • Projects for every taste: Startup products, enterprise solutions, research & development projects, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay in the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being