Data Engineer with Python

Bytex Technologies

Location:

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are seeking a senior data engineer with strong Python expertise to join one of our Apple projects. This engineer will analyze data to generate business insights using Python and other relevant technologies, and will help create and maintain the analytic data pipelines that power the analytics and reporting behind decision-making across retail, leveraging large and complex data sources. Our new colleague will partner with cross-functional teams to identify, create, and maintain metrics and data pipelines that support analytics, reporting, and key decisions on topics including product launches, customer experience, and operational performance.
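
As a purely illustrative sketch of the kind of analytic pipeline work described above (not part of the original posting), the following assumes a PySpark/S3 stack like the one named in the requirements; the bucket paths, column names, and the revenue metric are hypothetical:

```python
# Illustrative only: a daily ingestion/aggregation job of the kind the role describes.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("retail-daily-sales").getOrCreate()

# Ingest raw order events (hypothetical S3 location and schema)
orders = spark.read.json("s3a://example-raw-bucket/orders/date=2026-01-01/")

# Basic cleaning and a simple metric: revenue and order count per store per day
daily_revenue = (
    orders
    .filter(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("store_id", "order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("order_id").alias("orders"),
    )
)

# Write a partitioned, analytics-ready table (hypothetical curated bucket)
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-curated-bucket/metrics/daily_revenue/")
)

spark.stop()
```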

Job Responsibility:

  • Perform data analysis to generate business insights
  • Design, create, refine, and maintain data pipelines and ingestion processes for modeling, analysis, and reporting
  • Support critical data processes running in production
  • Collaborate with other data scientists, analysts, and engineers to build full-service data solutions
  • Work with cross-functional business partners and vendors to acquire and transform raw data sources
  • Develop a deep understanding of our retail customer base and purchase choices, and contribute to developing tools that improve business efficiency and productivity

Requirements:

  • Minimum 4 years of production experience working with Python
  • Experience building robust and scalable data processes and pipelines for modeling, analysis, and reporting
  • Experience with microservices, Apache Spark, Ray, REST, Docker, Kubernetes, Redis, AWS Cloud, S3, RDBMS (such as PostgreSQL), NoSQL, and vector databases
  • Solid knowledge of FastAPI/Django/Flask, Pydantic, Matplotlib, and Plotly (see the sketch after this list)
  • Extensive experience building highly scalable and highly available REST APIs
  • Experience with Kubernetes, CI/CD deployment, Helm, and monitoring systems such as Prometheus and Grafana
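
For illustration of the FastAPI/Pydantic stack listed above (not part of the original posting), here is a minimal sketch of a typed REST endpoint; the route, model fields, and in-memory store are hypothetical:

```python
# Illustrative only: a minimal FastAPI service with a Pydantic request/response model.
# The route, fields, and in-memory "store" are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="metrics-api")

class Metric(BaseModel):
    name: str
    value: float

_store: dict[str, Metric] = {}

@app.post("/metrics", response_model=Metric)
def upsert_metric(metric: Metric) -> Metric:
    # Input is validated by Pydantic before this function runs
    _store[metric.name] = metric
    return metric

@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    if name not in _store:
        raise HTTPException(status_code=404, detail="metric not found")
    return _store[name]

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```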

Nice to have:

  • Experience with workflow systems such as Airflow
  • Knowledge of graph databases, LLM/RAG, Tensor, LangChain

What we offer:
  • Open office policy: work from anywhere, with utilities expense coverage
  • Extra days off & bi-monthly team lunches / activities
  • Stock options
  • Offers at dental and optical clinics, health subscription & sports discounts
  • Access to learning platforms & a Bookster subscription
  • Good coffee and snacks at the office & discounts at coffee shops nearby

Additional Information:

Job Posted:
January 01, 2026

Work Type:
Remote work

Similar Jobs for Data Engineer with Python

Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
NStarX
Location: India, Hyderabad
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM

Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams

What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles

Work Type: Fulltime

Senior AWS Data Engineer / Data Platform Engineer

We are seeking a highly experienced Senior AWS Data Engineer to design, build, a...
NorthBay
Location: United Arab Emirates, Dubai
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 8+ years of experience in data engineering and data platform development
  • Strong hands-on experience with: AWS Glue, Amazon EMR (Spark), AWS Lambda, Apache Airflow (MWAA), Amazon EC2, Amazon CloudWatch, Amazon Redshift, Amazon DynamoDB, and AWS DataZone

Job Responsibility:
  • Design, develop, and optimize scalable data pipelines using AWS native services
  • Lead the implementation of batch and near-real-time data processing solutions
  • Architect and manage data ingestion, transformation, and storage layers
  • Build and maintain ETL/ELT workflows using AWS Glue and Apache Spark on EMR
  • Orchestrate complex data workflows using Apache Airflow (MWAA)
  • Develop and manage serverless data processing using AWS Lambda
  • Design and optimize data warehouses using Amazon Redshift
  • Implement and manage NoSQL data models using Amazon DynamoDB
  • Utilize AWS DataZone for data governance, cataloging, and access management
  • Monitor, log, and troubleshoot data pipelines using Amazon CloudWatch

Work Type: Fulltime

Python Data Engineer

Arthur Lawrence is looking for a Python Data Engineer for one of our clients in Hous...
Arthur Lawrence
Location: United States, Houston
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 7+ years of professional Python development
  • Strong knowledge of OOP, design patterns, and SOA
  • Hands-on experience in data engineering, data pipeline development, and web scraping (Requests, BeautifulSoup, Selenium)
  • Oracle PL/SQL expertise, including stored procedures
  • Bachelor’s degree in Computer Science, MIS, or related field
  • Agile/Scrum experience

Work Type: Fulltime

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
OptiSol Business Solutions
Location: India, Chennai, Madurai, Coimbatore
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams

Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations

What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility

Work Type: Fulltime

Big Data / Scala / Python Engineering Lead

The Applications Development Technology Lead Analyst is a senior level position ...
Citi
Location: India, Chennai
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • At least two years of experience building and leading highly complex technical data engineering teams (10+ years of hands-on data engineering experience overall)
  • Lead the data engineering team, from sourcing to closing
  • Drive strategic vision for the team and product
  • Experience managing a data-focused product or ML platform
  • Hands-on experience designing, developing, and optimizing scalable distributed data processing pipelines using Apache Spark and Scala
  • Experience managing, hiring, and coaching software engineering teams
  • Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality
  • 7 to 10+ years of hands-on experience in big data development, focusing on Apache Spark, Scala, and distributed systems
  • Proficiency in Functional Programming: High proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines
  • Proficiency in Big Data Technologies: Strong experience with Apache Spark, Hadoop ecosystem tools such as Hive, HDFS, and YARN

Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems and projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide subject-matter expertise and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary

Work Type: Fulltime

Python Data Engineer

The FX Data Analytics & AI Technology team, within Citi's FX Technology organiza...
Citi
Location: India, Pune
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 8 to 12 years of experience
  • Master’s degree or above (or equivalent education) in a quantitative discipline
  • Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate
  • Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualization and API libraries (matplotlib, plotly, streamlit, Flask, etc)
  • Experience developing and implementing Gen AI applications from data in a financial context
  • Proficiency working with version control systems such as Git, and familiarity with Linux computing environments
  • Experience working with different database and messaging technologies such as SQL, KDB, MongoDB, Kafka, etc
  • Familiarity with data visualization and ideally development of analytical dashboards using Python and BI tools
  • Excellent communication skills, both written and verbal, with the ability to convey complex information clearly and concisely to technical and non-technical audiences
  • Ideally, some experience working with CI/CD pipelines and containerization technologies like Docker and Kubernetes

Job Responsibility:
  • Design, develop and implement quantitative models to derive insights from large and complex FX datasets, with a focus on understanding market trends and client behavior, identifying revenue opportunities, and optimizing the FX business
  • Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability
  • Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology
  • Develop and integrate functionality to ensure adherence with best-practices in terms of data management, need-to-know (NTK), and data governance
  • Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders

Work Type: Fulltime

Data Engineering Support Engineer / Manager

Wissen Technology is hiring a seasoned Data Engineering Support Engineer / Manag...
Wissen
Location: India, Mumbai; Pune
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • Bachelor of Technology or Master's degree in Computer Science, Engineering, or a related field
  • 8-12 years of work experience
  • Python, SQL
  • Familiarity with data engineering
  • Experience with AWS data and analytics services or similar cloud vendor services
  • Strong problem solving and communication skills
  • Ability to organise and prioritise work effectively

Job Responsibility:
  • Incident and user management for data and analytics platform
  • Development and maintenance of Data Quality framework (including anomaly detection)
  • Implementation of Python & SQL hotfixes and working with data engineers on more complex issues
  • Diagnostic tools implementation and automation of operational processes
  • Work closely with data scientists, data engineers, and platform engineers in a highly commercial environment
  • Support research analysts and traders with issue resolution

Work Type: Fulltime

Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Robinhood
Location: Canada, Toronto
Salary: 124000.00 - 145000.00 CAD / Year
Expiration Date: Until further notice

Requirements:
  • 3+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions

Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company

What we offer:
  • Bonus opportunities
  • Equity
  • Benefits

Work Type: Fulltime