Senior Data Engineer (GCP)

İTÜ ARI Teknokent

Location: Sarıyer/İstanbul, Turkey
Contract Type: Not provided
Salary: Not provided

Job Description:

Get ready to take your place at n11, an open-market platform that has made valuable contributions to the e-commerce sector since its establishment by bringing more than 330 thousand registered business partners together with customers. We are looking for a Senior Data Engineer to join our team in the Data & Insights Department. Data is at the very heart of n11, and as such, data-related positions play key roles in our strategic initiatives. The n11 Data & Insights team is looking for individuals with a background in full-lifecycle, complex data-implementation projects, such as growing our Data Lake on GCP or building and improving data processing systems and machine learning/deep learning models. The ideal candidate has experience setting up and managing cloud (i.e., GCP) and on-premise infrastructures, and is able to translate business needs into data architecture solutions and then drive the implementation of those solutions in production environments. The Data Engineer will support our teams on data initiatives, ensure that optimal data delivery architecture is consistent across ongoing projects, and help shape the technical vision and next-generation direction of the Data & Insights team's products.

Job Responsibility:

  • Analyze, design, implement, test, and document all new or modified AI/ML/DL systems, models, and applications
  • Integrate data management technologies and software engineering tools into existing structures
  • Build and manage data jobs across various data platforms (on-prem RDBMS to cloud and vice versa)
  • Implement data orchestration pipelines and data sourcing, cleansing, augmentation, and quality-control processes
  • Deploy data pipelines in GCP (a minimal orchestration sketch follows this list)
  • Apply DevOps and DataOps practices, including "infrastructure as code"
  • Leverage the latest technologies to deliver better insights more quickly and cost-effectively
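
For orientation, a minimal sketch of the orchestration work described above, assuming Cloud Composer / Airflow 2.x with the Google provider package installed; the DAG id, bucket, dataset, and schedule are invented for illustration, not taken from the posting:

    # A hedged sketch, not the team's actual code: load a daily GCS drop into BigQuery.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
        GCSToBigQueryOperator,
    )

    with DAG(
        dag_id="daily_orders_to_bq",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
        catchup=False,
    ) as dag:
        load_orders = GCSToBigQueryOperator(
            task_id="load_orders",
            bucket="example-landing-bucket",           # assumed staging bucket
            source_objects=["orders/{{ ds }}/*.csv"],  # one folder per run date
            destination_project_dataset_table="example-project.analytics.orders",
            source_format="CSV",
            write_disposition="WRITE_TRUNCATE",        # idempotent daily reload
        )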

Requirements:

  • Academic degree in Data Engineering, Computer Engineering, Computer Science, Software Engineering, Applied Mathematics or similar background
  • Minimum of 5 years work experience
  • 2+ years professional development experience with the GCP data stack
  • Expertise in architecting, developing, and managing real-time data pipelines
  • Expertise in deploying machine learning models in production
  • Experience with stream data pipeline frameworks or solutions
  • Experience working in message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Experience with Kafka, PubSub, or other event-based systems (a consumer sketch follows this list)
  • Experience working in cloud environments and with containerization frameworks, tools, and platforms (e.g., Docker, Kubernetes, GKE)
  • Expertise in building data products incrementally and integrating/managing datasets on GCP from multiple sources
  • Expertise in developing ETL/ELT workflows with Python or Scala across on-premise and cloud data sources and external systems
  • Expertise in ML/DL frameworks (e.g. PyTorch, TensorFlow, Keras)
  • Expertise in database scripting languages
  • Expertise in columnar, distributed, and row-based databases
  • Intellectual curiosity and ability to handle multiple projects and challenging deadlines
  • Strong analytical, interpersonal and communication skills
  • Ability to communicate in English
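
As a concrete, hypothetical illustration of the event-based systems named above, a minimal Google Cloud Pub/Sub consumer; the project and subscription IDs are placeholders:

    # A sketch of streaming-pull consumption from Pub/Sub; a real consumer would
    # validate, transform, and route the payload instead of printing it.
    from concurrent.futures import TimeoutError

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path("example-project", "orders-sub")

    def handle(message: pubsub_v1.subscriber.message.Message) -> None:
        print(f"received {message.data!r}")
        message.ack()  # acknowledge so the message is not redelivered

    streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
    try:
        streaming_pull.result(timeout=30)  # pull for 30 s in this demo, then stop
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()  # block until shutdown completes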

Additional Information:

Job Posted: February 01, 2026
Employment Type: Fulltime
Work Type: On-site work

Similar Jobs for Senior Data Engineer (GCP)

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Company: Cogoport
Location: Mumbai, India
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing (a streaming sketch follows this list)
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
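
A sketch, under stated assumptions, of the kind of streaming job this card describes: Spark Structured Streaming reading shipment events from Kafka. The broker, topic, and sink paths are invented, and the spark-sql-kafka connector is assumed to be on the classpath:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("shipment-tracking").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
        .option("subscribe", "shipment-events")            # assumed topic
        .load()
        .select(F.col("value").cast("string").alias("event_json"))
    )

    query = (
        events.writeStream.format("parquet")
        .option("path", "/data/tracking/events")           # assumed sink location
        .option("checkpointLocation", "/data/tracking/_checkpoints")
        .start()
    )
    query.awaitTermination()
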
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI

Employment Type: Fulltime

Senior Data Engineer

As a senior data engineer, you will help our clients with building a variety of ...
Company: Sopra Steria
Location: Brussels, Belgium
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • At least 5 years of experience as a Data Engineer or in software engineering in a data context
  • Programming experience with one or more languages: Python, Scala, Java, C/C++
  • Knowledge of relational database technologies/concepts and SQL is required
  • Experience building, scheduling and maintaining data pipelines (Spark, Airflow, Data Factory)
  • Practical experience with at least one cloud provider (GCP, AWS, or Azure); certifications from any of these are considered a plus
  • Knowledge of Git and CI/CD
  • Able to work independently, prioritize multiple stakeholders and tasks, and manage work time effectively
  • You have a degree in Computer Engineering, Information Technology or related field
  • You are proficient in English; knowledge of Dutch and/or French is a plus
Job Responsibility:
  • Gather business requirements and translate them to technical specifications
  • Design, implement and orchestrate scalable and efficient data pipelines to collect, process, and serve large datasets
  • Apply DataOps best practices to automate testing, deployment and monitoring
  • Continuously follow & learn the latest trends in the data world.
What we offer:
  • A variety of perks, such as mobility options (including a company car), insurance coverage, meal vouchers, eco-cheques, and more
  • Continuous learning opportunities through the Sopra Steria Academy to support your career development
  • The opportunity to connect with fellow Sopra Steria colleagues at various team events.

Senior Data Engineer

Senior Data Engineer role at UpGuard supporting analytics teams to extract insig...
Company: UpGuard
Location: Australia (Sydney, Melbourne, Brisbane, Hobart)
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience with data sourcing, storage, and modelling to effectively deliver business value right through to the BI platform
  • AI first mindset and experience scaling an Analytics and BI function at another SaaS business
  • Experience with Looker (Explores, Looks, Dashboards, Developer interface, dimensions and measures, models, raw SQL queries)
  • Experience with CloudSQL (PostgreSQL) and BigQuery (complex queries, indices, materialised views, clustering, partitioning)
  • Experience with Containers, Docker and Kubernetes (GKE)
  • Familiarity with n8n for automation
  • Experience with programming languages (Go for ETL workers)
  • Comfortable interfacing with various APIs (REST+JSON or MCP Server)
  • Experience with version control via GitHub and GitHub Flow
  • Security-first mindset
Job Responsibility:
  • Design, build, and maintain reliable data pipelines to consolidate information from various internal systems and third-party sources
  • Develop and manage a comprehensive semantic layer using technologies like LookML, dbt, or SQLMesh
  • Implement and enforce data quality checks, validation rules, and governance processes
  • Ensure AI agents have access to necessary structured and unstructured data
  • Create clear, self-maintaining documentation for data models, pipelines, and semantic layer
What we offer:
  • Great Place to Work certified company
  • Equal Employment Opportunity and Affirmative Action employer

Employment Type: Fulltime

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Company: Citi
Location: Mississauga, Canada
Salary: 94300.00 - 141500.00 USD / Year
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience in Big Data Engineering with PySpark (a minimal batch ETL sketch follows this list)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python, or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
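
For illustration, a minimal batch ETL sketch in PySpark, the framework this card leads with; the paths, column names, and deduplication key are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("trades-daily-etl").getOrCreate()

    trades = spark.read.parquet("/landing/trades")      # assumed raw input
    cleaned = (
        trades.dropDuplicates(["trade_id"])             # hypothetical business key
        .withColumn("trade_date", F.to_date("trade_ts"))
        .filter(F.col("amount") > 0)                    # drop zero/invalid rows
    )
    (
        cleaned.write.mode("overwrite")
        .partitionBy("trade_date")                      # partition for downstream scans
        .parquet("/curated/trades")
    )
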
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support

Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer role in Data & Analytics, Group Digital to build trusted da...
Company: IKEA
Location: Madrid, Spain
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 5+ years of hands-on experience building production data systems
  • Experience designing and operating batch and streaming pipelines on cloud platforms (GCP preferred)
  • Proficiency with tools like BigQuery, Dataflow/Beam, Pub/Sub (or Kafka), Cloud Composer/Airflow, and dbt (a minimal Beam sketch follows this list)
  • Fluent in SQL and production-grade Python/Scala for data processing and orchestration
  • Understanding of data modeling (star/snowflake, vault), partitioning, clustering, and performance at TB-PB scale
  • Experience turning ambiguous data needs into robust, observable data products with clear SLAs
  • Comfort with messy external data and geospatial datasets
  • Experience partnering with Data Scientists to productionize features, models, and feature stores
  • Ability to automate processes, codify standards, and champion governance and privacy by design (GDPR, PII handling, access controls)
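
A hedged sketch of the Pub/Sub-to-BigQuery streaming pattern named in the requirements (Dataflow/Beam); the topic and table are invented, and the target table is assumed to already exist with a matching schema:

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)  # run with DataflowRunner in production

    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromPubSub(topic="projects/example/topics/visits")
            | "Parse" >> beam.Map(json.loads)  # bytes -> dict, one event per message
            | "Write" >> beam.io.WriteToBigQuery(
                "example-project:expansion.store_visits",  # assumed dataset.table
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )
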
Job Responsibility:
  • Build Expansion360, the expansion data platform
  • Architect and operate data pipelines on GCP to ingest and harmonize internal and external data
  • Define canonical models, shared schemas, and data contracts as a single source of truth
  • Enable interactive maps and location analytics through geospatial processing at scale
  • Deliver curated marts and APIs that power scenario planning and product features
  • Implement CI/CD for data, observability, access policies, and cost controls
  • Contribute to shared libraries, templates, and infrastructure-as-code
What we offer:
  • Intellectually stimulating, diverse, and open atmosphere
  • Collaboration with world-class peers across Data & Analytics, Product, and Engineering
  • Opportunity to create measurable, global impact
  • Modern tooling on Google Cloud Platform
  • Hardware and OS of your choice
  • Continuous learning (aim to spend ~20% of time on learning)
  • Flexible, friendly, values-led working environment

Employment Type: Fulltime

Senior Data Engineer

Adswerve is looking for a Senior Data Engineer to join our Adobe Services team. ...
Company: Adswerve, Inc.
Location: United States
Salary: 130000.00 - 155000.00 USD / Year
Expiration Date: Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience)
  • 5+ years of experience in a data engineering, analytics, or marketing technology role
  • Hands-on expertise in Adobe Experience Platform (AEP), Real-Time CDP, Journey Optimizer, or similar tools is a big plus
  • Strong proficiency in SQL and hands-on experience with data transformation and modeling
  • Understanding of ETL/ELT workflows (e.g., dbt, Fivetran, Airflow, etc.) and cloud data platforms (e.g., GCP, Snowflake, AWS, Azure)
  • Experience with ingress/egress patterns and interacting with APIs to move data
  • Experience with Python or JavaScript in a data or scripting context
  • Experience with customer data platforms (CDPs), event-based tracking, or customer identity management
  • Understanding of Adobe Experience Cloud integrations (e.g., Adobe Analytics, Target, Campaign) is a plus
  • Strong communication skills with the ability to lead technical conversations and present to both technical and non-technical audiences
Job Responsibility:
  • Lead the end-to-end architecture of data ingestion and transformation in Adobe Experience Platform (AEP) using Adobe Data Collection (Tags), Experience Data Model (XDM), and source connectors
  • Design and optimize data models, identity graphs, and segmentation strategies within Real-Time CDP to enable personalized customer experiences
  • Implement schema mapping, identity resolution, and data governance strategies
  • Collaborate with Data Architects to build scalable, reliable data pipelines across multiple systems
  • Conduct data quality assessments and support QA for new source integrations and activations
  • Write and maintain internal documentation and knowledge bases on AEP best practices and data workflows
  • Simplify complex technical concepts and educate team members and clients in a clear, approachable way
  • Contribute to internal knowledge sharing and mentor junior engineers in best practices around data modeling, pipeline development, and Adobe platform capabilities
  • Stay current on the latest Adobe Experience Platform features and data engineering trends to inform client strategies
What we offer:
  • Medical, dental and vision available for employees
  • Paid time off including vacation, sick leave & company holidays
  • Paid volunteer time
  • Flexible working hours
  • Summer Fridays
  • “Work From Home Light” days between Christmas and New Year’s Day
  • 401(k) Plan with 5% company match and no vesting period
  • Employer Paid Parental Leave
  • Health-care Spending Accounts
  • Dependent-care Spending Accounts

Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Company: Adtalem Global Education
Location: Chicago, United States
Salary: 160555.00 - 176610.00 USD / Year
Expiration Date: Until further notice

Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL (a custom-operator sketch follows this list)
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
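
As a rough illustration of the custom-operator work this card mentions, a minimal Airflow operator; the class name and endpoint are invented, and a production version would add retries, authentication, and a warehouse sink rather than returning via XCom:

    import requests
    from airflow.models.baseoperator import BaseOperator

    class RestExtractOperator(BaseOperator):
        """Fetch a JSON payload from a REST endpoint for downstream ETL tasks."""

        def __init__(self, endpoint: str, **kwargs) -> None:
            super().__init__(**kwargs)
            self.endpoint = endpoint  # hypothetical source API URL

        def execute(self, context):
            response = requests.get(self.endpoint, timeout=30)
            response.raise_for_status()
            return response.json()  # returned value is pushed to XCom
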
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program

Employment Type: Fulltime