Senior Data Engineer

Citi

Location:
United States, Irving

Contract Type:
Employment contract

Salary:

125760.00 - 188640.00 USD / Year

Job Description:

Citi is seeking a highly skilled and experienced Senior Data Engineer to join our dynamic and innovative technology team. The ideal candidate will have a robust background in data engineering, with deep expertise in a variety of modern data technologies and a proven track record of working on large-scale data projects. This role will be pivotal in designing, building, and optimizing our data infrastructure on cloud platforms, and will also provide exposure to cutting-edge Artificial Intelligence projects, including Retrieval-Augmented Generation (RAG) and Agentic AI systems. The candidate must be proficient in Agile methodologies and possess strong leadership and client-facing skills to guide projects to successful completion while balancing stakeholder needs and organizational goals.

Job Responsibility:

  • Design, build, and maintain scalable ETL/ELT pipelines using PySpark, Spark SQL, and Delta Lake on Databricks, ensuring efficient ingestion, transformation, and integration of large-scale datasets across cloud platforms.
  • Cloud Data Platform Management: Implement and manage data solutions on cloud platforms (e.g., AWS, GCP, Azure). Leverage cloud-native services for data storage, processing, and analytics.
  • Big Data Technologies: Work extensively with big data frameworks and platforms such as Databricks, Snowflake, and open table formats like Apache Iceberg to process and analyze petabyte-scale datasets.
  • Optimize Spark workloads and Databricks clusters by tuning jobs, managing partitioning strategies, caching, and autoscaling to improve performance, reduce processing time, and control infrastructure costs.
  • Implement and manage Lakehouse architecture using Delta Lake, enforcing data quality, schema evolution, and governance (e.g., Unity Catalog), while ensuring reliable, secure, and high-quality data for analytics and downstream applications.
  • Lead the design and architecture of Starburst-based data solutions, ensuring scalability, performance, and reliability for enterprise-level data platforms.
  • Implement and manage data federation strategies using Starburst connectors to seamlessly integrate and query data across disparate systems (e.g., Data Lakes, RDBMS, NoSQL databases, Cloud Storage).
  • Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and queries. Optimize data storage and processing for cost and efficiency.
  • Develop and optimize robust data pipelines with a strong focus on data governance, ensuring high data quality, comprehensive data lineage, and efficient, compliant data flow from ingestion to consumption for analytical and operational needs.
  • Data Modeling and Architecture: Design and implement data models that support business intelligence, analytics, and machine learning use cases. Ensure data architecture is robust, scalable, and secure.
  • AI and Machine Learning Collaboration: Partner with data scientists and AI specialists to support the development and deployment of AI models. Contribute to innovative projects involving RAG and Agentic AI by providing the necessary data infrastructure and support.
  • Agile Methodology: Operate effectively within an Agile development environment, actively participating in sprint planning, daily stand-ups, and retrospectives to ensure iterative and timely delivery of project milestones.
  • Leadership and Project Guidance: Provide technical leadership to steer the project in the right direction, making critical decisions that align with both client interests and the organization's strategic benefits. Mentor junior engineers and promote best practices.
  • Stakeholder and Client Interaction: Serve as a key point of contact for stakeholders and clients. Effectively communicate project progress, manage expectations, and translate complex business requirements into actionable technical tasks.
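
The pipeline work described above follows a common extract-transform-validate shape. The sketch below illustrates that shape in plain Python (all names, such as `clean_record`, are illustrative and not from the posting); a production PySpark/Delta Lake job would apply the same steps as distributed DataFrame operations rather than in-memory lists.

```python
import csv
import io

def clean_record(row):
    """Normalize one raw record: trim whitespace, cast the amount to float.
    (Field names here are invented for illustration.)"""
    return {"id": row["id"].strip(), "amount": float(row["amount"])}

def run_pipeline(raw_csv):
    """Minimal extract-transform-validate pipeline. A Spark job applies the
    same shape (read -> map -> filter -> write), just across a cluster."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    cleaned = [clean_record(r) for r in reader]
    # Data-quality gate: drop records with non-positive amounts.
    return [r for r in cleaned if r["amount"] > 0]
```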

Requirements:

  • Python: Expert-level proficiency with Python and its data ecosystem (e.g., Pandas, NumPy, Dask). Experience should include writing production-grade code for data processing, automation, and API development.
  • PySpark: Extensive hands-on experience with the Spark framework, including deep knowledge of the DataFrame API, Spark SQL, and performance tuning techniques for distributed data processing.
  • Databricks: Proven experience developing on the Databricks Lakehouse Platform, including proficiency with Delta Lake, structured streaming, and optimizing Spark jobs within the Databricks environment.
  • Ab Initio: Strong, practical experience with the Ab Initio suite of products (GDE, Co>Operating System, Conduct>It) for designing and implementing enterprise-grade ETL workflows.
  • Snowflake: Hands-on experience designing, building, and maintaining data warehouses in Snowflake. This includes data modeling, implementing security (RBAC), performance tuning, and utilizing features like Snowpipe and Time Travel.
  • Starburst/Trino: Experience using federated query engines to provide unified access across disparate data sources. Should understand the principles of query federation and have experience connecting to various data systems.
  • Apache Iceberg: Familiarity or experience with open table formats like Apache Iceberg for managing large analytic datasets.
  • In-depth knowledge and multi-year experience with at least one major cloud provider (AWS, Google Cloud Platform, or Azure).
  • Practical experience building and managing data pipelines using cloud-native services such as AWS Glue, Lambda, S3, and Redshift; Azure Data Factory and Synapse Analytics; or Google Cloud Composer, Dataflow, and BigQuery
  • A solid understanding of the data lifecycle required for machine learning projects.
  • Experience in building data pipelines to support AI/ML models. Exposure to or a strong interest in preparing data for advanced AI applications, such as building ingestion and transformation pipelines for vector databases used in Retrieval-Augmented Generation (RAG) and Agentic AI systems.
  • Agile Proficiency: Deep familiarity with Agile and Scrum methodologies, with a proven ability to deliver projects iteratively and adapt to changing requirements.
  • Leadership & Influence: Demonstrated ability to provide technical leadership, influence architectural decisions, and steer projects towards successful outcomes that align with both client needs and long-term organizational strategy.
  • Client Engagement: Exceptional communication and interpersonal skills, with proven proficiency in client interaction. Must be able to articulate complex technical concepts to diverse audiences and build strong stakeholder relationships.
  • 6-10 years of hands-on experience in data engineering, preferably within a large-scale enterprise or financial services environment.
  • Demonstrable experience leading project work streams and mentoring junior team members.
  • Relevant industry certifications (e.g., AWS Certified Big Data, Google Professional Data Engineer, Snowflake SnowPro).
  • Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
  • Deep understanding of data governance, data quality, and data security principles.
  • Excellent analytical and problem-solving skills with the ability to work independently or as part of a team.
  • Experience as an Applications Development Manager
  • Senior-level experience in an Applications Development role
  • Stakeholder and people management experience
  • Demonstrated leadership skills
  • Proven project management skills
  • Basic knowledge of industry practices and standards
  • Consistently demonstrates clear and concise written and verbal communication
  • Bachelor's degree/University degree or equivalent experience
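
The RAG-related requirement above centers on preparing data for vector databases. A typical first step is splitting documents into overlapping chunks before embedding; the sketch below shows one simple word-window approach (the function name and parameters are illustrative, not from the posting).

```python
def chunk_text(text, max_words=50, overlap=10):
    """Split a document into overlapping word-window chunks -- a common
    preprocessing step before embedding text into a vector database
    for Retrieval-Augmented Generation."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # the final window already reaches the end of the text
    return chunks
```

The overlap preserves context that would otherwise be cut at chunk boundaries, at the cost of some duplicated storage in the vector store.
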
What we offer:
  • medical, dental & vision coverage
  • 401(k)
  • life, accident, and disability insurance
  • wellness programs
  • paid time off packages, including planned time off (vacation), unplanned time off (sick leave), and paid holidays

Additional Information:

Job Posted:
May 14, 2026

Expiration:
May 18, 2026

Employment Type:
Fulltime

Work Type:
Hybrid work

Similar Jobs for Senior Data Engineer

Senior Data Engineer

Join a leading global live-entertainment discovery tech platform. As a Senior Da...
Location:
Spain, Madrid
Salary:
Not provided
Fever
Expiration Date:
Until further notice
Requirements:
  • You have a strong background in at least two of: data engineering, business intelligence, software engineering
  • You are an expert in Python3 and its data ecosystem
  • You have proven experience working with SQL languages
  • You have worked with complex data pipelines
  • You are a collaborative team player with strong communication skills
  • You are proactive, driven, and bring positive energy
  • You possess strong analytical and problem-solving abilities backed by solid software engineering skills
  • You are proficient in business English.
Job Responsibility:
  • Own critical data pipelines of our data warehouse
  • Ideate and implement tools and processes to exploit data
  • Work closely with other business units to create structured and scalable solutions
  • Contribute to the development of a complex data and software ecosystem
  • Build trusted data assets
  • Build automations to create business opportunities
  • Design, build and support modern data infrastructure.
What we offer:
  • Attractive compensation package with potential bonus
  • Stock options
  • 40% discount on all Fever events and experiences
  • Home office friendly
  • Responsibility from day one
  • Great work environment with a young international team
  • Health insurance
  • Flexible remuneration with 100% tax exemption through Cobee
  • English lessons
  • Gympass membership
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Go-To Market Data En...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Make our data pipelines more efficient
  • Build micro-services, architect, design, and enable self-serve capabilities at scale
  • Work on an AWS-based data lake backed by open source projects such as Spark and Airflow
  • Identify ways to make our platform better and improve user experience
  • Apply strong technical experience building highly reliable services on managing and orchestrating a multi-petabyte scale data lake
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Go-To Market Data En...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Make our data pipelines more efficient
  • Build micro-services
  • Architect, design, and enable self-serve capabilities at scale
  • Apply your strong technical experience building highly reliable services
  • Manage and orchestrate a multi-petabyte scale data lake
  • Transform vague requirements into solid solutions
  • Solve challenging problems creatively
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
  • Fulltime

Senior Data Engineering Manager

Data is a big deal at Atlassian. We ingest billions of events each month into ou...
Location:
United States, San Francisco
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • stellar people management skills and experience in leading an agile software team
  • thrive when developing phenomenal people, not just great products
  • worked closely with Data Science, analytics, and platform teams
  • expertise in building and maintaining high-quality components and services
  • able to drive technical excellence, pushing for innovation and quality
  • at least 10 years experience in a software development role as an individual contributor
  • 4+ years of people management experience
  • deep understanding of data challenges at scale and the surrounding ecosystem
  • experience with solution building and architecting with public cloud offerings such as Amazon Web Services, DynamoDB, ElasticSearch, S3, Databricks, Spark/Spark-Streaming, GraphDatabases
  • experience with Enterprise Data architectural standard methodologies
Job Responsibility:
  • build and lead a team of data engineers through hiring, coaching, mentoring, and hands-on career development
  • provide deep technical guidance in a number of aspects of data engineering in a scalable ecosystem
  • champion cultural and process improvements through engineering excellence, quality and efficiency
  • work with close counterparts in other departments as part of a multi-functional team, and build this culture in your team
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering Tea...
Location:
United States, San Francisco
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • BS in Computer Science or equivalent experience with 5+ years as Data Engineer or similar role
  • Programming skills in Python & Java (good to have)
  • Design data models for storage and retrieval to meet product requirements
  • Build scalable data pipelines using Spark, Airflow, AWS data services (Redshift, Athena, EMR), Apache projects (Spark, Flink, Hive, and Kafka)
  • Familiar with modern software development practices (Agile, TDD, CICD) applied to data engineering
  • Enhance data quality through internal tools/frameworks detecting DQ issues
  • Working knowledge of relational databases and SQL query authoring
Job Responsibility:
  • Collaborating with partners, you will design data models, acquisition processes, and applications to address needs
  • Lead business growth and enhance product experiences
  • Collaborate with Technology Teams, Global Analytical Teams, and Data Scientists across programs
  • Extract and clean data, understanding the systems that generate it
  • Improve data quality by adding sources, coding rules, and producing metrics as requirements evolve
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Fulltime

Senior Microsoft Stack Data Engineer

Hands-On Technical SENIOR Microsoft Stack Data Engineer / On Prem to Cloud Senio...
Location:
United States, West Des Moines
Salary:
155000.00 USD / Year
Robert Half
Expiration Date:
Until further notice
Requirements:
  • 5+ years of data warehouse / data lake experience
  • Advanced SQL Server
  • Strong SQL experience, working with structured and unstructured data
  • Strong SSIS ETL skills
  • Proficiency in SQL and writing SQL queries
  • Experience with SQL Server
  • Knowledge of data warehousing
  • Data warehouse experience: star schema and fact & dimension structures
  • Experience with Azure Data Lake and data lakes
  • Proficiency in ETL / SSIS and SSAS
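
The star-schema requirement above comes down to joining a fact table's measures to descriptive dimension tables. This toy example (table and field names are invented for illustration) shows in plain Python the join-and-aggregate pattern that a warehouse would execute in SQL.

```python
# Dimension table: product attributes keyed by a surrogate key.
dim_product = {
    1: {"name": "Desk", "category": "Furniture"},
    2: {"name": "Lamp", "category": "Lighting"},
}

# Fact table: one row per sale, holding a measure plus the dimension key.
fact_sales = [
    {"product_key": 1, "amount": 250.0},
    {"product_key": 2, "amount": 40.0},
    {"product_key": 1, "amount": 300.0},
]

def sales_by_category(facts, dim):
    """Join facts to the dimension and aggregate a measure -- the core
    query pattern a star schema is designed to make fast."""
    totals = {}
    for row in facts:
        category = dim[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals
```
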
Job Responsibility:
  • Modernize and build out a data warehouse, and lead the build-out of a data lake in the cloud
  • Rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting
  • All aspects of data engineering
  • Serve as technical leader of the team
What we offer:
  • Bonus
  • 2 1/2 day weekends
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
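
Real-time anomaly detection over shipment streams, as described above, typically compares each event against a rolling baseline. The sketch below shows the idea over a plain list (field names such as `transit_hours` are illustrative); in production the same logic would run inside a Kafka or Kinesis consumer.

```python
from collections import deque

def detect_delays(events, window=5, threshold=2.0):
    """Flag shipment events whose transit time exceeds the rolling mean
    by a multiplicative threshold -- a simplified stand-in for the kind
    of streaming anomaly detection a stream consumer runs continuously."""
    recent = deque(maxlen=window)   # rolling baseline of recent transit times
    anomalies = []
    for event in events:
        if len(recent) == window:
            mean = sum(recent) / window
            if event["transit_hours"] > threshold * mean:
                anomalies.append(event["shipment_id"])
        recent.append(event["transit_hours"])
    return anomalies
```
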
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
  • Fulltime

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g. applying new cutting-edge data engineering methods to improve the performance of data integration, using new tools to improve data quality, etc.)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience to deploy code artifacts into production
Job Responsibility:
  • Contribute to the development of D&A platform and analytical tools, ensuring easy and standardized access and sharing of data
  • Subject matter expert for Azure Databricks, Azure Data Factory, and ADLS
  • Help design, build and maintain data pipelines (accelerators)
  • Document and make the relevant know-how & standard available
  • Ensure pipelines and consistency with relevant digital frameworks, principles, guidelines and standards
  • Support in understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility into data quality and data assets on the D&A platform
  • Identify opportunities for data assets and D&A platform toolchain
  • Work closely together with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
What we offer:
  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning
  • Fulltime