Senior Data Engineer

JLL

Location:
Mexico, Jalisco

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Senior Data Engineer P3 Job Description.

About JLL Technologies (JLLT): JLL Technologies is a specialized group within JLL that delivers unparalleled digital advisory, implementation, and services solutions to organizations globally. We provide best-in-class technologies to bring digital ambitions to life by aligning technology, people, and processes. Our goal is to leverage technology to increase the value and liquidity of the world's buildings while enhancing the productivity and happiness of those who occupy them.

What the Job Involves: We are seeking a Senior Data Engineer who is a self-starter to work in a diverse and fast-paced environment as part of our Capital Markets Data Engineering team. This individual contributor role is responsible for designing and developing data solutions that are strategic to the business and built on the latest technologies and patterns. This is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels by utilizing in-depth knowledge of data, infrastructure, technologies, and data engineering experience.

Job Responsibility:

  • Design and implement robust, scalable data pipelines using Databricks, Apache Spark, Delta Lake, and BigQuery (see the sketch after this list)
  • Design and implement efficient data pipeline frameworks, ensuring the smooth flow of data from various sources to data lakes, data warehouses, and analytical platforms
  • Troubleshoot and resolve issues related to data processing, data quality, and data pipeline performance
  • Document data infrastructure, data pipelines, and ETL processes, ensuring knowledge transfer and smooth handovers
  • Create automated tests and integrate them into testing frameworks
  • Configure and optimize Databricks workspaces, clusters, and job scheduling
  • Work in a multi-cloud environment including Azure, GCP, and AWS
  • Implement security best practices including access controls, encryption, and audit logging
  • Build integrations with market data vendors, trading systems, and risk management platforms
  • Establish monitoring and performance tuning for data pipeline health and efficiency
  • Collaborate with cross-functional teams to understand data requirements, identify potential data sources, and define data ingestion
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet their needs
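
As a rough illustration of the Databricks/Delta Lake pipeline work referenced in the first bullet, a minimal PySpark sketch might look like the following. It assumes a Databricks workspace with Delta Lake enabled; the paths, table, and column names are hypothetical placeholders, not JLL systems.

    # Minimal batch pipeline: land raw records, cleanse them, write a Delta table.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("capital-markets-ingest").getOrCreate()

    # Read raw deal records from a landing zone (hypothetical path)
    raw = spark.read.json("/mnt/landing/capital_markets/deals/")

    # Basic cleansing: cast amounts, stamp ingestion time, drop duplicate keys
    clean = (
        raw.withColumn("deal_amount", F.col("deal_amount").cast("decimal(18,2)"))
           .withColumn("ingested_at", F.current_timestamp())
           .dropDuplicates(["deal_id"])
    )

    # Write the curated layer as a Delta table; downstream consumers
    # (e.g. BigQuery exports, dashboards) read from here.
    (clean.write
          .format("delta")
          .mode("overwrite")
          .option("overwriteSchema", "true")
          .saveAsTable("curated.capital_markets_deals"))

In practice such a job would typically be parameterized and scheduled through Databricks job scheduling or a workflow orchestrator rather than run ad hoc.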

Requirements:

  • Bachelor's degree in Computer Science, Data Engineering, or a related field (Master's degree preferred)
  • 5+ years of experience in data engineering or full-stack development, with a focus on cloud-based environments
  • Advanced expertise in managing big data technologies (Python, SQL, PySpark, Spark) with a proven track record of working on large-scale data projects
  • Strong Databricks experience
  • Advanced database/backend testing with the ability to write complex SQL queries for data validation and integrity (a minimal example follows this list)
  • Strong experience in designing and implementing data pipelines, ETL processes, and workflow automation
  • Experience with data warehousing concepts, dimensional modeling, data governance best practices, and cloud-based data warehousing platforms (e.g., Google BigQuery, Snowflake)
  • Experience with cloud platforms such as Microsoft Azure or Google Cloud Platform (GCP)
  • Experience working in a DevOps model
  • Experience with Unit, Functional, Integration, User Acceptance, System, and Security testing of data pipelines
  • Proficiency in object-oriented programming and software design patterns
  • Familiarity with cutting-edge AI technologies and demonstrated ability to rapidly learn and adapt to emerging concepts and frameworks
  • Strong problem-solving skills and ability to analyze complex data processing issues
  • Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams
  • Attention to detail and commitment to delivering high-quality, reliable data solutions
  • Ability to adapt to evolving technologies and work effectively in a fast-paced, dynamic environment
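
As a rough illustration of the SQL-based data validation mentioned in the requirements, a small sketch follows. The table and columns are hypothetical, and the checks are only examples of the kind of integrity rules such tests assert; in a real setup they would be wired into an automated testing framework.

    # Hypothetical data-quality checks run via Spark SQL
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    validation_sql = """
    SELECT
        COUNT(*)                                         AS total_rows,
        COUNT(*) - COUNT(deal_id)                        AS null_deal_ids,
        COUNT(*) - COUNT(DISTINCT deal_id)               AS duplicate_deal_ids,
        SUM(CASE WHEN deal_amount < 0 THEN 1 ELSE 0 END) AS negative_amounts
    FROM curated.capital_markets_deals
    """

    result = spark.sql(validation_sql).collect()[0]
    assert result["null_deal_ids"] == 0, "deal_id must not be null"
    assert result["duplicate_deal_ids"] == 0, "deal_id must be unique"
    assert result["negative_amounts"] == 0, "deal_amount must be non-negative"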

Additional Information:

Job Posted:
February 21, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Looking for more opportunities? Search for other job offers that match your skills and interests.

Similar Jobs for Senior Data Engineer

Senior Data Engineer

Join a leading global live-entertainment discovery tech platform. As a Senior Da...
Location:
Spain, Madrid
Salary:
Not provided
Fever
Expiration Date:
Until further notice
Requirements:
  • You have a strong background in at least two of: data engineering, business intelligence, software engineering
  • You are an expert in Python3 and its data ecosystem
  • You have proven experience working with SQL languages
  • You have worked with complex data pipelines
  • You are a collaborative team player with strong communication skills
  • You are proactive, driven, and bring positive energy
  • You possess strong analytical and problem-solving abilities backed by solid software engineering skills
  • You are proficient in business English.
Job Responsibility:
  • Own critical data pipelines of our data warehouse
  • Ideate and implement tools and processes to exploit data
  • Work closely with other business units to create structured and scalable solutions
  • Contribute to the development of a complex data and software ecosystem
  • Build trusted data assets
  • Build automations to create business opportunities
  • Design, build and support modern data infrastructure.
What we offer:
  • Attractive compensation package with potential bonus
  • Stock options
  • 40% discount on all Fever events and experiences
  • Home office friendly
  • Responsibility from day one
  • Great work environment with a young international team
  • Health insurance
  • Flexible remuneration with 100% tax exemption through Cobee
  • English lessons
  • Gympass membership
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Go-To Market Data En...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • At least 5 years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow and other streaming technologies to process very large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Make our data pipelines more efficient
  • Build micro-services, architect, design, and enable self-serve capabilities at scale
  • Work on an AWS-based data lake backed by open source projects such as Spark and Airflow (a minimal orchestration sketch follows this list)
  • Identify ways to make our platform better and improve user experience
  • Apply strong technical experience in building highly reliable services and in managing and orchestrating a multi-petabyte scale data lake
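
As a rough sketch of the Spark/Airflow orchestration mentioned above, a minimal Airflow DAG (assuming Airflow 2.4+; the DAG id, schedule, and task bodies are placeholders, not Atlassian's pipelines) might look like this:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Pull raw events from source systems (placeholder logic)
        print("extracting raw events")

    def load():
        # Write curated tables to the data lake (placeholder logic)
        print("loading curated tables")

    with DAG(
        dag_id="gtm_daily_ingest",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task
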
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Go-To Market Data En...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • At least 5 years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow and other streaming technologies to process very large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Make our data pipelines more efficient
  • Build micro-services
  • Architect, design, and enable self-serve capabilities at scale
  • Apply your strong technical experience building highly reliable services
  • Manage and orchestrate a multi-petabyte scale data lake
  • Transform vague requirements into solid solutions
  • Solve challenging problems creatively
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
  • Fulltime

Senior Data Engineering Manager

Data is a big deal at Atlassian. We ingest billions of events each month into ou...
Location:
United States, San Francisco
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • stellar people management skills and experience in leading an agile software team
  • thrive when developing phenomenal people, not just great products
  • worked closely with Data Science, analytics and platform teams
  • expertise in building and maintaining high-quality components and services
  • able to drive technical excellence, pushing for innovation and quality
  • at least 10 years experience in a software development role as an individual contributor
  • 4+ years of people management experience
  • deep understanding of data challenges at scale and the ecosystem
  • experience with solution building and architecting with public cloud offerings such as Amazon Web Services, DynamoDB, ElasticSearch, S3, Databricks, Spark/Spark-Streaming, GraphDatabases
  • experience with Enterprise Data architectural standard methodologies
Job Responsibility:
  • build and lead a team of data engineers through hiring, coaching, mentoring, and hands-on career development
  • provide deep technical guidance in a number of aspects of data engineering in a scalable ecosystem
  • champion cultural and process improvements through engineering excellence, quality and efficiency
  • work with close counterparts in other departments as part of a multi-functional team, and build this culture in your team
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering Tea...
Location:
United States, San Francisco
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • BS in Computer Science or equivalent experience with 5+ years as Data Engineer or similar role
  • Programming skills in Python & Java (good to have)
  • Design data models for storage and retrieval to meet product requirements
  • Build scalable data pipelines using Spark, Airflow, AWS data services (Redshift, Athena, EMR), Apache projects (Spark, Flink, Hive, and Kafka)
  • Familiar with modern software development practices (Agile, TDD, CICD) applied to data engineering
  • Enhance data quality through internal tools/frameworks detecting DQ issues
  • Working knowledge of relational databases and SQL query authoring
Job Responsibility:
  • Collaborating with partners, you will design data models, acquisition processes, and applications to address needs
  • Lead business growth and enhance product experiences
  • Collaborate with Technology Teams, Global Analytical Teams, and Data Scientists across programs
  • Extract and clean data, understanding the systems that generate it
  • Improve data quality by adding sources, coding rules, and producing metrics as requirements evolve
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Fulltime

Senior Microsoft Stack Data Engineer

Hands-On Technical SENIOR Microsoft Stack Data Engineer / On Prem to Cloud Senio...
Location:
United States, West Des Moines
Salary:
155000.00 USD / Year
Robert Half
Expiration Date:
Until further notice
Requirements:
  • 5+ years of data warehouse / data lake experience
  • Advanced SQL Server
  • Strong SQL experience, working with structured and unstructured data
  • Strong in SSIS ETL
  • Proficiency in SQL and SQL queries
  • Experience with SQL Server
  • Knowledge of data warehousing
  • Data warehouse experience: star schema and fact & dimension structures
  • Experience with Azure Data Lake and data lakes
  • Proficiency in ETL / SSIS and SSAS
Job Responsibility:
  • Modernize and build out a data warehouse, and lead the build-out of a data lake in the cloud
  • Rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting
  • All aspects of data engineering
  • Technical leader of the team
What we offer:
  • Bonus
  • 2 1/2 day weekends
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing (a streaming sketch follows this list)
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
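
As a rough sketch of the real-time streaming work described above, a Spark Structured Streaming job reading shipment events from Kafka might look like the following. It assumes a Spark environment with the Kafka and Delta Lake connectors available; the broker address, topic, schema, and table names are illustrative only, not Cogoport systems.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.appName("shipment-tracking-stream").getOrCreate()

    event_schema = StructType([
        StructField("shipment_id", StringType()),
        StructField("status", StringType()),
        StructField("event_time", TimestampType()),
    ])

    events = (
        spark.readStream
             .format("kafka")
             .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
             .option("subscribe", "shipment-tracking-events")     # placeholder topic
             .load()
             .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
             .select("e.*")
    )

    # Land the parsed stream in a table for downstream tracking and alerting
    query = (
        events.writeStream
              .format("delta")
              .option("checkpointLocation", "/tmp/checkpoints/shipment_events")
              .outputMode("append")
              .toTable("analytics.shipment_events")
    )
    query.awaitTermination()
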
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
  • Fulltime

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g. applying new cutting-edge data engineering methods to improve the performance of data integration, using new tools to improve data quality, etc.)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience deploying code artifacts into production
Job Responsibility:
  • Contribute to the development of D&A platform and analytical tools, ensuring easy and standardized access and sharing of data
  • Subject matter expert for Azure Databricks, Azure Data Factory and ADLS
  • Help design, build and maintain data pipelines (accelerators)
  • Document and make the relevant know-how & standard available
  • Ensure pipelines are consistent with relevant digital frameworks, principles, guidelines and standards
  • Support in understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility into data quality and data assets on the D&A platform
  • Identify opportunities for data assets and D&A platform toolchain
  • Work closely together with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
What we offer:
  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning
  • Fulltime