
Senior Big Data Engineer


PhasorSoft Group

Location:
Flowood, United States

Contract Type:
Not provided

Salary:
Not provided

Job Responsibility:

  • Design and develop scalable data pipelines and solutions using Python and PySpark (see the sketch after this list)
  • Utilize big data technologies such as Hadoop, Spark, Kafka, or similar tools for processing and analyzing large datasets
  • Develop and maintain ETL processes to extract, transform, and load data into data lakes or warehouses
  • Collaborate with data engineers and scientists to implement machine learning models and algorithms
  • Optimize and tune data processing workflows for performance and efficiency
  • Implement data governance and security measures to ensure data integrity and privacy
  • Create and maintain documentation for data pipelines, workflows, and processes
  • Provide technical leadership and mentorship to junior team members
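
A minimal sketch of the kind of pipeline the first responsibility describes, using plain PySpark: extract raw JSON, transform it, and load partitioned Parquet into a warehouse layer. All paths, column names, and the events dataset are hypothetical placeholders; it assumes a local Spark installation (pip install pyspark).

    # Minimal PySpark ETL sketch: extract raw JSON, transform, load Parquet.
    # All paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw event data from a landing zone.
    raw = spark.read.json("/data/landing/events/")

    # Transform: drop incomplete rows, derive a date, aggregate per user.
    daily = (
        raw.dropna(subset=["user_id", "event_ts"])
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("user_id", "event_date")
           .agg(F.count("*").alias("event_count"))
    )

    # Load: write partitioned Parquet into the warehouse layer.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "/data/warehouse/daily_events/"
    )

    spark.stop()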

Requirements:

  • Proficiency in Python programming for data manipulation and analysis
  • Experience with PySpark for processing large-scale data
  • Strong understanding and practical experience with big data technologies such as Hadoop, Spark, Kafka, etc. (see the streaming sketch after this list)
  • Knowledge of designing and implementing ETL processes for data integration
  • Ability to work with large datasets, perform data cleansing, transformations, and aggregations
  • Familiarity with machine learning concepts and experience implementing ML models
  • Understanding of data governance principles and experience implementing data security measures
  • Ability to create clear and concise documentation for data pipelines and processes
  • Strong teamwork and collaboration skills to work with cross-functional teams
  • Analytical and problem-solving skills to optimize data workflows and processes
  • For senior roles, the ability to provide technical leadership, mentorship, and guidance to junior team members
  • Knowledge of SQL for querying and manipulating data in databases
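
The Kafka requirement above is the streaming counterpart of the batch sketch: a hedged example of consuming a topic with Spark Structured Streaming. The broker address, topic name, and paths are hypothetical, and it assumes the spark-sql-kafka connector package is available on the Spark classpath.

    # Hedged sketch: consume a Kafka topic with Spark Structured Streaming
    # and persist it to the lake. Broker, topic, and paths are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "events")
             .load()
    )

    # Kafka delivers keys/values as binary; cast the value for parsing.
    decoded = events.select(F.col("value").cast("string").alias("payload"))

    query = (
        decoded.writeStream.format("parquet")
               .option("path", "/data/lake/events/")
               .option("checkpointLocation", "/data/checkpoints/events/")
               .start()
    )
    query.awaitTermination()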

Nice to have:

  • Experience with cloud platforms like AWS, Azure, or Google Cloud

Additional Information:

Job Posted:
December 11, 2025

Employment Type:
Full-time


Similar Jobs for Senior Big Data Engineer

Senior Data Engineer

Within a dynamic, high-level team, you will contribute to both R&D and client pr...

Artelys

Location:
Paris, France

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • Degree from a top engineering school or a high-level university program
  • At least 3 years of experience in designing and developing data-driven solutions with high business impact, particularly in industrial or large-scale environments
  • Excellent command of Python for both application development and data processing, with strong expertise in libraries such as Pandas, Polars, NumPy, and the broader Python Data ecosystem
  • Experience implementing data processing pipelines using tools like Apache Airflow, Databricks, Dask, or flow orchestrators integrated into production environments (see the Airflow sketch after this list)
  • Experience contributing to large-scale projects combining data analysis, workflow orchestration, back-end development (REST APIs and/or messaging), and industrialisation within a DevOps/DevSecOps-oriented framework
  • Proficient in using Docker for processing encapsulation and deployment
  • Experience with Kubernetes for orchestrating workloads in cloud-native architectures
  • Motivation for practical applications of data in socially valuable sectors such as energy, mobility, or health, and the ability to thrive in environments where autonomy, rigour, curiosity, and teamwork are valued
  • Fluency in English and French is required
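
The orchestration requirement above maps to a short Airflow example. This is a minimal sketch, assuming Airflow 2.4+ (where the schedule argument replaced schedule_interval); the DAG id and task bodies are hypothetical placeholders.

    # Hedged sketch: a daily Airflow DAG wiring extract -> transform.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling raw data from the source system")

    def transform():
        print("cleaning and aggregating the extracted data")

    with DAG(
        dag_id="daily_data_pipeline",  # hypothetical name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task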

Job Responsibility:
  • Design and develop innovative and high-performance software solutions addressing industrial challenges, primarily using the Python language and a microservices architecture
  • Gather user and business needs to design data collection and storage solutions best suited to the presented use cases
  • Develop technical solutions for data collection, cleaning, and processing, then industrialise and automate them
  • Contribute to setting up technical architectures based on Data or even Big Data environments
  • Carry out development work aimed at industrialising and orchestrating computations (statistical and optimisation models) and participate in software testing and qualification

What we offer:
  • Up to 2 days of remote work per week possible
  • Flexible working hours
  • Offices located in the city center of each city where we are located

Employment Type: Full-time

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...

ClickHouse

Location:
Germany

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns (see the executor sketch after this list)
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
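
The concurrency requirement above is Java-specific, but the executor pattern it names translates directly. To keep all examples on this page in one language, here is the same idea sketched with Python's concurrent.futures; the fetch_batch work item is a hypothetical stand-in for I/O-bound connector work.

    # Hedged sketch: the thread-pool/executor pattern, rendered in Python.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def fetch_batch(batch_id: int) -> str:
        # Stand-in for I/O-bound work such as pulling a page of records.
        return f"batch {batch_id} done"

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch_batch, i) for i in range(10)]
        for fut in as_completed(futures):
            print(fut.result())  # results arrive as each task completes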

Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver to SDKs and connectors
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience

What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...

ClickHouse

Location:
United Kingdom

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development

Job Responsibility:
  • Serve as a core contributor, owning and maintaining critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience

What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Employment Type: Full-time

Senior Data Engineer

As a Senior Data Engineer at Corporate Tools, you will work closely with our Sof...

Corporate Tools

Location:
United States

Salary:
150,000.00 USD / year

Expiration Date:
Until further notice

Requirements:
  • Bachelor’s (BA or BS) in computer science, or related field
  • 2+ years in a full stack development role
  • 4+ years of experience working in a data engineer role, or related position
  • 2+ years of experience standing up and maintaining a Redshift warehouse
  • 4+ years of experience with Postgres, specifically with RDS
  • 4+ years of AWS experience, specifically S3, Glue, IAM, EC2, DDB, and other related data solutions
  • Experience working with Redshift, DBT, Snowflake, Apache Airflow, Azure Data Warehouse, or other industry standard big data or ETL related technologies
  • Experience working with both analytical and transactional databases
  • Advanced working SQL knowledge (preferably PostgreSQL) and experience working with relational databases (see the psycopg2 sketch after this list)
  • Experience with Grafana or other monitoring/charting systems
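
The Postgres and SQL requirements above can be pictured with a short client-side example. This is a minimal sketch using psycopg2 (pip install psycopg2-binary); the host, credentials, and orders table are hypothetical placeholders.

    # Hedged sketch: a parameterized query against Postgres via psycopg2.
    import psycopg2

    conn = psycopg2.connect(
        host="mydb.example.rds.amazonaws.com",  # hypothetical RDS endpoint
        dbname="app",
        user="readonly",
        password="secret",  # in practice, pull from a secrets manager
    )
    with conn, conn.cursor() as cur:
        # Parameterized query: never interpolate values into SQL strings.
        cur.execute(
            "SELECT customer_id, SUM(total) FROM orders "
            "WHERE created_at >= %s GROUP BY customer_id",
            ("2025-01-01",),
        )
        for customer_id, total in cur.fetchall():
            print(customer_id, total)
    conn.close()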

Job Responsibility:
  • Focus on data infrastructure: lead and build out data services/platforms from scratch using open-source tech
  • Creating and maintaining transparent, bulletproof ETL (extract, transform, load) pipelines that clean, transform, and aggregate unorganized, messy data into databases or data sources
  • Consume data from roughly 40 different sources
  • Collaborate closely with our Data Analysts to get them the data they need
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc
  • Improve existing data models while implementing new business capabilities and integration points
  • Creating proactive monitoring so we learn about data breakages or inconsistencies right away
  • Maintaining internal documentation of how the data is housed and transformed
  • Improve existing data models, and design new ones to meet the needs of data consumers across Corporate Tools
  • Stay current with latest cloud technologies, patterns, and methodologies

What we offer:
  • 100% employer-paid medical, dental and vision for employees
  • Annual review with raise option
  • 22 days Paid Time Off accrued annually, and 4 holidays
  • After 3 years, PTO increases to 29 days. Employees transition to flexible time off after 5 years with the company—not accrued, not capped, take time off when you want
  • The 4 holidays are: New Year’s Day, Fourth of July, Thanksgiving, and Christmas Day
  • Paid Parental Leave
  • Up to 6% company matching 401(k) with no vesting period
  • Quarterly allowance
  • Use it to make your remote work setup more comfortable, for continuing education classes, a plant for your desk, coffee for your coworker, a massage for yourself... really, whatever
  • Open concept office with friendly coworkers

Employment Type: Full-time

Senior Data Engineer

We are currently looking for a Data Engineer to join our client’s forward-thinki...

Data Idols

Location:
London, United Kingdom

Salary:
70,000.00 - 75,000.00 GBP / year

Expiration Date:
Until further notice

Requirements:
  • Proven experience leading data engineering initiatives in a cloud environment
  • Expertise in Azure Synapse Analytics (SQL pools, Spark, pipelines, serverless SQL)
  • Hands-on experience with Microsoft Fabric (OneLake, Lakehouse, Data Engineering/Data Science workloads)
  • Strong SQL skills (T-SQL, Synapse SQL, Lakehouse SQL)
  • Experience with Python and/or Spark for big data processing
  • Knowledge of cloud data architecture, including ADLS Gen2, Delta Lake, and Parquet (see the lakehouse sketch after this list)
  • Understanding of data governance, security, and compliance standards
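
The cloud-architecture requirement above can be sketched in a few lines of PySpark: read Parquet from ADLS Gen2 and append to a Delta table. The storage account, containers, and columns are hypothetical, and Delta support assumes the delta-spark package is configured on the cluster.

    # Hedged sketch: Parquet in, Delta out, on ADLS Gen2 paths.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

    raw = spark.read.parquet(
        "abfss://raw@mystorageaccount.dfs.core.windows.net/sales/"
    )

    # Keep valid rows and stamp the load date for downstream auditing.
    curated = raw.filter(F.col("amount") > 0).withColumn(
        "load_date", F.current_date()
    )

    curated.write.format("delta").mode("append").save(
        "abfss://curated@mystorageaccount.dfs.core.windows.net/sales/"
    )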

Job Responsibility:
  • Lead and mentor internal and external data engineering teams to deliver best-in-class cloud solutions
  • Design, build, and optimise enterprise data platforms using Azure Synapse, Microsoft Fabric, OneLake, and lakehouse architectures
  • Modernise traditional ETL/ELT processes and transition workloads to cloud-native tools
  • Develop scalable data pipelines, Spark-based processing, and dataflows for unified analytics
  • Implement governance, security, and compliance standards across all cloud data environments
  • Collaborate with stakeholders to support high-value analytics and Power BI initiatives
  • Produce documentation, technical designs, and support materials to enable continuous improvement

What we offer:
  • Competitive salary
  • Strong benefits package
  • Hybrid working model
  • Opportunity to own and shape a modern cloud data ecosystem
  • Continuous training and professional development in Microsoft Fabric, Azure, and modern data engineering
  • Clear path for career progression within a growing data organisation
  • Supportive, innovative, and people-focused working environment

Employment Type: Full-time

Senior Data Engineer

Kloud9

Location:
Not provided

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience in developing scalable Big Data applications or solutions on distributed platforms
  • 4+ years of experience working with distributed technology tools, including Spark, Python, Scala
  • Working knowledge of Data warehousing, Data modelling, Governance and Data Architecture
  • Proficient in working on Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc. (see the boto3 sketch after this list)
  • Experience working in Agile and Scrum development process
  • 3+ years of experience in Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
  • Experience architecting data product in Streaming, Serverless and Microservices Architecture and platform
  • 3+ years of experience working with Data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta)
  • Experience with creating/configuring Jenkins pipeline for smooth CI/CD process for Managed Spark jobs, build Docker images, etc.
  • Working knowledge of reporting and analytical tools such as Tableau, QuickSight, etc.
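
The AWS requirements above can be illustrated with a small boto3 example (pip install boto3). The bucket and prefix are hypothetical, and credentials are assumed to come from the environment or an IAM role.

    # Hedged sketch: inventory a raw-data prefix in S3 with boto3.
    import boto3

    s3 = boto3.client("s3")

    paginator = s3.get_paginator("list_objects_v2")
    total_bytes = 0
    for page in paginator.paginate(Bucket="my-data-lake", Prefix="raw/events/"):
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]
            print(obj["Key"], obj["Size"])

    print(f"total: {total_bytes / 1e9:.2f} GB")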

Job Responsibility:
  • Design and develop scalable Big Data applications on distributed platforms to support large-scale data processing and analytics needs
  • Partner with others in solving complex problems by taking a broad perspective to identify innovative solutions
  • Build positive relationships across Product and Engineering
  • Influence and communicate effectively, both verbally and written, with team members and business stakeholders
  • Quickly pick up new programming languages, technologies, and frameworks
  • Collaborate effectively in a high-speed, results-driven work environment to meet project deadlines and business goals
  • Utilize Data Warehousing tools such as SQL databases, Presto, and Snowflake for efficient data storage, querying, and analysis
  • Demonstrate experience in learning new technologies and skills

What we offer:
  • Kloud9 provides a robust compensation package and a forward-looking opportunity for growth in emerging fields.

Senior Data Engineering Architect

Lingaro

Location:
Poland

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • Proven work experience as a Data Engineering Architect or a similar role and strong experience in the Data & Analytics area
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark)
  • Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy
  • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions
  • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges
  • Proficiency in Python, PySpark, and SQL (see the star-schema sketch after this list)
  • Familiarity with cloud platforms and services, such as AWS, GCP, or Azure, and experience in designing and implementing data solutions in a cloud environment
  • Knowledge of data governance principles and best practices, including data privacy and security regulations
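
The data-modeling and PySpark requirements above combine naturally in a toy star-schema example: fact rows carry surrogate keys, descriptive attributes live in dimension tables, and reporting joins the two. All tables and columns are hypothetical.

    # Hedged sketch: a toy star-schema join and rollup in PySpark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

    fact_sales = spark.createDataFrame(
        [(1, 101, 2, 19.98), (2, 102, 1, 5.49)],
        ["sale_id", "product_key", "quantity", "amount"],
    )
    dim_product = spark.createDataFrame(
        [(101, "Widget", "Hardware"), (102, "Gadget", "Electronics")],
        ["product_key", "product_name", "category"],
    )

    # Join the fact to its dimension, then aggregate by a dimension attribute.
    report = (
        fact_sales.join(dim_product, "product_key")
                  .groupBy("category")
                  .sum("amount")
    )
    report.show()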

Job Responsibility:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables. Support the team throughout the implementation process, answering questions and addressing issues as they arise
  • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications

What we offer:
  • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites
  • Workation. Enjoy working from inspiring locations in line with our workation policy
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly
  • Grow as we grow as a company. 76% of our managers are internal promotions

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...

OptiSol Business Solutions

Location:
Chennai, Madurai, Coimbatore, India

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning (see the window-function sketch after this list)
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
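
The SQL requirement above (window functions, CTEs) has a direct PySpark equivalent. This minimal sketch ranks each customer's orders by recency, as ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...) would in SQL; the data and columns are hypothetical.

    # Hedged sketch: window functions in PySpark.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("window-sketch").getOrCreate()

    orders = spark.createDataFrame(
        [("alice", "2025-01-01", 30.0), ("alice", "2025-01-05", 45.0),
         ("bob", "2025-01-02", 12.5)],
        ["customer", "order_date", "amount"],
    )

    # Keep each customer's most recent order.
    w = Window.partitionBy("customer").orderBy(F.col("order_date").desc())
    latest = orders.withColumn("rn", F.row_number().over(w)).filter("rn = 1")
    latest.show()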

Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations

What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility

Employment Type: Full-time