
Senior Solutions Engineer – Big Data & Data Infrastructure


VAST Data

Location:
Tel Aviv, Israel


Contract Type:
Not provided


Salary:

Not provided

Job Description:

This is a great opportunity to join one of the fastest-growing infrastructure companies in history, an organization at the center of the hurricane created by the revolution in artificial intelligence. VAST Data is the data platform company for the AI era. We are building the enterprise software infrastructure to capture, catalog, refine, enrich, and protect massive datasets and make them available for real-time analysis as well as AI training and inference. Designed from the ground up to make AI simple to deploy and manage, VAST takes the cost and complexity out of deploying enterprise and AI infrastructure across data center, edge, and cloud.

We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams on scalable, flexible, cutting-edge solutions. Solutions Engineers collaborate with operations and business development to craft solutions to customers' business problems.

Job Responsibility:

  • Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes
  • Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools
  • Implement event-driven and serverless workflows
  • Create technical guides, architecture docs, and demo pipelines
  • Integrate data validation, observability tools, and governance directly into the pipeline lifecycle
  • Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark)
  • Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets
  • Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads
  • Operate and debug object store–backed data lake infrastructure
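The end-to-end lifecycle bullet above (ingestion → transformation → storage → compute) can be sketched in miniature. The record schema, validation rules, and in-memory sink below are illustrative assumptions, not part of the posting; a real pipeline of this kind would read from Kafka, transform with Spark, and write Parquet/ORC to S3 rather than use Python lists and dicts.

```python
# Minimal sketch of an ingestion -> validation -> transformation -> storage
# lifecycle using only the standard library. All names and rules here are
# hypothetical; production stages would be Kafka / Spark / S3 equivalents.
from dataclasses import dataclass


@dataclass
class Event:
    user_id: str
    latency_ms: float


def validate(events):
    # Validation embedded in the pipeline lifecycle: drop malformed records
    # (empty user IDs, negative latencies) before they reach downstream stages.
    return [e for e in events if e.user_id and e.latency_ms >= 0]


def transform(events):
    # Transformation: aggregate per-user mean latency.
    totals = {}
    for e in events:
        n, s = totals.get(e.user_id, (0, 0.0))
        totals[e.user_id] = (n + 1, s + e.latency_ms)
    return {uid: s / n for uid, (n, s) in totals.items()}


def store(aggregates, sink):
    # Storage: stands in for a Parquet/ORC write to an S3-compatible store.
    sink.update(aggregates)


raw = [Event("a", 10.0), Event("a", 30.0), Event("", 5.0), Event("b", -1.0)]
sink = {}
store(transform(validate(raw)), sink)
print(sink)  # -> {'a': 20.0}: the two invalid records were dropped
```

The point of the sketch is the shape, not the code: each stage has a single responsibility, and validation sits inside the pipeline rather than as an afterthought, which is what the observability-and-governance bullet asks for.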

Requirements:

  • 2–4 years in software, solutions, or infrastructure engineering
  • 2–4 years focused on building and maintaining large-scale data pipelines, storage, and database solutions
  • Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka
  • Coding background in Python (must-have)
  • Deep understanding of data storage architectures including SQL, NoSQL, and HDFS
  • Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform)
  • Experience with distributed systems, stream processing, and event-driven architecture
  • Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines
  • Excellent communication skills

Nice to have:

Familiarity with Bash and scripting tools is a plus

Additional Information:

Job Posted:
January 06, 2026


Similar Jobs for Senior Solutions Engineer – Big Data & Data Infrastructure

Senior Principal Data Platform Software Engineer

We’re looking for a Sr Principal Data Platform Software Engineer (P70) to be a k...
Location:
Salary:
239400.00 - 312550.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 15+ years in Data Engineering, Software Engineering, or related roles, with substantial exposure to big data ecosystems
  • Demonstrated experience building and operating data platforms or large‑scale data services in production
  • Proven track record of building services from the ground up (requirements → design → implementation → deployment → ongoing ownership)
  • Hands‑on experience with AWS, GCP (e.g., compute, storage, data, and streaming services) and cloud‑native architectures
  • Practical experience with big data technologies, such as Databricks, Apache Spark, AWS EMR, Apache Flink, or StarRocks
  • Strong programming skills in one or more of: Kotlin, Scala, Java, Python
  • Experience leading cross‑team technical initiatives and influencing senior stakeholders
  • Experience mentoring Staff/Principal engineers and lifting the technical bar for a team or org
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience
Job Responsibility:
  • Design, develop and own delivery of high quality big data and analytical platform solutions aiming to solve Atlassian’s needs to support millions of users with optimal cost, minimal latency and maximum reliability
  • Improve and operate large‑scale distributed data systems in the cloud (primarily AWS, with increasing integration with GCP and Kubernetes‑based microservices)
  • Drive the evolution of our high-performance analytical databases and their integrations with products, cloud infrastructures (AWS and GCP), and isolated cloud environments
  • Help define and uplift engineering and operational standards for petabyte scale data platforms, with sub‑second analytic queries and multi‑region availability (coding guidelines, code review practices, observability, incident response, SLIs/SLOs)
  • Partner across multiple product and platform teams (including Analytics, Marketplace/Ecosystem, Core Data Platform, ML Platform, Search, and Oasis/FedRAMP) to deliver company‑wide initiatives that depend on reliable, high‑quality data
  • Act as a technical mentor and multiplier, raising the bar on design quality, code quality, and operational excellence across the broader team
  • Design and implement self‑healing, resilient data platforms with strong observability, fault tolerance, and recovery characteristics
  • Own the long‑term architecture and technical direction of Atlassian’s product data platform with projects that are directly tied to Atlassian’s company-level OKRs
  • Be accountable for the reliability, cost efficiency, and strategic direction of Atlassian’s product analytical data platform
  • Partner with executives and influence senior leaders to align engineering efforts with Atlassian’s long-term business objectives
What we offer:
  • health and wellbeing resources
  • paid volunteer days
  • Full-time

Senior Data Engineer

We are looking for a Data Engineer to join our team and support with designing, ...
Location:
Salary:
Not provided
Foundever
Expiration Date:
Until further notice
Requirements:
  • Minimum of 7 years of experience in data engineering
  • Track record of deploying and maintaining complex data systems at an enterprise level within regulated environments
  • Expertise in implementing robust data security measures, access controls, and monitoring systems
  • Proficiency in data modeling and database management
  • Strong programming skills in Python and SQL
  • Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases
  • Deep experience with ETL processes and data pipeline development
  • Strong understanding of data warehousing concepts and best practices
  • Experience with cloud platforms such as AWS and Azure
  • Excellent problem-solving skills and attention to detail
Job Responsibility:
  • Design and optimize complex data storage solutions, including data warehouses and data lakes
  • Develop, automate, and maintain data pipelines for efficient and scalable ETL processes
  • Ensure data quality and integrity through data validation, cleansing, and error handling
  • Collaborate with data analysts, machine learning engineers, and software engineers to deliver relevant datasets or data APIs for downstream applications
  • Implement data security measures and access controls to protect sensitive information
  • Monitor data infrastructure for performance and reliability, addressing issues promptly
  • Stay abreast of industry trends and emerging technologies in data engineering
  • Document data pipelines, processes, and best practices for knowledge sharing
  • Lead data governance and compliance efforts to meet regulatory requirements
  • Collaborate with cross-functional teams to drive data-driven decision-making within the organization
What we offer:
  • Impactful work
  • Professional growth
  • Competitive compensation
  • Collaborative environment
  • Attractive salary and benefits package
  • Continuous learning and development opportunities
  • A supportive team culture with opportunities for occasional travel for training and industry events

Senior Data Engineer

Madbox is a fast-growing mobile gaming company. We are looking for a Senior Data...
Location:
Barcelona, Spain
Salary:
Not provided
Madbox
Expiration Date:
Until further notice
Requirements:
  • Master’s degree in Engineering, Computer Science, or equivalent
  • 4+ years of proven experience in Big Data systems architecture and data pipelines
  • Experience with both SQL and NoSQL databases is required
  • Strong proficiency in Python
  • Proven experience with cloud platforms (preferably GCP) and their data services
  • Hands-on experience setting up and monitoring CI/CD pipelines
  • Proficient with orchestration tools (ideally Airflow)
  • Strong communication skills
  • Analytical and problem-solving mindset with attention to detail
  • Autonomous, proactive, and always looking for opportunities to improve solutions and processes
Job Responsibility:
  • Start by onboarding on our full Data Stack, both technically (GCP, streaming, big data) and from a business logic perspective
  • Take ownership of end-to-end data pipelines, ensuring scalability, reliability, and maintainability
  • Continuously improve our data stack, proposing technical evolutions, monitoring infrastructure performance, and optimizing costs
  • Contribute to the growth and mentoring of junior profiles, providing guidance, sharing best practices, and supporting their development within the team
  • Design and optimize data models tailored to solve complex analytical and business requirements
  • Ensure data quality through cleansing, unit testing, and embedded DQA processes
  • Communicate effectively with technical and non-technical stakeholders, explaining solutions clearly and constructively
  • Demonstrate autonomy and decision-making skills, moving projects forward while keeping stakeholders informed
  • Take a key part in building and maintaining the Data Engineering roadmap, covering infrastructure, robustness, and business-driven initiatives

Senior Data Engineering Architect

Location:
Poland
Salary:
Not provided
Lingaro
Expiration Date:
Until further notice
Requirements:
  • Proven work experience as a Data Engineering Architect or in a similar role, and strong experience in the Data & Analytics area
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark)
  • Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy
  • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions
  • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges
  • Proficiency in Python, PySpark, SQL
  • Familiarity with cloud platforms and services, such as AWS, GCP, or Azure, and experience in designing and implementing data solutions in a cloud environment
  • Knowledge of data governance principles and best practices, including data privacy and security regulations
Job Responsibility:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables. Support the team throughout the implementation process, answering questions and addressing issues as they arise
  • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications
What we offer:
  • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites
  • Workation. Enjoy working from inspiring locations in line with our workation policy
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly
  • Grow as we grow as a company. 76% of our managers are internal promotions

Senior Data Engineer

As a Senior Data Engineer at Corporate Tools, you will work closely with our Sof...
Location:
United States
Salary:
150000.00 USD / Year
Corporate Tools
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s (BA or BS) in computer science, or related field
  • 2+ years in a full stack development role
  • 4+ years of experience working in a data engineer role, or related position
  • 2+ years of experience standing up and maintaining a Redshift warehouse
  • 4+ years of experience with Postgres, specifically with RDS
  • 4+ years of AWS experience, specifically S3, Glue, IAM, EC2, DDB, and other related data solutions
  • Experience working with Redshift, DBT, Snowflake, Apache Airflow, Azure Data Warehouse, or other industry standard big data or ETL related technologies
  • Experience working with both analytical and transactional databases
  • Advanced working SQL (Preferably PostgreSQL) knowledge and experience working with relational databases
  • Experience with Grafana or other monitoring/charting systems
Job Responsibility:
  • Focus on data infrastructure. Lead and build out data services/platforms from scratch (using OpenSource tech)
  • Creating and maintaining transparent, bulletproof ETL (extract, transform, and load) pipelines that clean, transform, and aggregate unorganized and messy data into databases or data sources
  • Consume data from roughly 40 different sources
  • Collaborate closely with our Data Analysts to get them the data they need
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc
  • Improve existing data models while implementing new business capabilities and integration points
  • Creating proactive monitoring so we learn about data breakages or inconsistencies right away
  • Maintaining internal documentation of how the data is housed and transformed
  • Improve existing data models, and design new ones to meet the needs of data consumers across Corporate Tools
  • Stay current with latest cloud technologies, patterns, and methodologies
What we offer:
  • 100% employer-paid medical, dental and vision for employees
  • Annual review with raise option
  • 22 days Paid Time Off accrued annually, and 4 holidays
  • After 3 years, PTO increases to 29 days. Employees transition to flexible time off after 5 years with the company—not accrued, not capped, take time off when you want
  • The 4 holidays are: New Year’s Day, Fourth of July, Thanksgiving, and Christmas Day
  • Paid Parental Leave
  • Up to 6% company matching 401(k) with no vesting period
  • Quarterly allowance
  • Use to make your remote work set up more comfortable, for continuing education classes, a plant for your desk, coffee for your coworker, a massage for yourself... really, whatever
  • Open concept office with friendly coworkers
  • Full-time


Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Mississauga, Canada
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems and projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support
  • Full-time

Senior Data Engineering Manager

Lead and mentor data engineering team to scale data platform, establish best pra...
Location:
Illinois, United States (work at home)
Salary:
130295.00 - 260590.00 USD / Year
CVS Health
Expiration Date:
January 19, 2026
Requirements:
  • Bachelor's or master's degree in Computer Science, Data Science, or related field
  • 7+ years of experience in data engineering
  • 5+ years in technical leadership or management role
  • Experience building and leading high-performing data engineering teams
  • Deep expertise with cloud platforms (AWS, Azure, or GCP)
  • Experience with big data frameworks (Apache Spark, Hadoop)
  • Experience with data warehousing solutions (Snowflake, Redshift, BigQuery)
  • Experience with workflow orchestration tools (Airflow, Dagster)
  • Solid experience with SQL, Python, PySpark
  • Excellent communication, interpersonal, and leadership skills
Job Responsibility:
  • Lead, mentor, and grow team of 6-8 data engineers
  • Manage hiring, training, and professional development
  • Conduct performance reviews and provide feedback
  • Own technical vision and roadmap for data platform
  • Lead design, development, and maintenance of data pipelines and data warehouses
  • Drive best practices in data modeling, ETL/ELT processes, and data governance
  • Oversee implementation of new data technologies and architectures
  • Partner with product managers, data scientists, analysts, and other engineering teams
  • Ensure data platform meets standards for performance, security, and data quality
  • Establish culture of operational excellence including monitoring and incident response
What we offer:
  • Affordable medical plan options
  • 401(k) plan with matching company contributions
  • Employee stock purchase plan
  • Wellness screenings
  • Tobacco cessation and weight management programs
  • Confidential counseling and financial coaching
  • Paid time off
  • Flexible work schedules
  • Family leave
  • Dependent care resources
  • Full-time