Spark Engineer

Bright Vision Technologies

Location:
United States, Bridgewater

Contract Type:
Employment contract

Salary:
Not provided

Job Description:

Bright Vision Technologies is looking for a skilled Spark Engineer to join our dynamic team. We leverage cutting-edge distributed data processing technologies to build scalable, high-performance analytics platforms. This is a fantastic opportunity to join an established and well-respected organization offering tremendous career growth potential. We are looking for candidates on OPT, CPT, H4 EAD, TN, E3, or any other non-immigrant visa who are seeking H-1B sponsorship for the 2027 quota.

Job Responsibility:

  • Contribute to the mission of transforming business processes through technology
  • Build scalable, high-performance analytics platforms

Requirements:

  • Apache Spark
  • PySpark
  • Spark SQL
  • Scala / Python
  • Hadoop ecosystem (HDFS, YARN)
  • Kafka
  • DataFrames & Datasets
  • ETL pipelines
  • SQL
  • NoSQL databases
  • Cloud Platforms (AWS EMR / Azure Databricks)
  • Performance tuning
  • Linux
  • CI/CD pipelines
  • Git
  • Agile methodologies
  • At least 3 to 5 years of real-time project experience
  • Must be willing to take an AI-proctored coding test
  • Must be willing to relocate nationwide based on project needs
  • Must have at least 1 year of real-time project experience in the USA
  • Must be looking for H-1B sponsorship for the 2026/2027 quota
  • Must be on OPT, CPT, H4 EAD, TN, E3, or any other non-immigrant visa
  • Must work on W2 (not C2C/1099/3rd party)

What we offer:
  • H-1B sponsorship
  • Career growth potential

Additional Information:

Job Posted:
February 13, 2026

Employment Type:
Fulltime

Work Type:
Remote work


Similar Jobs for Spark Engineer

Senior Data Engineer

Are you an experienced Data Engineer ready to tackle complex, high-load, and dat...
Salary:
Not provided
Sigma Software Group
Expiration Date:
Until further notice
Requirements:
  • Apache Spark / expert
  • Python / expert
  • SQL / expert
  • Kafka / good
  • Data Governance (Apache Ranger/Atlas) / good
What we offer:
  • Diversity of Domains & Businesses
  • Variety of technology
  • Health & Legal support
  • Active professional community
  • Continuous education and growth
  • Flexible schedule
  • Remote work
  • Outstanding offices (if you choose it)
  • Sports and community activities
  • Fulltime

Senior Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
United States, Menlo Park
Salary:
146000.00 - 198000.00 USD / Year
Robinhood
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer:
  • Market competitive and pay equity-focused compensation structure
  • 100% paid health insurance for employees with 90% coverage for dependents
  • Annual lifestyle wallet for personal wellness, learning and development, and more
  • Lifetime maximum benefit for family forming and fertility benefits
  • Dedicated mental health support for employees and eligible dependents
  • Generous time away including company holidays, paid time off, sick time, parental leave, and more
  • Lively office environment with catered meals, fully stocked kitchens, and geo-specific commuter benefits
  • Bonus opportunities
  • Equity
  • Fulltime

Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
Canada, Toronto
Salary:
124000.00 - 145000.00 CAD / Year
Robinhood
Expiration Date:
Until further notice
Requirements:
  • 3+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer:
  • Bonus opportunities
  • Equity
  • Benefits
  • Fulltime

Data Engineer

Join Inetum as a Data Engineer to lead the design, development, and maintenance ...
Location:
Spain, Madrid
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Spark on Scala
  • CI/CD tools (Gitlab, Jenkins…)
  • S3 storage / COS and Parquet format
  • Strong Knowledge of SQL and NoSQL databases
  • Hadoop
  • Apache Airflow
  • Shell script
  • Software Development Lifecycle (SDLC) awareness
  • Agile principles and ceremonies
  • Kubernetes containerization
Job Responsibility:
  • Lead the design, development, and maintenance of scalable data pipelines, systems, and guidelines according to the customers’ needs
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions
  • Ensure the performance and security of the data infrastructure as well as data quality and integrity
  • Optimize data workflows for performance and efficiency to enrich and transform large volumes of data with complex business rules
  • Automate data pipelines and streamline data ingestion through the implementation of different orchestrators and scheduling processes (mainly Airflow as a Service)
  • Design and implement scalable and secure data processing pipelines using Scala, Spark, and Cloud Object Storage (COS)
  • Set up CI/CD pipelines to automate deployments, unit testing, and development management
  • Write and conduct unit and validation tests to ensure the accuracy and integrity of the code developed
  • Write technical documentation (specifications and operational documents) to ensure knowledge capitalization
  • Support the different teams in troubleshooting and resolving data-related issues
  • Fulltime

Senior AWS Data Engineer / Data Platform Engineer

We are seeking a highly experienced Senior AWS Data Engineer to design, build, a...
Location:
United Arab Emirates, Dubai
Salary:
Not provided
NorthBay
Expiration Date:
Until further notice
Requirements:
  • 8+ years of experience in data engineering and data platform development
  • Strong hands-on experience with the following AWS services:
  • AWS Glue
  • Amazon EMR (Spark)
  • AWS Lambda
  • Apache Airflow (MWAA)
  • Amazon EC2
  • Amazon CloudWatch
  • Amazon Redshift
  • Amazon DynamoDB
  • AWS DataZone
Job Responsibility:
  • Design, develop, and optimize scalable data pipelines using AWS native services
  • Lead the implementation of batch and near-real-time data processing solutions
  • Architect and manage data ingestion, transformation, and storage layers
  • Build and maintain ETL/ELT workflows using AWS Glue and Apache Spark on EMR
  • Orchestrate complex data workflows using Apache Airflow (MWAA)
  • Develop and manage serverless data processing using AWS Lambda
  • Design and optimize data warehouses using Amazon Redshift
  • Implement and manage NoSQL data models using Amazon DynamoDB
  • Utilize AWS DataZone for data governance, cataloging, and access management
  • Monitor, log, and troubleshoot data pipelines using Amazon CloudWatch
  • Fulltime

Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
Location:
India, Hyderabad
Salary:
Not provided
NStarX
Expiration Date:
Until further notice
Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM
Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams
What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles
  • Fulltime

Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...
Location:
United States, Chicago
Salary:
130000.00 USD / Year
AKUNA CAPITAL
Expiration Date:
Until further notice
Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including quickly and accurately solving math and coding problems, is an essential function of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure that data is processed and published in accordance with well-defined Service Level Agreements (SLA’s) pertaining to data quality, data availability and data correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits
  • Fulltime

Palantir Data Engineer

We are seeking an experienced Palantir Data Engineer to support large-scale tran...
Location:
Romania
Salary:
Not provided
Airswift Sweden
Expiration Date:
Until further notice
Requirements:
  • Proven experience with Palantir and Spark in large transformation projects
  • Strong background in data engineering, data modelling, and data quality
  • Proficiency with Git and automated testing frameworks
  • Familiarity with data catalogue tools
Job Responsibility:
  • Design and implement data pipelines within Palantir and Spark environments
  • Develop and maintain data models, catalogues, and quality frameworks
  • Collaborate on workshops and AIP Logic initiatives (Agentic AI advantageous)
  • Integrate automated testing and version control (Git) into workflows
  • Support ontology development and chatbot integration
  • Contribute to front-end applications using React and TypeScript
  • Fulltime