Real-Time Data Engineer

Bright Vision Technologies

Location:
United States, Bridgewater

Contract Type:
Employment contract

Salary:
Not provided

Job Description:

Bright Vision Technologies is looking for a skilled Real-Time Data Engineer (Flink / Spark Streaming) to join our dynamic team. We leverage cutting-edge real-time data processing technologies to build scalable, low-latency, and fault-tolerant streaming platforms. This is a fantastic opportunity to join an established and well-respected organization offering tremendous career growth potential. We welcome candidates on OPT, CPT, H4 EAD, TN, E3, or any other non-immigrant visa who are seeking H-1B sponsorship in the 2027 quota.

Job Responsibility:

  • Contribute to building scalable, low-latency, and fault-tolerant streaming platforms
  • Contribute to the mission of transforming business processes through technology

Requirements:

  • At least 3 to 5 years of real-time data experience
  • Experience with Apache Flink, Spark Streaming, Apache Kafka, Event-Driven Architecture, Real-Time Data Pipelines
  • Proficiency in Java / Scala / Python
  • Knowledge of Windowing & State Management, Stream Processing Semantics, SQL, NoSQL Databases (illustrated in the sketch after this list)
  • Experience with Cloud Platforms (AWS / Azure / GCP), Docker, Linux, CI/CD pipelines, Monitoring & Alerting
  • Familiarity with Git, Agile methodologies
  • Must be willing to take an Online Coding Test
  • Must have at least 1 year of real-time project experience in the USA
  • Must be willing to relocate nationwide based on project needs
  • Must be looking for H-1B sponsorship for the 2026 quota
  • Must be willing to work on W2
  • Must not be a C2C/1099/3rd Party candidate
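
The windowing and state-management skills called out above can be made concrete with a short, purely illustrative example (not part of the posting itself). The sketch below uses Spark Structured Streaming to count events per user in 5-minute tumbling windows read from a Kafka topic; the broker address, topic name, and event schema are hypothetical placeholders.

```python
# Minimal sketch only; broker address, topic name, and event schema are
# hypothetical placeholders, not details from this posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("windowing-sketch").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "events")                         # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# 5-minute tumbling windows with a 10-minute watermark: Spark maintains
# per-window state and drops it once the watermark passes the window end.
counts = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "5 minutes"), col("user_id"))
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```

Running this against a real cluster also requires the Spark Kafka connector package on the classpath, and the delivery guarantee (at-least-once versus exactly-once) depends on the chosen sink, which is the kind of stream-processing semantics the requirements mention.
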
What we offer:
  • H-1B sponsorship for the 2026 quota
  • Career growth potential

Additional Information:

Job Posted:
February 13, 2026

Employment Type:
Fulltime

Work Type:
Remote work

Similar Jobs for Real-Time Data Engineer

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
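
As a purely illustrative aside (not Cogoport code), the real-time shipment-tracking consumption described above might look like the following sketch using the kafka-python client; the topic name, broker address, and message fields are hypothetical.

```python
# Illustrative sketch only; topic, broker, and event fields are hypothetical.
import json
from datetime import datetime

from kafka import KafkaConsumer  # kafka-python client

consumer = KafkaConsumer(
    "shipment-tracking-events",              # hypothetical topic
    bootstrap_servers="localhost:9092",      # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

# Flag shipments whose actual arrival is later than the promised ETA.
for message in consumer:
    event = message.value
    eta = datetime.fromisoformat(event["estimated_arrival"])
    arrived = datetime.fromisoformat(event["actual_arrival"])
    if arrived > eta:
        delay_h = (arrived - eta).total_seconds() / 3600
        print(f"Shipment {event['shipment_id']} delayed by {delay_h:.1f} h")
```
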
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
Employment Type:
Fulltime

Principal Data Engineer

PointClickCare is searching for a Principal Data Engineer who will contribute to...
Location:
United States
Salary:
183200.00 - 203500.00 USD / Year
PointClickCare
Expiration Date:
Until further notice
Requirements:
  • Principal Data Engineer with at least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems
  • Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor
  • Deep expertise in streaming and real-time data technologies, including frameworks such as Apache Kafka, Flink, and Spark Streaming
  • Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines
  • Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads
  • Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations)
  • Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools
  • Solid foundation in data governance and performance optimization, ensuring reliability and scalability across batch and streaming environments
  • Experience with Lakehouse architectures and related technologies, including Databricks, Azure ADLS Gen2, and Apache Hudi
  • Strong collaboration and communication skills, with the ability to influence stakeholders and evangelize modern data practices within your team and across the organization
Job Responsibility:
  • Lead and guide the design and implementation of scalable streaming data pipelines
  • Engineer and optimize real-time data solutions using frameworks like Apache Kafka, Flink, Spark Streaming
  • Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset
  • Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies
  • Drive adoption of best practices in data governance, observability, and performance tuning for streaming workloads
  • Embed data quality in processing pipelines by defining schema contracts, implementing transformation tests and data assertions, enforcing backward-compatible schema evolution, and automating checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment
  • Establish robust observability for data pipelines by implementing metrics, logging, and distributed tracing for streaming jobs, defining SLAs and SLOs for latency and throughput, and integrating alerting and dashboards to enable proactive monitoring and rapid incident response
  • Foster a culture of quality through peer reviews, providing constructive feedback and seeking input on your own work
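
To make the data-quality responsibilities above concrete, here is a minimal, framework-agnostic sketch of freshness and completeness checks on a batch; in practice these would typically live in dbt tests or Great Expectations suites, and the column names and thresholds below are hypothetical.

```python
# Framework-agnostic sketch; column names and thresholds are hypothetical.
import pandas as pd


def check_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality failures for one batch."""
    failures = []

    # Completeness: required columns must exist and contain no nulls.
    for column in ("event_id", "facility_id", "event_time"):
        if column not in df.columns:
            failures.append(f"missing column: {column}")
        elif df[column].isna().any():
            failures.append(f"null values in column: {column}")

    # Freshness: the newest event must be no older than one hour.
    if "event_time" in df.columns and not df.empty:
        newest = pd.to_datetime(df["event_time"], utc=True).max()
        if pd.Timestamp.now(tz="UTC") - newest > pd.Timedelta(hours=1):
            failures.append("stale batch: newest event is older than 1 hour")

    return failures
```

A pipeline would raise an alert or block deployment when check_batch returns a non-empty list, which is what automating freshness and completeness checks before production deployment amounts to.
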
What we offer:
  • Benefits starting from Day 1!
  • Retirement Plan Matching
  • Flexible Paid Time Off
  • Wellness Support Programs and Resources
  • Parental & Caregiver Leaves
  • Fertility & Adoption Support
  • Continuous Development Support Program
  • Employee Assistance Program
  • Allyship and Inclusion Communities
  • Employee Recognition … and more!
Employment Type:
Fulltime

Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
Location:
India, Hyderabad
Salary:
Not provided
NStarX
Expiration Date:
Until further notice
Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM
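
For illustration only (not NStarX code), a Spark-on-EMR job matching the stack above might read raw Parquet from S3 and write it back partitioned for the Glue Catalog and Redshift Spectrum; the bucket and path names below are hypothetical.

```python
# Illustrative sketch; bucket names, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("s3-parquet-etl-sketch").getOrCreate()

# Read raw events landed as Parquet in S3.
raw = spark.read.parquet("s3://example-raw-bucket/events/")

cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", to_date(col("event_time")))
       .filter(col("event_date").isNotNull())
)

# Partitioning by date keeps Glue crawls and Redshift Spectrum scans cheap,
# and old partitions can be tiered to Glacier via an S3 lifecycle rule.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))
```
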
Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams
What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles
Employment Type:
Fulltime

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join their product DE team. T...
Location:
United States, Seattle; San Francisco; Mountain View
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • Partner across engineering teams to tackle company-wide initiatives
  • Mentor junior members of the team
  • Partner with leadership, engineers, program managers and data scientists to understand data needs
  • Apply expertise and build scalable data solutions
  • Develop and launch efficient & reliable data pipelines to move and transform data (both large and small amounts)
  • Intelligently design data models for storage and retrieval
  • Deploy data quality checks to ensure high quality of data
  • Ownership of the end-to-end data engineering component of the solution
  • Take part in the on-call shift to support the team
  • Design and develop new systems in partnership with software engineers to enable quick and easy consumption of data
Job Responsibility:
  • Build top-notch data solutions and data architecture to inform our most critical strategic and real-time decisions
  • Help translate business needs into data requirements and identify efficiency opportunities
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type:
Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer to join our product DE team and report...
Location:
United States, Seattle; San Francisco; Mountain View
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • Partner across engineering teams to tackle company-wide initiatives
  • Mentor junior members of the team
  • Partner with leadership, engineers, program managers and data scientists to understand data needs
  • Apply expertise and build scalable data solutions
  • Develop and launch efficient & reliable data pipelines to move and transform data (both large and small amounts)
  • Intelligently design data models for storage and retrieval
  • Deploy data quality checks to ensure high quality of data
  • Ownership of the end-to-end data engineering component of the solution
  • Take part in the on-call shift as needed to support the team
  • Design and develop new systems in partnership with software engineers to enable quick and easy consumption of data
Job Responsibility:
  • Build top-notch data solutions and data architecture to inform our most critical strategic and real-time decisions
  • Help translate business needs into data requirements and identify efficiency opportunities
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type:
Fulltime

Senior Data Engineer

Adtalem is a data driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
84835.61 - 149076.17 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field
  • Master's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field
  • Two (2) plus years of experience in Google Cloud with services such as BigQuery, Composer, GCS, Datastream, Dataflow, BQML, and Vertex AI
  • Six (6) plus years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics
  • Hands-on experience working with real-time, unstructured, and synthetic data
  • Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar
  • Expert knowledge of Python programming and SQL
  • Experience with cloud platforms (AWS, GCP, Azure) and their data services
  • Experience working with Airflow as a workflow management tool, including building operators to connect, extract, and ingest data as needed
  • Familiarity with synthetic data generation and unstructured data processing
Job Responsibility:
  • Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets
  • Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth.
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound.
  • Model the data platform by applying business logic and building objects in the semantic layer of the data platform
  • Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics
  • Optimize data pipelines for performance, scalability, and reliability.
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
  • Ensure quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
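
As a small, hypothetical illustration of the Airflow-plus-BigQuery work this listing describes, a daily ingestion DAG could be as simple as the sketch below; the dataset and table names are placeholders, and a production DAG would add retries, alerting, and data-quality checks.

```python
# Hypothetical sketch; dataset and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_snapshot() -> None:
    """Materialize a daily aggregate in BigQuery."""
    client = bigquery.Client()
    sql = """
        INSERT INTO analytics.daily_enrollment_snapshot   -- placeholder table
        SELECT CURRENT_DATE() AS snapshot_date, COUNT(*) AS enrollments
        FROM raw.enrollments                               -- placeholder table
    """
    client.query(sql).result()  # block until the BigQuery job finishes


with DAG(
    dag_id="daily_enrollment_snapshot",
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_snapshot", python_callable=load_daily_snapshot)
```
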
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Eligible to participate in an annual incentive program
Employment Type:
Fulltime

Data & Analytics Engineer

As a Data & Analytics Engineer with MojoTech you will work with our clients to s...
Location:
United States
Salary:
90000.00 - 150000.00 USD / Year
MojoTech
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience in Data Engineering, Data Science, or Data Warehousing
  • Strong experience in Python
  • Experience building and maintaining ETL/ELT pipelines, data warehouses, or real-time analytics systems
  • BA/BS in Computer Science, Data Science, Engineering, or a related field or equivalent experience in data engineering or analytics
  • Track record of developing and optimizing scalable data solutions and larger-scale data initiatives
  • Strong understanding of best practices in data management, including sustainment, governance, and compliance with data quality and security standards
  • Commitment to continuous learning and sharing knowledge with the team
Job Responsibility:
  • Work with our clients to solve complex problems and deliver high-quality solutions as part of a team
  • Collaborating with product managers, designers, and clients, you will lead discussions to define data requirements and deliver actionable insights and data pipelines to support client analytics needs
What we offer:
  • Performance based end of year bonus
  • Medical, Dental, FSA
  • 401k with 4% match
  • Trust-based time off
  • Catered lunches when in office
  • 5 hours per week dedicated to self-directed learning, innovation projects, or skill development
  • Dog Friendly Offices
  • Paid conference attendance/yearly education stipend
  • Custom workstation
  • 6 weeks parental leave
Employment Type:
Fulltime

Senior Data Engineer

Provectus, a leading AI consultancy and solutions provider specializing in Data ...
Location:
Salary:
Not provided
Provectus
Expiration Date:
Until further notice
Requirements:
  • Experience handling real-time and batch data flow and data warehousing with tools and technologies like Airflow, Dagster, Kafka, Apache Druid, Spark, dbt, etc.
  • Experience in AWS
  • Proficiency in programming languages relevant to data engineering, such as Python and SQL
  • Proficiency with Infrastructure as Code (IaC) technologies like Terraform or AWS CloudFormation
  • Experience in building scalable APIs
  • Familiarity with Data Governance aspects like Quality, Discovery, Lineage, Security, Business Glossary, Modeling, Master Data, and Cost Optimization
  • Upper-Intermediate or higher English skills
  • Ability to take ownership, solve problems proactively, and collaborate effectively in dynamic settings
Job Responsibility:
  • Collaborate closely with clients to deeply understand their existing IT environments, applications, business requirements, and digital transformation goals
  • Collect and manage large volumes of varied data sets
  • Work directly with ML Engineers to create robust and resilient data pipelines that feed Data Products
  • Define data models that integrate disparate data across the organization
  • Design, implement, and maintain ETL/ELT data pipelines
  • Perform data transformations using tools such as Spark, Trino, and AWS Athena to handle large volumes of data efficiently
  • Develop, continuously test, and deploy Data API Products with Python and frameworks like Flask or FastAPI
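
Purely as an illustration of the Data API responsibility above (not Provectus code), a minimal FastAPI endpoint might look like this sketch; the in-memory catalog and its fields are hypothetical stand-ins for a real warehouse or Athena/Trino query layer.

```python
# Hypothetical sketch; the in-memory catalog stands in for a real query layer.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-api-sketch")


class DatasetInfo(BaseModel):
    name: str
    row_count: int
    last_updated: str


_CATALOG = {
    "orders": DatasetInfo(name="orders", row_count=120_000, last_updated="2026-02-01"),
    "customers": DatasetInfo(name="customers", row_count=45_000, last_updated="2026-01-28"),
}


@app.get("/datasets/{name}", response_model=DatasetInfo)
def get_dataset(name: str) -> DatasetInfo:
    """Return catalog metadata for a single dataset."""
    info = _CATALOG.get(name)
    if info is None:
        raise HTTPException(status_code=404, detail="dataset not found")
    return info
```

Served with, for example, uvicorn, this is the shape of a small, continuously testable Data API product of the kind the listing describes.
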
What we offer:
  • Participate in internal training programs (Leadership, Public Speaking, etc.) with full support for AWS and other professional certifications
  • Work with the latest AI tools, premium subscriptions, and the freedom to use them in your daily work
  • Long-term B2B collaboration
  • 100% remote, with flexible hours
  • Collaboration with an international, cross-functional team
  • Comprehensive private medical insurance or budget for your medical needs
  • Paid sick leave, vacation, and public holidays
  • Equipment and all the tech you need for comfortable, productive work
  • Special gifts for weddings, childbirth, and other personal milestones