Oracle Data Engineer

Beacon Hill

Location:
United States, Philadelphia

Contract Type:
Not provided

Salary:

55.00 - 64.00 USD / Hour

Job Description:

We are seeking a Senior Oracle Data Engineer to lead foundational data model and ETL enhancements for a large‑scale enterprise data platform. This role owns Oracle‑side design and delivery of new data attributes, including complex historical backfills, to support downstream data processing and file‑based integrations.

Job Responsibility:

  • Design and implement Oracle ETL enhancements to support new and evolving data attributes across both daily processing and multi‑year historical datasets
  • Extend and maintain Oracle data models and curated tables to reliably support downstream analytics and extract‑driven consumers
  • Analyze and implement identifier normalization and cross‑reference strategies, assessing downstream impacts and data lineage
  • Build and automate historical extraction and backfill processes, ensuring alignment with go‑forward incremental loads
  • Produce data extracts and placeholder artifacts required for parallel development across multiple downstream teams
  • Partner closely with QA and downstream engineering to support integration testing, defect resolution, and production readiness
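The trickiest responsibility above is keeping a multi-year historical backfill aligned with go-forward daily loads. In Oracle this would typically be a MERGE statement in PL/SQL; purely as an illustration (not the client's actual process, and with hypothetical field names), the precedence rule can be sketched in Python:

```python
# Illustrative sketch only: align a historical backfill with incremental
# (go-forward) loads. Keys and field names (record_id, as_of_date, value)
# are hypothetical; in Oracle this logic would live in a MERGE statement.

def merge_backfill(incremental, backfill):
    """Combine rows keyed by (record_id, as_of_date).

    Incremental rows win on overlapping keys, so the backfill can be
    re-run without clobbering data the daily process already loaded.
    """
    merged = {}
    # Load the backfill first...
    for row in backfill:
        merged[(row["record_id"], row["as_of_date"])] = row
    # ...then let go-forward rows overwrite any overlapping keys.
    for row in incremental:
        merged[(row["record_id"], row["as_of_date"])] = row
    return sorted(merged.values(),
                  key=lambda r: (r["record_id"], r["as_of_date"]))


incremental = [{"record_id": 1, "as_of_date": "2024-01-02", "value": 110}]
backfill = [
    {"record_id": 1, "as_of_date": "2024-01-01", "value": 100},
    {"record_id": 1, "as_of_date": "2024-01-02", "value": 105},  # overlap
]
rows = merge_backfill(incremental, backfill)
# The overlapping 2024-01-02 row keeps the incremental value (110).
```

The "incremental wins" choice makes the backfill idempotent and safe to re-run, which is the property the go-forward alignment bullet is after.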

Requirements:

  • 6+ years of experience in Oracle‑based data engineering, including advanced SQL and PL/SQL
  • Proven experience delivering large historical backfills alongside ongoing incremental loads
  • Strong data modeling skills and experience supporting extract‑heavy data platforms
  • Experience working with high‑volume, regulated, or compliance‑sensitive data domains

Additional Information:

Job Posted:
March 26, 2026

Work Type:
Remote work

Similar Jobs for Oracle Data Engineer

Data Engineer

The Data Engineer is accountable for developing high quality data products to su...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • First Class Degree in Engineering/Technology/MCA
  • 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies
  • Experience of relational databases and using SQL for data querying, transformation and manipulation
  • Experience of modelling data for analytical consumers
  • Ability to automate and streamline the build, test and deployment of data pipelines
  • Experience in cloud native technologies and patterns
  • A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
  • Excellent communication and problem-solving skills
  • ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, or Informatica
  • Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
Job Responsibility:
  • Developing and supporting scalable, extensible, and highly available data solutions
  • Deliver on critical business priorities while ensuring alignment with the wider architectural vision
  • Identify and help address potential risks in the data supply chain
  • Follow and contribute to technical standards
  • Design and develop analytical data models
  • Fulltime

Data Engineer

The Data Engineer is accountable for developing high quality data products to su...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • First Class Degree in Engineering/Technology/MCA
  • 3 to 4 years’ experience implementing data-intensive solutions using agile methodologies
  • Experience of relational databases and using SQL for data querying, transformation and manipulation
  • Experience of modelling data for analytical consumers
  • Ability to automate and streamline the build, test and deployment of data pipelines
  • Experience in cloud native technologies and patterns
  • A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
  • Excellent communication and problem-solving skills
  • ETL: Hands-on experience building data pipelines. Proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend, or Informatica
  • Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
Job Responsibility:
  • Developing and supporting scalable, extensible, and highly available data solutions
  • Deliver on critical business priorities while ensuring alignment with the wider architectural vision
  • Identify and help address potential risks in the data supply chain
  • Follow and contribute to technical standards
  • Design and develop analytical data models
  • Fulltime

Python Data Engineer

Arthur Lawrence is looking for a Python Data Engineer one of our clients in Hous...
Location:
United States, Houston
Salary:
Not provided
Arthur Lawrence
Expiration Date:
Until further notice
Requirements:
  • 7+ years of professional Python development
  • Strong knowledge of OOP, design patterns, and SOA
  • Hands-on experience in data engineering, data pipeline development, and web scraping (Requests, BeautifulSoup, Selenium)
  • Oracle PL/SQL expertise, including stored procedures
  • Bachelor’s degree in Computer Science, MIS, or related field
  • Agile/Scrum experience
  • Fulltime

Data Engineer

Are you a Data Engineer based in Austin, Texas, who is inspired by working with ...
Location:
United States, Austin
Salary:
100000.00 - 135000.00 USD / Year
Beezwax Datatools
Expiration Date:
Until further notice
Requirements:
  • 4-6 years of hands-on data modeling and data engineering experience
  • Strong expertise in dimensional modeling and data warehousing
  • Database design and development experience with relational or MPP databases such as Postgres, Oracle, Teradata, or Vertica
  • Experience in design and development of custom ETL pipelines using SQL and scripting languages (Python, Shell, Golang)
  • Proficiency in advanced SQL and performance tuning
  • Hands-on experience with big data platforms like Spark, Dremio, Hadoop, MapReduce, Hive, etc.
  • Experience with Java, Scala and Python
  • Experience with cloud computing platforms like AWS, Google Cloud
  • Experience working with APIs
  • Ability to learn and adapt to new tools and technologies
What we offer:
  • Competitive compensation
  • Retirement plan with employer matching
  • Excellent healthcare package with vision and dental
  • Support for productivity and continued learning in the forms of hardware, software, learning materials, training, and conferences
  • Fulltime

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Location:
Poland
Salary:
Not provided
Lingaro
Expiration Date:
Until further notice
Requirements:
  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role, with a strong focus on data solutions built in Azure and Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience with Azure cloud-based infrastructure, Databricks, and one SQL implementation (e.g., Oracle, T-SQL, MySQL)
  • Proficiency in programming languages such as SQL, Python, and PySpark is essential (R or Scala nice to have)
  • Very good communication skills, including the ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues
Job Responsibility:
  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices
What we offer:
  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python, or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets
  • Fulltime

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python, or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support
  • Fulltime

Ab Initio Data Engineer

The Applications Development Intermediate Programmer Analyst is an intermediate ...
Location:
India, Chennai; Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics)
  • Minimum 5 years of extensive experience in design, build and deployment of Ab Initio-based applications
  • Expertise in handling complex large-scale Data Lake and Warehouse environments
  • Hands-on experience writing complex SQL queries, exporting and importing large amounts of data using utilities
Job Responsibility
Job Responsibility
  • Ability to design and build Ab Initio graphs (both continuous & batch) and Conduct>it Plans
  • Build Web-Service and RESTful graphs and create RAML or Swagger documentations
  • Complete understanding and analytical ability of Metadata Hub metamodel
  • Strong hands-on Multifile System (MFS) level programming, debugging, and optimization skills
  • Hands-on experience developing complex ETL applications
  • Good knowledge of RDBMS – Oracle, with ability to write complex SQL needed to investigate and analyze data issues
  • Strong UNIX Shell/Perl scripting skills
  • Build graphs interfacing with heterogeneous data sources – Oracle, Snowflake, Hadoop, Hive, AWS S3
  • Build application configurations for Express>It frameworks – Acquire>It, Spec-To-Graph, Data Quality Assessment
  • Build automation pipelines for Continuous Integration & Delivery (CI-CD), leveraging Testing Framework & JUnit modules, integrating with Jenkins, JIRA and/or Service Now
  • Fulltime