PostgreSQL Senior Data Engineer

NTT DATA

Location:
Romania, Bucharest

Contract Type:
Not provided

Salary:
Not provided

Job Description:

The Senior Data Engineer role requires a minimum of 5 years of experience in data engineering, with a strong focus on PostgreSQL and relational databases. The candidate will design and develop data solutions, collaborate with teams, and ensure high-quality service delivery. A university degree in computer science is required. The position offers opportunities for professional growth and a supportive work environment.

Job Responsibility:

  • Employ a range of techniques to analyze problems and evaluate multiple solutions against engineering, business & strategic criteria
  • Collaborate to achieve consensus on topics and issues and contribute to communities
  • Identify and resolve barriers to business deliveries, implementing solutions that iteratively deliver value
  • Design solutions using common design patterns with a range of design tools & techniques
  • Conduct peer reviews to ensure designs are fit for purpose, extensible & re-usable
  • Deliver end-to-end, first-class engineering solutions across a range of technology platforms
  • Improve delivery standards, working practices, tools & solutions, and drive adoption of automation tools
  • Ensure stable and reliable production environments and high-quality service delivery to customers
  • Investigate production incidents & lead remediation, analyze services & components
  • Design & build solutions which are secure & controlled
  • Investigate & report on issues and potential risk events within designs and solutions

Requirements:

  • University degree in computer science or a comparable qualification
  • At least 5 years of data engineering experience in a similar role
  • Excellent technical skills in PostgreSQL
  • Strong understanding of relational databases
  • Ability to design and develop applications/components including packages, stored procedures, functions, and triggers (a brief illustrative sketch follows this requirements list)
  • Experience in working with Oracle SQL and PL/SQL
  • Proficiency in scripting languages such as Python or shell
  • Proficiency in data modelling and database design
  • Open to learning Google Cloud technologies
  • Data analysis skills
  • Analytical thinker experienced in a range of problem-solving techniques & approaches
  • Team player who shares and collaborates, a keen listener and quick learner
  • Communicates clearly and influences using coherent, fact and benefit-based proposals
  • Enables experimentation and fast learning approaches to creating business solutions
  • Understands key elements of security, risk & control
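
To make the PostgreSQL development expectations above more concrete, here is a minimal sketch of a PL/pgSQL function attached as a row-level trigger, applied from a short Python script (reflecting the Python scripting requirement). It is only an illustration: the "orders" table, the audit columns, and the connection string are hypothetical examples rather than anything specified in this posting, and it assumes PostgreSQL 11+ with the psycopg2 driver.

import psycopg2

# Hypothetical DDL: an audit table, a PL/pgSQL function, and a row-level trigger
# that records status changes on an assumed "orders" table (PostgreSQL 11+ syntax).
DDL = """
CREATE TABLE IF NOT EXISTS orders_audit (
    order_id   bigint,
    changed_at timestamptz NOT NULL DEFAULT now(),
    old_status text,
    new_status text
);

CREATE OR REPLACE FUNCTION log_order_status_change() RETURNS trigger AS $$
BEGIN
    IF NEW.status IS DISTINCT FROM OLD.status THEN
        INSERT INTO orders_audit (order_id, old_status, new_status)
        VALUES (OLD.id, OLD.status, NEW.status);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trg_order_status ON orders;
CREATE TRIGGER trg_order_status
    AFTER UPDATE ON orders
    FOR EACH ROW EXECUTE FUNCTION log_order_status_change();
"""

def main():
    # Placeholder DSN; real credentials would come from configuration, not code.
    with psycopg2.connect("dbname=example user=example") as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)  # psycopg2 accepts multiple statements in one call
    # Leaving the connection's "with" block commits the transaction.

if __name__ == "__main__":
    main()

Equivalent logic could also live entirely in SQL migration scripts; the Python wrapper here simply mirrors the posting's scripting-language requirement.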

Nice to have:

  • Experience in banking projects
  • Experience with Agile Projects

What we offer:
  • Smooth integration and a supportive mentor
  • Choose from Remote, Hybrid or Office work opportunities
  • Projects have different working hours to suit your needs
  • Sponsored certifications, training and top e-learning platforms
  • Private Health Insurance
  • Individual coaching sessions or joining our accredited Coaching School
  • Epic parties or themed events

Additional Information:

Job Posted:
March 13, 2026

Work Type:
Remote work

Similar Jobs for PostgreSQL Senior Data Engineer

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...

Cogoport

Location:
India, Mumbai

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems

Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timeline
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform

What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI

Work Type: Fulltime

Senior Data Engineer

Senior Data Engineer role at UpGuard supporting analytics teams to extract insig...

UpGuard

Location:
Australia, Sydney; Melbourne; Brisbane; Hobart

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience with data sourcing, storage and modelling to effectively deliver business value right through to the BI platform
  • AI first mindset and experience scaling an Analytics and BI function at another SaaS business
  • Experience with Looker (Explores, Looks, Dashboards, Developer interface, dimensions and measures, models, raw SQL queries)
  • Experience with CloudSQL (PostgreSQL) and BigQuery (complex queries, indices, materialised views, clustering, partitioning)
  • Experience with Containers, Docker and Kubernetes (GKE)
  • Familiarity with n8n for automation
  • Experience with programming languages (Go for ETL workers)
  • Comfortable interfacing with various APIs (REST+JSON or MCP Server)
  • Experience with version control via GitHub and GitHub Flow
  • Security-first mindset

Job Responsibility:
  • Design, build, and maintain reliable data pipelines to consolidate information from various internal systems and third-party sources
  • Develop and manage comprehensive semantic layer using technologies like LookML, dbt or SQLMesh
  • Implement and enforce data quality checks, validation rules, and governance processes
  • Ensure AI agents have access to necessary structured and unstructured data
  • Create clear, self-maintaining documentation for data models, pipelines, and semantic layer

What we offer:
  • Great Place to Work certified company
  • Equal Employment Opportunity and Affirmative Action employer

Work Type: Fulltime

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...

Citi

Location:
Canada, Mississauga

Salary:
94300.00 - 141500.00 USD / Year

Expiration Date:
Until further notice

Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus

Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency

What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support

Work Type: Fulltime

Senior Data Engineer

Fospha is dedicated to building the world's most powerful measurement solution f...

Blenheim Chalcot

Location:
India, Mumbai

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • Excellent knowledge of PostgreSQL and SQL technologies
  • Fluent in Python
  • Understanding of data architecture, pipelines and ELT flows/technology/methodologies
  • Understanding of agile methodologies and practices
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Job Responsibility:
  • Implement and maintain ELT (Extract, Load, Transform) processes using scalable data pipelines and data architecture
  • Collaborate with cross-functional teams to understand data requirements and deliver effective solutions
  • Ensure data integrity and quality across various data sources
  • Support data-driven decision-making by providing clean, reliable, and timely data
  • Define the standards for high-quality data for Data Science and Analytics use-cases and help shape the data roadmap for the domain
  • Design, develop, and maintain the data models used by ML Engineers, Data Analysts and Data Scientists to access data
  • Conduct exploratory data analysis to uncover data patterns and trends
  • Identify opportunities for process improvement and drive continuous improvement in data operations
  • Stay updated on industry trends, technologies, and best practices in data engineering

What we offer:
  • Competitive salary
  • Be part of a leading global venture builder, Blenheim Chalcot, and learn from the incredible talent in BC
  • Be exposed to the right mix of challenges and learning and development opportunities
  • Flexible Benefits including Private Medical and Dental, Gym Subsidies, Life Assurance, Pension scheme, etc.
  • 25 days of paid holiday + your birthday off
  • Free snacks in the office
  • Quarterly team socials

Work Type: Fulltime

Senior Data Engineer

As a Senior Data Engineer at Corporate Tools, you will work closely with our Sof...

Corporate Tools

Location:
United States

Salary:
150000.00 USD / Year

Expiration Date:
Until further notice

Requirements:
  • Bachelor’s (BA or BS) in computer science, or related field
  • 2+ years in a full stack development role
  • 4+ years of experience working in a data engineer role, or related position
  • 2+ years of experience standing up and maintaining a Redshift warehouse
  • 4+ years of experience with Postgres, specifically with RDS
  • 4+ years of AWS experience, specifically S3, Glue, IAM, EC2, DDB, and other related data solutions
  • Experience working with Redshift, DBT, Snowflake, Apache Airflow, Azure Data Warehouse, or other industry standard big data or ETL related technologies
  • Experience working with both analytical and transactional databases
  • Advanced working knowledge of SQL (preferably PostgreSQL) and experience working with relational databases
  • Experience with Grafana or other monitoring/charting systems

Job Responsibility:
  • Focus on data infrastructure. Lead and build out data services/platforms from scratch (using open-source tech)
  • Creating and maintaining transparent, bulletproof ETL (extract, transform, and load) pipelines that clean, transform, and aggregate unorganized and messy data into databases or data sources
  • Consume data from roughly 40 different sources
  • Collaborate closely with our Data Analysts to get them the data they need
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc
  • Improve existing data models while implementing new business capabilities and integration points
  • Creating proactive monitoring so we learn about data breakages or inconsistencies right away
  • Maintaining internal documentation of how the data is housed and transformed
  • Improve existing data models, and design new ones to meet the needs of data consumers across Corporate Tools
  • Stay current with latest cloud technologies, patterns, and methodologies

What we offer:
  • 100% employer-paid medical, dental and vision for employees
  • Annual review with raise option
  • 22 days Paid Time Off accrued annually, and 4 holidays
  • After 3 years, PTO increases to 29 days. Employees transition to flexible time off after 5 years with the company—not accrued, not capped, take time off when you want
  • The 4 holidays are: New Year’s Day, Fourth of July, Thanksgiving, and Christmas Day
  • Paid Parental Leave
  • Up to 6% company matching 401(k) with no vesting period
  • Quarterly allowance
  • Use it to make your remote work setup more comfortable, for continuing education classes, a plant for your desk, coffee for your coworker, a massage for yourself... really, whatever
  • Open concept office with friendly coworkers

Work Type: Fulltime

Senior Data Engineer

A VC-backed conversational AI scale-up is expanding its engineering team and is ...

Orbis Consultants

Location:
United States

Salary:
Not provided

Expiration Date:
Until further notice

Requirements:
  • 4+ years in software development and data engineering with ownership of production-grade systems
  • Proven expertise in Spark/PySpark
  • Strong knowledge of distributed computing and modern data modeling approaches
  • Solid programming skills in Python, with an emphasis on clean, maintainable code
  • Hands-on experience with SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB, Cassandra)
  • Excellent communicator who can influence and partner across teams

Job Responsibility:
  • Design and evolve distributed, cloud-based data infrastructure that supports both real-time and batch processing at scale
  • Build high-performance data pipelines that power analytics, AI/ML workloads, and integrations with third-party platforms
  • Champion data reliability, quality, and observability, introducing automation and monitoring across pipelines
  • Collaborate closely with engineering, product, and AI teams to deliver data solutions for business-critical initiatives

What we offer:
  • Great equity

Work Type: Fulltime
