
SQL / Data Vault Expert

Xcede

Location:
Austria, Vienna

Contract Type:
Not provided

Salary:
Not provided

Job Description:

For a public sector client, we are seeking an experienced SQL / Data Vault Expert (m/f/d) to support Business Intelligence (BI) and Data Architecture initiatives. The role starts immediately and runs until the end of 2026, offering a long-term, high-impact engagement in a complex data environment.

Job Responsibility:

  • Development of SQL-based database procedures and functions
  • Data cleansing, preprocessing, and data management using SQL
  • Implementation and performance optimization of complex data transformations
  • Relational data modeling with a strong focus on Data Vault architecture (see the illustrative sketch after this list)
  • Ensuring data consistency, integrity, and high data quality
  • Advising stakeholders on complex business and technical topics
  • Design and implementation of customized, high-quality BI solutions
  • Evaluation of technical solution approaches and execution of feasibility analyses
  • Analysis of complex problem statements and development of holistic concepts and studies
  • Preparation and delivery of presentations for different target audiences
  • Structured, goal-oriented communication of complex technical topics to internal stakeholders
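
For context on the Data Vault focus above: the methodology separates business keys (hubs), relationships between them (links), and historized descriptive attributes (satellites). The following minimal sketch in generic SQL is purely illustrative; all table and column names are hypothetical and not taken from the client's environment.

    -- Hub: one row per unique business key
    CREATE TABLE hub_customer (
        customer_hk   CHAR(32)     NOT NULL PRIMARY KEY, -- hash key derived from the business key
        customer_bk   VARCHAR(50)  NOT NULL,             -- business key from the source system
        load_dts      TIMESTAMP    NOT NULL,
        record_source VARCHAR(100) NOT NULL
    );

    -- Satellite: descriptive attributes, historized by load timestamp
    CREATE TABLE sat_customer_details (
        customer_hk   CHAR(32)     NOT NULL REFERENCES hub_customer (customer_hk),
        load_dts      TIMESTAMP    NOT NULL,
        hash_diff     CHAR(32)     NOT NULL,             -- hash of all attributes, for change detection
        name          VARCHAR(200),
        email         VARCHAR(200),
        record_source VARCHAR(100) NOT NULL,
        PRIMARY KEY (customer_hk, load_dts)
    );

    -- Link: one row per unique relationship between hubs
    CREATE TABLE link_customer_order (
        customer_order_hk CHAR(32)     NOT NULL PRIMARY KEY,
        customer_hk       CHAR(32)     NOT NULL REFERENCES hub_customer (customer_hk),
        order_hk          CHAR(32)     NOT NULL,         -- would reference hub_order (omitted here)
        load_dts          TIMESTAMP    NOT NULL,
        record_source     VARCHAR(100) NOT NULL
    );

A Data Vault generator of the kind required below would emit DDL of this shape (plus the loading logic) from metadata about source business keys and relationships.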

Requirements:

  • Fluent German & English
  • Expert-level Data Vault knowledge, including hands-on experience with self-developed Data Vault generators (mandatory)
  • 5–7 years of experience with Java Spring and data migration projects
  • 5–7 years of professional experience in IT consulting
  • Strong expertise in SQL, BI systems, and data architecture
  • Completed degree in Computer Science, Business Informatics, Business Administration, or a comparable qualification (university or university of applied sciences)
  • Professional demeanor with excellent communication and stakeholder management skills
  • Outstanding analytical and conceptual abilities
  • High proficiency in moderation, presentation, and stakeholder communication

Nice to have:

English is a strong advantage

What we offer:
  • Long-term public sector project with high stability
  • Opportunity to work on complex, enterprise-scale BI and Data Vault architectures
  • Flexible hybrid working model

Additional Information:

Job Posted:
February 17, 2026

Expiration:
March 28, 2026

Work Type:
Hybrid work

Similar Jobs for SQL / Data Vault Expert

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest
Salary:
Not provided
Inetum
Expiration Date
Until further notice
Requirements
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • 5+ years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale
Job Responsibility
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake (see the sketch after this list)
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT)
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
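
As one concrete illustration of the Medallion responsibility above, a Bronze-to-Silver upsert on Delta Lake can be expressed as a MERGE in Databricks SQL. This is a minimal sketch; the tables bronze.customers_raw and silver.customers and the _ingest_ts watermark column are hypothetical.

    -- Upsert newly ingested Bronze rows into the curated Silver table
    MERGE INTO silver.customers AS tgt
    USING (
        SELECT customer_id, name, email, _ingest_ts
        FROM bronze.customers_raw
        WHERE _ingest_ts > (SELECT COALESCE(MAX(_ingest_ts), TIMESTAMP '1970-01-01')
                            FROM silver.customers)
    ) AS src
    ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET *  -- schemas match, so update every column
    WHEN NOT MATCHED THEN INSERT *; -- new keys become new Silver rows
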
What we offer
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings

Work Type: Fulltime


Senior Analytics Engineer

Location:
Japan, Tokyo
Salary:
10,000,000 - 15,000,000 JPY / Year
Randstad
Expiration Date
October 30, 2026
Requirements
  • Expert-level DBT (Data Build Tool) proficiency for complex transformations
  • Deep expertise in Data Vault and Dimensional Modeling (Star Schema) methodologies
  • Advanced SQL and development experience in Databricks (preferred), Snowflake, or BigQuery
  • Expert-level IaC (Terraform preferred)
  • Strong background in CI/CD/CO (Continuous Operations) and version control (GitHub/AWS CodeCommit)
  • Extensive experience managing and scaling data pipeline orchestration tools
  • Master’s degree in Computer Science or Data Engineering
  • A successful history of spearheading complex technical initiatives and driving best practices in analytics engineering
Job Responsibility
  • Architect and implement scalable, automated data transformation workflows using DBT and modern data stack patterns
  • Lead the development of Data Vault (Silver Layer) core products designed to feed high-performance Dimensional (Gold Layer) analytical models (see the sketch after this list)
  • Build and maintain comprehensive testing and validation frameworks to ensure rigorous data quality and integrity
  • Utilize Terraform to develop and manage infrastructure, ensuring all components are version-controlled
  • Design and execute automated pipelines for Continuous Integration, Delivery, and Operations
  • Establish deployment pipelines that strictly adhere to segregation of duties principles
  • Coordinate closely with Ops teams to manage seamless transitions through pre-production and production environments
  • Ensure all transformation architectures are optimized for large-scale data processing and high efficiency
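
To make the Silver-to-Gold flow above concrete: a dbt model for a Gold-layer dimension typically flattens the latest satellite row per hub key, roughly as sketched below. The model and column names are hypothetical.

    -- models/gold/dim_customer.sql: latest satellite attributes per hub key
    WITH latest_sat AS (
        SELECT
            customer_hk,
            name,
            email,
            ROW_NUMBER() OVER (PARTITION BY customer_hk
                               ORDER BY load_dts DESC) AS rn
        FROM {{ ref('sat_customer_details') }}
    )
    SELECT
        h.customer_hk AS customer_key,
        h.customer_bk AS customer_id,
        s.name,
        s.email
    FROM {{ ref('hub_customer') }} AS h
    JOIN latest_sat AS s
      ON s.customer_hk = h.customer_hk
     AND s.rn = 1
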
What we offer
  • Health insurance
  • Employees' pension insurance
  • Employment insurance
  • Saturdays, Sundays, and public holidays off
  • Bonus

Lead Data Engineer

Lead Data Engineer to design, build, and maintain the integrity of our core data...
Location:
United Kingdom, Swindon
Salary:
75,000 GBP / Year
TalentHawk
Expiration Date
Until further notice
Requirements
  • Proven experience leading data engineering or BI teams within complex environments
  • Hands-on expertise in designing and implementing Enterprise Data Warehouses
  • Track record of building secure data pipelines across multiple source systems
  • A degree in Computer Science, Data Engineering, or a related field (or equivalent experience)
  • Relevant certifications (e.g., Azure/AWS Data Engineer, Snowflake) are highly desirable
  • Strong grasp of Data Vault, Kimball, or equivalent design patterns
  • Expert-level SQL, ETL/ELT pipeline development, and modern engineering tools
  • Proficiency with cloud-based services (Azure, AWS, or GCP) and Power BI
  • Deep understanding of data security, GDPR, and governance frameworks
  • Exceptional leadership and mentoring capabilities
Job Responsibility
  • Architecture & Implementation: Own the Data Warehouse lifecycle, ensuring high availability, security, and scalability
  • Data Integration: Build and maintain robust pipelines to ingest and transform data from diverse systems (Salesforce, NetSuite, and digital platforms)
  • Team Leadership: Manage and mentor the BI team, providing technical direction and fostering a high-performance culture
  • Data Governance & Security: Implement validation practices, metadata management, and data lineage to ensure GDPR compliance and data integrity
  • Stakeholder Collaboration: Act as a bridge between technical teams and business leaders to translate reporting needs into actionable technical solutions
  • Strategic Input: Evaluate new technologies and provide expert advice on programs requiring integrated data and analytics

Work Type: Fulltime

Middle Data Engineer

We are seeking a skilled Data Engineer with 3-5 years of experience to join our ...
Location:
Ukraine
Salary:
Not provided
N-iX
Expiration Date
Until further notice
Requirements
  • 2+ years of professional experience in data engineering or backend software engineering with a data focus
  • Strong proficiency in Python for data manipulation and scripting
  • Expert-level SQL skills for complex querying and performance tuning
  • Hands-on production experience with modern cloud data platforms, specifically Snowflake and Databricks
  • Proven experience using dbt in a production environment for transformation layers
  • Experience building and managing complex DAGs in Apache Airflow
  • Cloud platform experience: AWS
  • Working knowledge of Terraform for deploying and managing cloud resources
Job Responsibility
  • Design, develop, and maintain reliable ETL/ELT pipelines using Python and SQL to ingest data from various sources into our data lake/warehouse
  • Orchestrate complex data workflows and dependencies using Apache Airflow, ensuring timely data delivery and robust failure handling
  • Champion the use of dbt (data build tool) for developing, testing, and documenting data transformation logic within the warehouse
  • Develop clean, highly optimized SQL models for reporting and analytics (data modeling concepts like Star Schema or Data Vault are a plus)
  • Work hands-on with both Snowflake and Databricks, optimizing compute resources, managing access controls, and ensuring high performance for end-users
  • Utilize Terraform to provision and manage cloud infrastructure (e.g., S3 buckets, IAM roles, Snowflake warehouses) in an Infrastructure-as-Code paradigm
  • Implement data quality checks and monitoring within pipelines to ensure the accuracy and integrity of our data (see the sketch after this list)
  • Troubleshoot pipeline failures, identify performance bottlenecks, and implement long-term fixes
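
As an illustration of the in-pipeline quality checks mentioned above, a plain SQL assertion can run as an Airflow task that fails whenever the query returns a row. Table and column names below are hypothetical.

    -- Returns a row only if the orders table violates key uniqueness or completeness
    SELECT
        COUNT(*)                                          AS row_count,
        COUNT(DISTINCT order_id)                          AS distinct_keys,
        SUM(CASE WHEN order_id IS NULL THEN 1 ELSE 0 END) AS null_keys
    FROM analytics.orders
    HAVING COUNT(DISTINCT order_id) < COUNT(*)
        OR SUM(CASE WHEN order_id IS NULL THEN 1 ELSE 0 END) > 0;
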
What we offer
  • Flexible working format - remote, office-based or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

Azure Data Engineer

We are seeking a highly skilled and experienced Azure Data Engineer to join our ...
Location:
Not provided
Salary:
Not provided
Vidushi Infotech SSP Pvt. Ltd.
Expiration Date
Until further notice
Requirements
  • 5+ years of hands-on experience as a Data Engineer, primarily focused on the Microsoft Azure data stack
  • Expert-level proficiency in Databricks (Spark SQL/PySpark), Python, and SQL
  • Strong, practical knowledge of core Azure data services, including Azure Data Lake Storage (Gen2) and Azure Synapse Analytics (or Azure SQL Data Warehouse)
  • Deep understanding and experience with modern ETL/ELT principles and tools (e.g., Azure Data Factory)
  • Solid understanding of the capabilities and architecture of Microsoft Fabric
  • Proven experience with code versioning (Git), unit testing frameworks, and principles of writing production-ready, clean, and well-documented code
  • Demonstrated ability to identify and implement performance and cost optimization techniques across data storage and processing layers
  • Excellent analytical and problem-solving skills with a track record of successfully refactoring complex or legacy data infrastructure
Job Responsibility
  • Design, development, and implementation of robust and scalable ETL/ELT processes using Azure services and Databricks
  • Act as a subject matter expert for Databricks, leveraging its capabilities for large-scale data processing, advanced analytics, and machine learning workloads
  • Write, optimize, and maintain high-quality code primarily in Python and SQL for data transformation, cleaning, and aggregation
  • Utilize a comprehensive suite of Azure services including Azure Data Lake Storage (Gen2), Azure Synapse Analytics, Azure Data Factory, and Azure Key Vault to build and manage end-to-end data solutions
  • Demonstrate and apply strong working knowledge of Microsoft Fabric to unify data, analytics, and AI workloads, contributing to the modernization of our data platform
  • Refactor legacy code for improved performance, readability, and maintainability
  • Write and execute comprehensive unit tests to ensure the reliability and integrity of all data pipelines and code
  • Implement optimization techniques to significantly improve the performance and reduce the cost of existing and new data solutions, especially within Databricks and Synapse (see the sketch after this list)
  • Apply best practices for code versioning using tools like Git (e.g., GitHub, Azure DevOps) within a structured CI/CD environment
  • Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical specifications
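
For the optimization responsibility above, routine Delta table maintenance is one common performance and cost lever in Databricks SQL; the table name below is hypothetical.

    -- Compact small files and co-locate rows by a frequently filtered column
    OPTIMIZE sales.transactions ZORDER BY (customer_id);

    -- Drop data files no longer referenced by the table (168 hours = the 7-day default retention)
    VACUUM sales.transactions RETAIN 168 HOURS;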

Work Type: Fulltime

Data Engineer

Seeking a Data Engineer: Rhino, are you there? At WE ARE META, we focus on findi...
Location:
Portugal, Porto
Salary:
Not provided
We Are Meta
Expiration Date
Until further notice
Requirements
  • At least 5 years of experience working as a Data Engineer or Data Quality Engineer
  • Expert-level proficiency in SQL and SQL-like query languages
  • Strong expertise in Python, including experience structuring and organizing Python-based projects
  • Extensive hands-on experience with Azure Data Factory and Databricks
  • In-depth knowledge and practical experience with ETL processes
  • Solid experience working with Microsoft Azure Cloud and its related services
  • Strong understanding of data modeling methodologies, including Kimball, Inmon, and Data Vault
  • Fluency in English is mandatory
  • Availability for a remote work model based in Portugal
What we offer
  • Welcome kit
  • Opportunities for career progression
  • Health insurance
  • Coverflex meal card
  • Other protocols and special discounts

Principal Data Architect

At Datatonic, we are Google Cloud's premier partner in AI, driving transformatio...
Location:
United Kingdom, London
Salary:
Not provided
Datatonic
Expiration Date
Until further notice
Requirements
  • Proven experience designing and building data warehouse / lakehouse solutions using technologies like BigQuery, Azure Synapse, Snowflake, Databricks
  • Strong expertise in data modeling and solution architecture, optimizing for performance and scalability
  • Experience with data platforms with data quality, security, privacy, and governance controls built-in
  • Ability to take projects from concept to completion, driving creative and effective solutions
  • Demonstrated problem-solving skills with a strong technical foundation and an innovative approach
  • Exceptional written and verbal communication skills with great attention to detail, capable of presenting complex concepts clearly to customers
  • Ability to build and maintain strong relationships with key external stakeholders across different business levels
  • Hands-on experience with Python, Java, and SQL for data engineering and solution development
Job Responsibility
  • Design & Deliver Cutting-Edge Data Solutions: Lead the analysis, design, and execution of state-of-the-art, data-driven solutions to meet our client’s business needs, leveraging the best of Google Cloud technologies
  • Data Architecture & Governance: Serve as an expert in data transformation, storage, retrieval, security, and governance, ensuring scalable, secure, and efficient data solutions
  • Guide & Mentor Engineers: Provide architectural direction to engineers, ensuring they build robust, high-performance solutions aligned with your target data architecture
  • Master Data Modeling Techniques: Apply expertise in various data modeling approaches, including 3NF, Data Vault, Star Schema, and One Big Table (OBT). Clearly articulate the benefits and trade-offs of each method and optimize their implementation within columnar databases such as BigQuery (see the sketch after this list)
  • Shape Data Strategy: Collaborate with the client to define and refine data strategy
  • Develop Fully Integrated Solutions: Work alongside Architecture, Engineering, and Data Science teams to design comprehensive, production-ready solutions that incorporate cloud best practices, scalable and efficient ingestion strategies, feature engineering methodologies, and end-to-end production readiness
  • Leverage Leading Technologies: Design and implement solutions using key partner technologies, including:
    • Google Cloud – BigQuery, Dataflow, Vertex AI, and more
    • dbt Labs – modern analytics engineering and transformation
    • Snowflake – cloud-native data warehousing
    • Fivetran – automated data pipelines for seamless integration
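
To illustrate the Star Schema versus One Big Table (OBT) trade-off named above, the same rollup can be written both ways in a columnar engine such as BigQuery; table names are hypothetical.

    -- Star Schema: dimension joined to the fact at query time
    SELECT d.region, SUM(f.amount) AS revenue
    FROM fact_sales AS f
    JOIN dim_customer AS d ON d.customer_key = f.customer_key
    GROUP BY d.region;

    -- One Big Table: attributes pre-joined into one wide table;
    -- more storage, but no join cost when querying
    SELECT region, SUM(amount) AS revenue
    FROM obt_sales
    GROUP BY region;
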
What we offer
  • 25 days annual leave plus bank holidays
  • Private health insurance (Vitality Health)
  • Smart Health Services
  • 50% gym membership discounts (Nuffield Health, Virgin Active, Pure Gym)
  • WFH allowance
  • Access to platforms like Udemy
  • Pension (auto-enrolment after probation period; 3% employer contributions, rising 1% per year of service to a max of 10%)
  • Life Insurance (3 x your base salary)
  • Income Protection (up to 75% of base salary, up to 2 years)
  • Cycle to Work Scheme

Work Type: Fulltime