We are looking for a hands-on Data Engineer to help build and expand an enterprise data platform in Mason, Ohio. This role will focus on creating a scalable Azure and Microsoft Fabric environment that brings together information from multiple business systems to support reliable reporting, analytics, and future data-driven innovation. The position is ideal for someone who enjoys designing core data architecture, improving data quality, and enabling business teams with trusted insights across manufacturing, service, supply chain, sales, and finance.
Job Responsibilities:
Design and develop a modern enterprise data ecosystem using Azure and Microsoft Fabric, covering ingestion, storage, transformation, and delivery for analytics use cases
Create and support automated data pipelines that collect information from operational applications, external portals, and databases into a centralized environment
Structure raw, refined, and business-ready data layers so teams can access consistent data for dashboards, reporting, and self-service analysis
Consolidate and standardize data from platforms such as Epicor, JobBOSS, Salesforce, and other internal or third-party systems used across the organization
Build reusable data models and business logic that support reporting across order management, procurement, inventory, manufacturing operations, service, and finance
Introduce data validation, reconciliation, monitoring, and error-handling processes to strengthen data accuracy and reduce manual correction efforts
Partner with reporting teams by enabling governed semantic models, optimizing datasets, and supporting secure access to detailed transactional data in Power BI
Define and apply security controls, including role-based permissions and data access rules, in alignment with internal governance standards and privacy expectations
Maintain clear technical documentation for mappings, lineage, transformation rules, data definitions, and engineering standards
Assess existing integration methods, including manual and legacy approaches, and help implement more scalable and controlled data delivery patterns over time
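To give a concrete sense of the validation and layering work described above, here is a minimal Python sketch of promoting raw records into a refined layer with basic validation and a reconciliation check. It is illustrative only: the field names (`order_id`, `qty`) and the pass/reject rules are assumptions, not details from any of the systems named in this posting, and a production version would run inside the Azure/Fabric pipeline tooling rather than plain Python.

```python
# Hypothetical sketch: promoting raw records to a refined layer with
# validation and reconciliation. Field names (order_id, qty) and the
# rules themselves are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ValidationResult:
    refined: list   # records that passed every check
    rejected: list  # records routed to error handling for review


def promote_to_refined(raw_records):
    """Split raw records into refined (valid) and rejected sets."""
    refined, rejected = [], []
    for rec in raw_records:
        # Validation: required key present and quantity is a positive number.
        qty = rec.get("qty")
        if rec.get("order_id") and isinstance(qty, (int, float)) and qty > 0:
            refined.append(rec)
        else:
            rejected.append(rec)
    return ValidationResult(refined, rejected)


def reconcile(refined, source_total):
    """Reconciliation: compare refined quantities against the source system's total."""
    return sum(r["qty"] for r in refined) == source_total
```

Routing failures to a rejected set, rather than silently dropping or hand-correcting them, is one way to reduce the manual correction effort the role calls out.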
Requirements:
At least 5 years of experience in data engineering, data integration, or enterprise data platform development
Hands-on experience with Microsoft data technologies such as Azure Data Factory, Azure Data Lake, Synapse, Microsoft Fabric, or related Azure services
Strong SQL skills with a proven ability to build and optimize complex transformations for analytics workloads
Experience using Python, PySpark, Spark SQL, or similar tools within modern data engineering environments
Background designing data lake, lakehouse, or data warehouse solutions for business reporting and analysis
Solid understanding of dimensional modeling, semantic modeling, and data structures that support enterprise analytics
Experience building ETL or ELT pipelines across multiple source systems and maintaining data quality, lineage, and documentation standards
Strong communication skills and the ability to collaborate effectively with technical teams and business stakeholders in a changing environment
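As a small illustration of the dimensional modeling mentioned in the requirements, the sketch below splits flat transaction records into a product dimension and a fact table keyed by a surrogate key. All field names are hypothetical; a real implementation would target the warehouse or lakehouse layer with SQL or PySpark rather than in-memory Python.

```python
# Hypothetical sketch of dimensional modeling: deriving a star schema
# (one dimension table plus one fact table) from flat transaction
# records. Field names are illustrative assumptions.

def build_star_schema(transactions):
    """Return (dim_product, fact_sales) built from flat transaction records."""
    dim_product = {}  # natural key -> dimension row with surrogate key
    fact_sales = []
    for t in transactions:
        key = t["product_code"]
        if key not in dim_product:
            dim_product[key] = {
                "product_sk": len(dim_product) + 1,  # surrogate key
                "product_code": key,
                "product_name": t["product_name"],
            }
        # Fact rows reference the dimension via the surrogate key only.
        fact_sales.append({
            "product_sk": dim_product[key]["product_sk"],
            "order_date": t["order_date"],
            "amount": t["amount"],
        })
    return list(dim_product.values()), fact_sales
```

Keeping descriptive attributes in the dimension and only keys plus measures in the fact table is what lets Power BI semantic models stay consistent across the reporting areas listed above.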