Data Migration Junior Developer

Advisory Group TEST Human Resources

Location:
Kraków, Poland

Contract Type:
Not provided

Salary:
Not provided

Job Description:

As a Data Migration Junior Developer, you’ll play a critical role in ensuring a successful transition from a legacy HCM system to SAP SuccessFactors. Your work will directly support our HR-IT transformation program by enabling clean, validated, and structured data migration, the foundation for any reliable system go-live. You’ll work closely with business analysts, data owners, and technical teams to analyze source data, identify and address data quality issues, implement transformation logic, and execute the migration process using ABAP and other tools. Your contribution will be essential not only for technical execution but also for building trust in data integrity among stakeholders.

Job Responsibility:

  • Analyzing legacy HCM data structures and other required data sources, and then identifying data elements in scope for migration
  • Developing transformation logic to map source data to the SuccessFactors data model (a minimal sketch follows this list)
  • Coding data extraction and transformation routines using ABAP
  • Preparing and maintaining migration templates and conversion scripts
  • Executing test migrations and supporting defect resolution during testing cycles
  • Supporting data cleansing analysis and enrichment in coordination with business stakeholders
  • Validating migrated data against source systems for completeness and accuracy
  • Collaborating with functional consultants and data owners to align on data requirements
  • Supporting cutover planning, final data loads, and go-live activities
  • Performing technical troubleshooting of data load issues, including field mapping errors, transformation logic mismatches, and system performance bottlenecks during migration cycles
  • Documenting all migration logic, field mappings, and technical procedures for audit and reuse
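
To make the mapping, transformation, and validation duties above concrete, here is a minimal illustrative sketch in Python (the role itself uses ABAP, where the pattern looks similar). It is a sketch only: the legacy field names (PERNR, ENAME, BEGDA), the date format, and the target field names are hypothetical placeholders, not the actual legacy or SuccessFactors schema.

```python
# Illustrative only: all field names and the YYYYMMDD date format are
# hypothetical placeholders, not the real legacy or SuccessFactors schema.
from datetime import datetime

FIELD_MAP = {            # hypothetical source-to-target field mapping
    "PERNR": "userId",
    "ENAME": "displayName",
    "BEGDA": "startDate",
}

def transform_record(legacy: dict) -> dict:
    """Apply the field mapping plus a simple date-format conversion."""
    target = {dst: legacy.get(src) for src, dst in FIELD_MAP.items()}
    if target["startDate"]:  # normalize YYYYMMDD to ISO 8601
        target["startDate"] = (
            datetime.strptime(target["startDate"], "%Y%m%d").date().isoformat()
        )
    return target

def validate_record(target: dict) -> list:
    """Return the data quality issues found in one migrated record."""
    return [f"missing required field: {f}"
            for f in ("userId", "displayName", "startDate") if not target.get(f)]

if __name__ == "__main__":
    legacy_rows = [{"PERNR": "00001234", "ENAME": "Jan Kowalski", "BEGDA": "20190401"}]
    for row in legacy_rows:
        migrated = transform_record(row)
        print(migrated, validate_record(migrated) or "OK")
```

In practice the mapping table would come from the signed-off field-mapping workbook, and failing records would feed a defect log rather than standard output.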

Requirements:

  • Master’s or Bachelor’s degree
  • Min. 3 years of experience in data analysis using Excel (Power Query, Pivot Tables)
  • Basic experience with Python and/or Java
  • Willingness to work with and learn ABAP
  • Understanding of data mapping, transformation, and validation principles
  • Strong analytical thinking and ability to identify inconsistencies in large data sets (see the short illustration after this list)
  • Ability to analyze and document data requirements and field-level mappings
  • Very good English skills
  • Independent and systematic approach to work
  • Clear and effective communication with technical and non-technical stakeholders
  • Ability to detect data inconsistencies, errors, and mismatches across large and complex data sets
  • Team player
  • Flexibility and adaptability
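
As a short illustration of the inconsistency-spotting this list asks for, the following sketch uses pandas as a programmatic stand-in for the Excel / Power Query analysis named above. The column names and sample data are invented for the example.

```python
# Hypothetical column names and sample data; pandas stands in for the
# Excel / Power Query analysis named in the requirements.
import pandas as pd

df = pd.DataFrame({
    "employee_id": ["1001", "1002", "1002", "1004"],
    "email": ["a@example.pl", None, "b@example.pl", "c@example.pl"],
})

# Inconsistency 1: duplicate keys (the same employee_id more than once).
duplicates = df[df.duplicated(subset="employee_id", keep=False)]

# Inconsistency 2: missing mandatory values.
missing_email = df[df["email"].isna()]

print("duplicate ids:\n", duplicates)
print("missing emails:\n", missing_email)
```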

Nice to have:

Knowledge of SAP HCM and/or SuccessFactors data structures and infotypes will be an asset

What we offer:
  • Attractive development opportunities
  • Private medical care for employees and their families
  • Attractive discounts on flight tickets to help fulfil your dreams of far-off journeys
  • Possibility to adjust your place and hours of work to personal needs
  • Hybrid working model (work in the office one day per week)
  • Opportunity to work remotely from anywhere in Poland
  • Flexible working time
  • Brand-new, modern office: 200 adjustable desks, several types of meeting rooms, bike amenities, lounge rooms and chill-out spaces, cozy kitchens, and excellent public transport within walking distance
  • Continuous improvement environment
  • Supportive & friendly working atmosphere - we work towards joint success

Additional Information:

Job Posted:
January 20, 2026

Work Type:
Hybrid work

Similar Jobs for Data Migration Junior Developer

Forward Deployed Engineer - Data Migration & Data Consolidation Platforms

As a Forward Deployed Engineer (FDE) for Data Migration & Data Consolidation Pla...
Location:
United States
Salary:
Not provided
Rackspace
Expiration Date:
Until further notice
Requirements:
  • 7-10+ years of progressive experience in enterprise data engineering, data migration, or large-scale system integration roles within complex, multi-platform environments
  • 3-5+ years directly leading end-to-end data migration or multi-system consolidation programs for Global Enterprises and Industry Leaders, with full ownership of technical delivery and client outcomes
  • Demonstrated client-facing experience serving as a trusted technical advisor to C-level executives, enterprise architecture teams, and cross-functional business stakeholders
  • Proven industry depth in at least two of the following verticals: Healthcare, Financial Services, Manufacturing, Retail, Energy & Utilities, or Public Sector
  • Hands-on migration complexity: successfully delivered programs involving 3+ heterogeneous source systems, 100M+ records, complex master data harmonization, and multi-phase cutover execution
  • Advanced proficiency in Python and SQL with working experience in PySpark and TypeScript/JavaScript
  • Hands-on expertise with modern ETL/ELT and data integration platforms (Informatica, Talend, Matillion, Fivetran, AWS Glue, Azure Data Factory)
  • Proven ability to build scalable, version-controlled data pipelines with error handling, incremental loading, and Change Data Capture (CDC)
  • Strong working knowledge of at least one major cloud provider (AWS, Azure, or GCP), including core infrastructure, managed data services, and security configurations
  • Experience with enterprise data warehouse and lakehouse platforms (Snowflake, Databricks, BigQuery, Redshift, Synapse Analytics, Delta Lake)
Job Responsibility:
  • Migration Execution & Cloud Architecture: Lead end-to-end delivery of enterprise data migrations from corporate systems (SAP, Oracle, Epic ERP) to target cloud data platforms, including the design of cloud landing zones, data governance frameworks, and system rationalization strategies. Establish migration compliance controls, automated rollback procedures, and operational readiness gates while owning full technical accountability for 12–18+ month migration roadmaps
  • Data Pipeline Engineering & Transformation: Build production-grade data connectors to SAP (RFC, IDoc, BAPI, OData), Oracle (AQ, GoldenGate, APIs), and SQL/non-relational sources. Develop ETL/ELT pipelines with LLM-enabled transformation logic, multi-layer validation and reconciliation frameworks, and optimized throughput for datasets scaling from tens of millions to billions of records, with built-in CDC and incremental loading (a minimal watermark sketch follows this list)
  • Ontology Layer Development & Schema Automation: Construct semantic ontology layers translating raw ERP structures into business-consumable objects (Customer, Order, Invoice, Product, Vendor, Asset). Deploy automated schema mapping agents for source-to-target analysis and transformation logic generation. Build unified master data models with row/column-level security, cross-system lineage tracking, and AI-ready semantic structures
  • Application & Workflow Delivery: Build operational dashboards, migration control centers, and agent-driven workflows for automated validation, exception handling, and anomaly detection using low-code platform tools. Generate TypeScript/Python SDKs for custom integrations and deliver real-time monitoring and self-service interfaces for migration progress, data quality KPIs, and compliance tracking
  • Multi-System Consolidation & Master Data Management: Lead consolidation of 5–15+ fragmented ERP instances into standardized master data models. Resolve complex entity resolution challenges including customer matching, product harmonization, and chart of accounts unification. Establish golden record frameworks, data quality scorecards, survivorship rules, and data stewardship workflows for post-migration governance
  • Client Engagement, Discovery & Modernization Advisory: Serve as primary technical advisor to C-suite and enterprise architecture stakeholders across all engagement phases. Deploy discovery agents to analyze legacy data estates, conduct assessment workshops, facilitate solution design sessions, and deliver executive briefings, go/no-go readiness assessments, and prioritized modernization roadmaps
  • Knowledge Transfer, Enablement & IP Development: Build reusable migration accelerators, playbooks, and reference architectures that scale across engagements. Lead knowledge transfer to upskill client teams for post-migration ownership and collaborate with internal product and sales engineering teams to feed field insights back into platform development and delivery methodology
  • Leadership & Executive Engagement: Operate autonomously in ambiguous, high-stakes client environments, driving outcomes with minimal oversight; translate deeply technical concepts into clear, business-level narratives for C-suite audiences through executive briefings and stakeholder communications; navigate organizational complexity, competing stakeholder priorities, and enterprise change management dynamics to maintain momentum across multi-workstream engagements
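
To ground the CDC and incremental-loading duties above, here is a minimal watermark-based incremental-load pattern in Python. It is a sketch under stated assumptions: sqlite3 stands in for any DB-API source, and the table, column, and watermark values are hypothetical; real engagements would more likely use the platform's log-based CDC.

```python
# A minimal watermark-based incremental load: each run fetches only rows
# changed since the previous run's high-water mark. sqlite3 is a stand-in
# for any DB-API source; table and column names are hypothetical.
import sqlite3

def load_increment(conn, last_watermark):
    """Fetch rows updated after the stored watermark and advance it."""
    rows = conn.execute(
        "SELECT id, payload, updated_at FROM source_orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE source_orders (id INTEGER, payload TEXT, updated_at TEXT)")
    conn.executemany(
        "INSERT INTO source_orders VALUES (?, ?, ?)",
        [(1, "a", "2024-01-01"), (2, "b", "2024-02-01")],
    )
    rows, wm = load_increment(conn, "2024-01-15")
    print(rows, wm)  # only the February row; the watermark advances to its timestamp
```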

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
Gurugram, India
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Develop ETL/ELT solutions efficiently using Azure cloud services and Snowflake (a skeleton sketch follows this list)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Communicate proactively with stakeholders and mentor/guide junior resources
Work Type: Fulltime
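
As a rough skeleton of the extract/transform/load structure these responsibilities describe, the sketch below is deliberately source-agnostic: in this role the extract and load steps would live in Azure Data Factory / Databricks and land in Snowflake, and all names and the sample CSV here are invented.

```python
# Deliberately source-agnostic ETL skeleton; in this role the extract/load
# would run in Azure Data Factory / Databricks and land in Snowflake.
# All names and the sample CSV are invented.
import csv
import io

RAW = "store_id,fuel_sales\n12,1000.5\n17,830.0\n"

def extract(raw: str) -> list:
    """Read raw source rows (here: an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list) -> list:
    """Cast types; real pipelines add business rules and derived fields."""
    return [{"store_id": int(r["store_id"]), "fuel_sales": float(r["fuel_sales"])}
            for r in rows]

def load(rows: list) -> None:
    """Stand-in for a warehouse write (e.g., a Snowflake COPY INTO)."""
    for r in rows:
        print("loading", r)

load(transform(extract(RAW)))
```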

Data Engineer

We are looking for a Data Engineer with a collaborative, “can-do” attitude who i...
Location:
Gurugram, India
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 3+ years of experience with setting up and operating data pipelines using Python or SQL
  • 3+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 3+ years of strong and extensive hands-on experience in Azure, preferably data heavy/analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 3+ years of experience in defining and enabling data quality standards for auditing and monitoring
  • Strong analytical abilities and a strong intellectual curiosity
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
  • Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data lakes among other structured and unstructured storage options
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
  • Develop ELT/ETL solutions efficiently using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets
Work Type: Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer with a collaborative, “can-do” attitud...
Location:
Gurugram, India
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
  • Strong analytical abilities and a strong intellectual curiosity
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
  • Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
  • Develop ETL/ELT solutions efficiently using Azure cloud services and Snowflake, including testing and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets
Work Type: Fulltime

Senior Data Engineer

Architect and deliver modern data platform solutions with a strong emphasis on D...
Location:
Houston, United States
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science or equivalent hands‑on experience
  • Deep, hands-on expertise working with Databricks for data engineering, ETL development, and migration initiatives
  • Databricks certifications demonstrating advanced platform proficiency
  • Experience operating within major cloud ecosystems such as AWS, Azure, or Google Cloud
  • Strong foundation in modern big‑data tools, distributed processing frameworks, and large‑scale data technologies
  • Solid understanding of data warehousing principles, dimensional modeling, and advanced SQL development
  • Background working with traditional relational database systems and migrating data from on‑premise RDBMS to cloud-native platforms
  • Proficiency in core programming and scripting languages, especially Python and SQL
  • Strong grasp of data governance concepts, data quality frameworks, and enterprise‑grade security practices
  • Extensive experience with relational databases (e.g., SQL Server) and expertise in database design, schema modeling, and performance considerations
Job Responsibility:
  • Architect and deliver modern data platform solutions with a strong emphasis on Databricks and contemporary cloud data technologies
  • Build secure, scalable, and high‑performing data environments that enable analytics, reporting, and enterprise‑wide data initiatives
  • Oversee and execute migrations from legacy relational databases into Databricks-based ecosystems (a brief sketch follows this list)
  • Design and structure scalable data pipelines and foundational data infrastructure aligned with organizational goals
  • Create and maintain ETL/ELT processes within Databricks to ensure efficient ingestion, transformation, and delivery of data
  • Continuously refine and optimize data workflows to improve performance, stability, and data quality across all processes
  • Manage end-to-end data transitions to ensure operational continuity with minimal business disruption
  • Monitor Databricks workloads and optimize performance, scalability, and cost efficiency across compute and storage layers
  • Partner with data engineers, scientists, analysts, and product stakeholders to gather requirements and build fit‑for‑purpose data solutions
  • Establish and enforce data engineering best practices, development standards, and architectural guidelines
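
As a brief sketch of the RDBMS-to-Databricks migration step referenced above, the following PySpark fragment reads a table over JDBC and lands it as a Delta table. The connection string, credentials, and table names are placeholders, and the snippet assumes a Databricks (or local Spark + Delta) environment where the "bronze" schema already exists.

```python
# Placeholders throughout: the JDBC URL, credentials, and table names are
# invented; assumes a Databricks or local Spark + Delta environment.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rdbms-to-delta").getOrCreate()

# Extract: pull one table from the legacy RDBMS over JDBC.
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-host;databaseName=erp")  # placeholder
    .option("dbtable", "dbo.customers")                              # placeholder
    .option("user", "reader")
    .option("password", "***")
    .load()
)

# Transform: light cleanup before landing in the lakehouse.
cleaned = src.withColumn("email", F.lower(F.trim(F.col("email"))))

# Load: write a Delta table, the usual Databricks target format.
cleaned.write.format("delta").mode("overwrite").saveAsTable("bronze.customers")
```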
What we offer:
  • Medical
  • Vision
  • Dental
  • Life and disability insurance
  • 401(k) plan

OpenText Exstream Developer

This role is responsible for the development, installation, and maintenance of o...
Location:
Pune, India
Salary:
Not provided
Cencora
Expiration Date:
Until further notice
Requirements:
  • 4+ years of hands‑on experience in OpenText Exstream 23.x / 24.x / 25.x (CloudNative)
  • Strong experience with XML, Print Miner, columnar and delimited data inputs
  • Expertise in batch and real‑time application design
  • Skilled in creating automated and complex table structures
  • Experience working with barcodes and inserter configurations
  • Proficient in generating outputs: PS, PDF, AFP, Empower, multi‑channel delivery
  • Deep knowledge of rules, formula variables, control files, document/pages setup, design layers, language layers
  • Hands‑on experience with two‑pass application design
  • Knowledge of orchestration workflows
  • Proficient in sorting, bundling, and post‑processing (AFP/PDF)
Job Responsibility:
  • Develop OpenText Exstream applications
  • Design, implement, unit test, document, and deploy applications/APIs
  • Develop database solutions using SSIS, T‑SQL, and stored procedures
  • Collaborate with business teams to define logical designs aligned with data architecture
  • Perform code reviews, analyze execution plans, and optimize/refactor code
  • Provide technical guidance to junior software engineers
  • Follow data standards, resolve data issues, perform unit testing, and document ETL processes
  • Assist managers with project documentation, progress tracking, and test plan creation
  • Work with business analysts and source system experts on data extraction & transformation requirements
  • Coordinate with IT operations and testing teams for timely, sustainable releases
Work Type: Fulltime

Programmer Analyst

At Boeing, we innovate and collaborate to make the world a better place. We’re c...
Location:
Seal Beach, United States
Salary:
144,414.00 USD / year
Boeing
Expiration Date:
May 02, 2026
Requirements:
  • Bachelor’s degree or foreign equivalent in Computer Science, Information Technology, or a related technology field
  • 5 years of progressive, post-baccalaureate experience as a Programmer Analyst or in a related position
  • 3 years of experience with Salesforce development
  • 3 years of experience designing data models, security models, and governor limits with Salesforce
  • 3 years of experience integrating Salesforce using Salesforce Connect, REST API, SOAP API, or middleware platforms
  • 3 years of experience with Web technologies including HTML, CSS, or JavaScript
  • 3 years of experience performing data migrations using Data Loader or Salesforce Data Import Wizard
  • 3 years of experience optimizing Salesforce Object Query Language (SOQL), improving page load times, and reducing governor limit consumption
  • Salesforce Advanced Administrator Certification
Job Responsibility:
  • Solicit business requirements for feature designs on the Salesforce platform
  • Design, develop, and configure scalable features specific to business processes and customized business logic to extend the functionality of the Salesforce platform
  • Design and implement business-specific data models
  • Perform data migration and data transformation tasks (a small sketch follows this list)
  • Conduct unit testing, integration testing, and user acceptance testing to ensure stability of the developed solutions
  • Leverage relationships with business users
  • Debug and resolve issues encountered during development and production
  • Identify and resolve performance bottlenecks in Salesforce applications
  • Manage deployment processes such as version control, change management, and release planning
  • Create technical documentation, including design documents, release notes, and user guides
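
As a small sketch of a scripted Salesforce data-migration step like the one above, the following Python fragment uses the third-party simple-salesforce package as a lightweight stand-in for Data Loader. The credentials, the SOQL filter, and the field choices are all hypothetical.

```python
# A tiny scripted migration step: extract Account records with SOQL,
# transform them, and bulk-insert into a target org. simple-salesforce is
# a stand-in for Data Loader; credentials and fields are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",   # placeholder credentials
    password="***",
    security_token="***",
)

# Extract: selecting only the needed fields keeps the SOQL query selective,
# which also helps stay within governor limits.
result = sf.query("SELECT Id, Name FROM Account WHERE CreatedDate = LAST_N_DAYS:30")

# Transform: trivial cleanup stands in for real mapping logic.
payload = [{"Name": rec["Name"].strip()} for rec in result["records"]]

# Load: bulk insert into the target object.
sf.bulk.Account.insert(payload, batch_size=10000)
```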
What we offer:
  • Generous company match to your 401(k)
  • Industry-leading tuition assistance program pays your institution directly
  • Fertility, adoption, and surrogacy benefits
  • Up to $10,000 gift match when you support your favorite nonprofit organizations
Work Type: Fulltime