Temporary Data Analyst

Office Angels (https://www.office-angels.com)

Location:
United Kingdom, Leeds

Category:
Not provided

Contract Type:
Not provided

Salary:
14.00–14.25 GBP / Hour

Job Description:

We are recruiting a Temporary Data Analyst to support the operations function of a globally recognised consumer products organisation operating within a fast-paced supply chain and distribution environment. This role is ideal for someone with a strong analytical background who is highly competent in Excel and enjoys working with large data sets to produce clear, accurate insights that support business decisions.

Job Responsibility:

  • Analyse and manage inventory data, tracking stock movement, identifying discrepancies, and reporting non-compliance
  • Conduct regular data audits to ensure accuracy across systems and reports
  • Identify slow-moving, obsolete, or damaged stock, ensuring correct coding and follow-up actions
  • Reconcile inventory data across multiple systems (such as Oracle)
  • Produce clear, accurate reports highlighting inventory levels, movement trends, variances, and KPIs
  • Act as a point of contact for data-related inventory and quality issues, ensuring system updates are accurate and timely
  • Support receiving and dispatch data checks, resolving discrepancies between deliveries and purchase orders
  • Collaborate closely with internal teams to resolve data inconsistencies and support stock movement planning

Requirements:

  • Proven experience in data analysis, inventory analysis, or a similar analytical operations role
  • Strong Excel capability - confident using VLOOKUP/XLOOKUP, Pivot Tables, and formulas to analyse, reconcile, and report on large and complex data sets (an illustrative sketch follows this list)
  • Experience using inventory management systems, WMS, and ERP platforms (e.g. Oracle or SAP) is highly desirable but not essential
  • A highly analytical mindset with excellent attention to detail
  • Strong organisational skills and ability to manage multiple priorities
  • Ability to thrive in a fast-paced, changing environment
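
The Excel requirement above can be made concrete with a small, hedged sketch. The pandas snippet below only illustrates the same reconciliation pattern in code, the programmatic equivalent of an XLOOKUP against a second system followed by a Pivot Table variance check; the data, system, and column names are all invented, and nothing here is drawn from the posting itself.

    import pandas as pd

    # Invented extracts standing in for a WMS report and an Oracle export.
    wms = pd.DataFrame({
        "sku": ["A100", "A200", "A300"],
        "wms_qty": [120, 45, 0],
    })
    oracle = pd.DataFrame({
        "sku": ["A100", "A200", "A300"],
        "oracle_qty": [120, 40, 5],
    })

    # Left merge on SKU: the code analogue of XLOOKUP(sku, ...).
    recon = wms.merge(oracle, on="sku", how="left")

    # Flag variances for follow-up, as a Pivot Table filter would.
    recon["variance"] = recon["wms_qty"] - recon["oracle_qty"]
    print(recon[recon["variance"] != 0])

In Excel itself the lookup step would be a single formula per row, for example =XLOOKUP(A2, Oracle!A:A, Oracle!B:B, 0), with the variance as a simple subtraction alongside it.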

Additional Information:

Job Posted:
April 23, 2026

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Temporary Data Analyst

Senior Data Analyst

The candidate must have senior data analysis experience in the following: Experi...
Location:
United States, Columbus, Ohio
Salary:
Not provided
Ocean Blue Solutions (oceanbluecorp.com)
Expiration Date:
Until further notice
Requirements:
  • Experience collecting and managing WIC data as found in the State of Ohio WIC applications, including but not limited to Omneity and WIC Vendor Management
  • Experience collecting and managing WIC data from State of Ohio WIC agency via the Innovation Ohio Platform (IOP)
  • Experience using Impala/ANSI SQL to select and transform data as needed
  • SQL experience must include data conversion, pivots, and logical assignments (see the sketch after this list)
  • Experience using Hue (Impala Editor) for querying large datasets, managing SQL scripts, and creating reusable query workflows within shared or sandbox environments
  • Experience creating views or temporary tables within a sandbox database and/or schema to facilitate more complex queries
  • Experience working with business users to understand and document their analytics and reporting requirements
  • Experience developing complex reports and dashboards for state and federal reporting
  • Experience with all types of testing from unit to system and user acceptance testing, developing test cases and scenarios along with business users, reporting and tracking defects and triaging/resolving issues
  • Experience training business users on analytic tools and data structures
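
As a hedged aside on the Impala/ANSI SQL bullets above (sketch referenced there): the pandas snippet below mirrors the two named transforms, a pivot and a CASE WHEN-style logical assignment, in Python rather than SQL, since no real schema is available; every table and column name is invented for illustration.

    import pandas as pd

    # Invented long-format counts standing in for queried program data.
    long = pd.DataFrame({
        "clinic": ["North", "North", "South", "South"],
        "month":  ["Jan", "Feb", "Jan", "Feb"],
        "visits": [30, 42, 25, 18],
    })

    # Pivot: one row per clinic, one column per month, the shape a
    # SQL PIVOT or conditional aggregation would produce.
    wide = long.pivot(index="clinic", columns="month", values="visits")

    # Logical assignment: a derived flag, like a CASE WHEN expression.
    wide["rising"] = wide["Feb"] > wide["Jan"]
    print(wide)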

Senior Data Analyst

The candidate must have senior data analysis experience in the following: Experi...
Location:
United States, Columbus
Salary:
Not provided
Ocean Blue Solutions (oceanbluecorp.com)
Expiration Date:
Until further notice
Requirements:
  • Experience collecting and managing birth defect data as found in the State of Ohio applications, including but not limited to the Maternal and Child Health Integrated Data System
  • Experience collecting and managing birth defect data from the State of Ohio Department of Health agency via the Innovation Ohio Platform (IOP)
  • Experience using Impala/ANSI SQL to select and transform data as needed. SQL experience must include data conversion, pivots, and logical assignments
  • Experience using Hue (Impala Editor) for querying large datasets, managing SQL scripts, and creating reusable query workflows within shared or sandbox environments
  • Experience creating views or temporary tables within a sandbox database and/or schema to facilitate more complex queries
  • Experience working with business users to understand and document their analytics and reporting requirements
  • Experience developing complex reports and dashboards for state and federal reporting
  • Experience with all types of testing from unit to system and user acceptance testing, developing test cases and scenarios along with business users, reporting and tracking defects, and triaging/resolving issues
  • Experience training business users on analytic tools and data structures

Product Data Analyst

We're seeking our first Product Data Analyst to establish data-driven product in...
Location:
Spain, Madrid Remote, Barcelona Remote, Spain Remote
Salary:
Not provided
Maisa (maisa.ai)
Expiration Date:
Until further notice
Requirements:
  • 4+ years of product analytics experience, preferably with AI/ML products or enterprise software
  • Strong experience with AI product metrics (model performance, user-AI interaction patterns, automated workflow success)
  • Proficiency in SQL and experience with both relational and NoSQL databases (MongoDB, PostgreSQL)
  • Experience with data visualization tools (Tableau, Looker, Grafana, or similar)
  • Strong programming skills in Python or R for data analysis and automation
  • Experience analyzing complex user journeys and enterprise software adoption patterns
  • Understanding of statistical analysis, A/B testing, and experimental design (see the sketch after this list)
  • Experience working with event-driven architectures and real-time data streams
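
On the A/B-testing bullet above (sketch referenced there): the snippet below runs a standard two-proportion z-test using only the Python standard library. It is a generic illustration of the technique, not anything from the posting; all counts are invented.

    from math import sqrt, erf

    def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for a difference in conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Invented counts: variant A converts 120/1000, variant B 90/1000.
    z, p = two_proportion_ztest(120, 1000, 90, 1000)
    print(f"z={z:.2f}, p={p:.3f}")  # roughly z=2.19, p=0.029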
Job Responsibility:
  • Design and implement comprehensive product analytics framework for Maisa Studio
  • Establish key product metrics, KPIs, and success measurements for AI Digital Workers
  • Build data pipelines and dashboards to track user behavior, worker performance, and platform adoption
  • Create the foundational data infrastructure to support product, engineering, and business decisions
  • Define data governance and quality standards for product analytics
  • Analyze Digital Worker creation, configuration, and deployment patterns
  • Track AI reasoning quality, execution success rates, and error patterns across the KPU system
  • Measure user adoption of different AI tools, integrations, and workflow configurations
  • Analyze the effectiveness of our "Chain of Work" traceability and explainability features
  • Monitor AI model performance, token usage, and computational efficiency across workers
What we offer:
  • Opportunity to shape the future of accountable enterprise AI agents
  • Competitive compensation package
  • Flat organization focused on impact rather than hierarchy
  • Work with cutting-edge computational AI technology
  • Dynamic, experienced team of technical experts
  • Continuous learning in the rapidly evolving field of Agentic AI
  • Full-time position


Azure Data Engineer

The Azure Data Engineer role involves designing, building, and maintaining ETL p...
Location:
India, Chennai
Salary:
Not provided
NTT DATA (nttdata.com)
Expiration Date:
Until further notice
Requirements:
  • 5–8+ years of experience as a Data Engineer
  • Strong hands‑on expertise in Azure (Data Factory, Databricks, Data Lake Storage, SQL, Synapse preferred)
  • Proven ability to build production‑grade ETL/ELT pipelines supporting complex, multi‑regional business processes
  • Experience designing or implementing rules engines (Drools, ODM, or similar)
  • Strong SQL skills and experience with data modeling, data orchestration, and pipeline optimization
  • Experience working in Agile Scrum teams and collaborating across global regions (U.S. and India preferred)
  • Ability to partner closely with analysts and business stakeholders to translate rules into technical solutions
  • Excellent debugging, optimization, and engineering problem‑solving skills
  • Minimum Skills Required: SQL, Python, Azure Data Factory, Databricks, Azure Synapse
Job Responsibility:
  • Design, build, and maintain Azure‑based ETL pipelines (e.g., Data Factory, Databricks, Data Lake) to ingest, clean, transform, and aggregate compensation‑related datasets across multiple regions
  • Engineer upstream processes to produce 9–10 monthly aggregated output files (customer, revenue, product, sales rep, etc.), delivered 3× per month
  • Ensure repeatability, monitoring, orchestration, and error‑handling for all ingestion and transformation workflows
  • Contribute to the creation of a master stitched data file to replace Varicent’s current data‑assembly functions
  • Build, configure, and maintain a rules engine (ODM, Drools, or similar) to externalize business logic previously embedded in code (see the sketch after this list)
  • Translate rules and logic captured by analysts and business SMEs into scalable, testable engine components
  • Implement versioning, governance, and validation mechanisms for all logic used in compensation calculations
  • Ensure rule changes can be managed safely, reducing risk in high‑stakes compensation scenarios
  • Partner with data architects to implement the target‑state Azure data architecture for compensation analytics
  • Develop optimized, scalable physical data models aligned to business logic and downstream needs
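
To picture the rules-engine responsibility above (sketch referenced there): Drools and ODM are Java products, so the Python snippet below is only a language-neutral, hedged illustration of the underlying idea: business logic held as data rather than hard-coded, so a rule change means editing the rule table instead of redeploying. Rule names and thresholds are invented.

    # Rules as data: each rule pairs a predicate with an outcome.
    RULES = [
        {"name": "cap_large_deals",
         "when": lambda d: d["deal"] > 500_000,
         "then": lambda d: d["deal"] * 0.01},
        {"name": "standard_rate",
         "when": lambda d: True,
         "then": lambda d: d["deal"] * 0.03},
    ]

    def evaluate(record):
        """Apply the first matching rule, as a rules engine would."""
        for rule in RULES:
            if rule["when"](record):
                return rule["name"], rule["then"](record)
        raise ValueError("no rule matched")

    print(evaluate({"deal": 750_000}))  # ('cap_large_deals', 7500.0)
    print(evaluate({"deal": 100_000}))  # ('standard_rate', 3000.0)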
