CrawlJobs

Junior Data Analyst - Distribution

Booker Group

Location:
United Kingdom, Wellingborough

Contract Type:
Not provided

Salary:

Not provided

Job Description:

Kick‑start your analyst career at the heart of one of the UK’s busiest wholesale networks. Our team supports 9 Distribution Centres, 200+ branches and thousands of independent retailers - and you’ll be helping us keep everything moving. As a Junior Data Analyst, you’ll turn performance data into clear insights that help our Customer Services, Transport, Warehouse and Facilities teams work smarter. You’ll look at how our network is performing, spot trends, and help us improve the way we deliver for customers every day. This role is perfect for someone starting out in analytics. You’ll learn how a major distribution operation works, build real hands‑on experience with transport and warehouse data, and make an impact from day one.

Job Responsibility:

  • Developing a deep understanding of how the wholesale industry and our end-to-end distribution operations work, and why
  • Using a combination of data and business understanding to turn data into insight: root-causing issues, defining problems, designing creative solutions and implementing them in the business
  • Reporting data accurately and turning it into useful information which has context and supporting narrative
  • Supporting applications which simplify business processes for Customer Services, Facilities and the wider supply chain team
  • Supporting the day-to-day operation of the Customer Services, Facilities and the wider supply chain team
  • Creating tools which simplify their processes
  • Measuring the costs & benefits associated with your work
  • Following our Business Code of Conduct and always acting with integrity and due diligence

Requirements:

  • Strong business acumen and the drive to make changes to the way the company operates
  • Strong analytical and numerical skills with exceptional attention to detail
  • The ability to simplify complex issues and implement practical solutions
  • Excellent verbal and written communication skills to engage and collaborate with other teams and deliver on objectives
  • The ability to devise new, pragmatic solutions using your own ideas
  • Experience of spreadsheet tools and data visualisation software (Tableau or Microsoft Power BI)
  • The aptitude to learn new technical skills quickly

Nice to have:

  • An understanding of VBA is desirable but not essential
  • Distribution experience is desirable but not essential

What we offer:
  • A Booker colleague card with 10% off purchases at Booker and double discount events up to three times a year
  • After 3 months' service, a Tesco colleague discount card with 10% increasing to 15% off most purchases at Tesco for a 4-day period after every four-weekly pay day, i.e. thirteen times a year
  • 10% off at Tesco Cafe and 20% off all F&F purchases
  • 10% off pay monthly & SIM only deals with Tesco Mobile for yourself
  • Up to 30% off car, pet and home insurance at Tesco bank
  • Free eye test when you spend £50 or more
  • 50% off health checks at Tesco Pharmacy
  • Exclusive access to discounted RAC breakdown cover rates
  • An exclusive deals and discounts website saving you money on everyday purchases including a cycle to work scheme
  • After 3 months' service, you can join our annual Save As You Earn share scheme which allows you to buy Tesco shares in the future at a discount
  • Retirement savings plan (pension) - save up to 5% and Booker will match your contribution
  • Life Assurance - You are covered for death-in-service life cover of up to three times annual pay
  • Health and Wellbeing support and resources including our 24/7, confidential Employee Assistance Programme and Virtual GP for you and your family
  • A great holiday package

Additional Information:

Job Posted:
May 05, 2026

Expiration:
May 15, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Junior Data Analyst - Distribution

Data Engineer - Azure

Salary:
Not provided
Lingaro
Expiration Date
Until further notice
Requirements:
  • 4-6 years of professional experience in a similar role
  • Very good level of communication including ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience in the agile methodologies – supporting tools (JIRA, Azure DevOps)
  • Independence and responsibility in delivering solutions
  • Ability to work under agile methodologies
  • Hands-on experience in designing and optimizing of data storage (data lakes, data warehouses, distributed file systems)
Job Responsibility:
  • Building data pipelines to ingest data from various sources such as databases, APIs, or streaming platforms
  • Integrating and transforming data to ensure its compatibility with the target data model or format
  • Designing and optimizing data storage architectures, including data lakes, data warehouses, or distributed file systems
  • Implementing techniques like partitioning, compression, or indexing to optimize data storage and retrieval
  • Identifying and resolving bottlenecks, tuning queries, and implementing caching strategies to enhance data retrieval speed and overall system efficiency
  • Designing and implementing data models that support efficient data storage, retrieval, and analysis
  • Collaborating with data scientists and analysts to understand their requirements and provide them with well-structured and optimized data for analysis and modeling purposes
  • Collaborating with cross-functional teams including data scientists, analysts, and business stakeholders to understand their requirements and provide technical solutions
  • Communicating complex technical concepts to non-technical stakeholders in a clear and concise manner
  • Independence and responsibility for delivering a solution
What we offer:
  • Stable employment
  • Medical Insurance
  • Grocery Coupons
  • Saving fund
  • 30 days of Christmas bonus
  • Remote work bonus
  • Profit sharing
  • 50% vacation premium
  • 100% remote
  • Flexibility regarding working hours

Senior Java Developer

The Fixed Income Data team is experiencing rapid growth, committed to delivering...
Location:
Canada, Mississauga
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 3-5 years of demonstrable and relevant experience in software development, with a strong focus on API development and big data solutions
  • expertise in developing high-performance APIs for large-scale data platforms and distributed systems
  • extensive hands-on experience with data distribution platforms like Apache Kafka, and various big data storage/querying systems (e.g., Trino, Pinot, Druid, Ignite) for low-latency access via APIs
  • solid understanding of Java / Scala with a focus on building high-performance, concurrent applications
  • strong experience with the Spring stack, particularly Spring Boot for building microservices that expose data via APIs
  • expert-level understanding and demonstrable experience in REST API development for data reporting and consumption
  • demonstrable experience in writing reusable, testable, and efficient code with proper error and exception handling, especially for fault-tolerant API services
  • experience with the design and implementation of cloud-native applications and deployment via Kubernetes / OpenShift, specifically for managing API-driven data services
  • hands-on experience in handling various data structures and optimizing them for API consumption and analytical queries
  • experience with API Gateway, Circuit Breaker, Spring Security, Discovery Server, and monitoring services (e.g., Prometheus, Grafana) is a plus, particularly in an API-driven data ecosystem
Job Responsibility:
  • design, develop, and implement highly scalable and resilient API services for data access and processing, leveraging big data platforms
  • conduct feasibility studies, time and cost estimates for new API-driven data solutions and establish and implement new or revised applications and systems to meet specific business needs or user areas
  • monitor and control all phases of the development process (analysis, design, construction, testing, and deployment) for API-driven data applications, providing operational support
  • utilize in-depth specialty knowledge of API development for big data environments and analytics to analyze complex problems/issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments
  • ensure essential procedures are followed and help define operating standards and processes for API-driven data infrastructure
  • serve as an advisor or coach to new or junior analysts on API development and big data access best practices
  • operate with a limited level of direct supervision, exercising independence of judgment and autonomy
  • act as a Subject Matter Expert (SME) to senior stakeholders and/or other team members on data API technologies and their application in finance
What we offer:
  • flexibility to work with a global team across geographies and time zones

Data Engineer

We are looking for an experienced Data Engineer with deep expertise in Databrick...
Salary:
Not provided
Coherent Solutions
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field
  • 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow)
  • Strong proficiency in Python and/or Scala for data processing
  • Deep understanding of distributed data processing, data warehousing, and ETL concepts
  • Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage)
  • Solid knowledge of SQL and experience with large-scale relational and NoSQL databases
  • Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering
  • Experience with data governance, security, and compliance in cloud environments
  • Excellent problem-solving, communication, and leadership skills
  • English: Upper Intermediate level or higher
Job Responsibility:
  • Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow)
  • Architect and implement data lakehouse solutions, ensuring data quality, governance, and security
  • Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP)
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
  • Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage
  • Develop and maintain documentation, data models, and technical standards
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments
  • Stay current with emerging trends and technologies in data engineering and Databricks ecosystem
What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Senior Java and Scala Developer

The Fixed Income Data team is experiencing rapid growth, committed to delivering...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date
Until further notice
Requirements:
  • 3-5 years of demonstrable and relevant experience in software development
  • Strong focus on API development and big data solutions
  • Expertise in developing high-performance APIs for large-scale data platforms and distributed systems
  • Extensive hands-on experience with data distribution platforms like Apache Kafka
  • Experience with big data storage/querying systems (e.g., Trino, Pinot, Druid, Ignite)
  • Solid understanding of Java/Scala with focus on building high-performance, concurrent applications
  • Strong experience with Spring stack, particularly Spring Boot for building microservices
  • Expert-level understanding and demonstrable experience in REST API development
  • Experience with cloud-native applications and deployment via Kubernetes/OpenShift
  • Experience with CI/CD environment
Job Responsibility:
  • Design, develop, and implement highly scalable and resilient API services for data access and processing
  • Conduct feasibility studies, time and cost estimates for new API-driven data solutions
  • Monitor and control all phases of the development process (analysis, design, construction, testing, and deployment)
  • Serve as an advisor or coach to new or junior analysts on API development and big data access best practices
  • Act as a Subject Matter Expert (SME) to senior stakeholders on data API technologies and their application in finance
What we offer:
  • Career growth opportunities
  • Global workforce benefits
  • Well-being support
  • Work-life balance programs

Senior Data Engineer

At Blue Margin, we are on a mission to build the go-to data platform for PE-back...
Location:
United States, Fort Collins
Salary:
110000.00 - 140000.00 USD / Year
Blue Margin
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of professional experience in data engineering, with emphasis on Python & PySpark/Apache Spark
  • Proven ability to manage large datasets and optimize for speed, scalability, and reliability
  • Strong SQL skills and understanding of relational and distributed data systems
  • Experience with Azure Data Factory, Synapse Pipelines, Fivetran, Delta Lake, Microsoft Fabric, or Snowflake
  • Knowledge of data modeling, orchestration, and Delta/Parquet file management best practices
  • Familiarity with CI/CD, version control, and DevOps practices for data pipelines
  • Experience leveraging AI-assisted tools to accelerate engineering workflows
  • Strong communication skills: the ability to convey complex technical details to both engineers and business stakeholders
Job Responsibility:
  • Architect, design, and optimize large-scale data pipelines using tools like PySpark, SparkSQL, Delta Lake, and cloud-native tools
  • Drive efficiency in incremental/delta data loading, partitioning, and performance tuning
  • Lead implementations across Azure Synapse, Microsoft Fabric, and/or Snowflake environments
  • Collaborate with stakeholders and analysts to translate business needs into scalable data solutions
  • Evaluate and incorporate AI/automation to improve development speed, testing, and data quality
  • Oversee and mentor junior data engineers, establishing coding standards and best practices
  • Ensure high standards for data quality, security, and governance
  • Participate in solution design for client engagements, balancing technical depth with practical outcomes
What we offer:
  • Competitive pay
  • strong benefits
  • flexible hybrid work setup

Data Scientist

The Data Scientist plays a pivotal role in planning, executing, and delivering m...
Location:
United States, Camden
Salary:
Not provided
NTT DATA
Expiration Date
Until further notice
Requirements:
  • Master’s or PhD in Computer Science, Data Science, Engineering, Statistics, Applied Mathematics, Operations Research, or a related quantitative field
  • Specialization in ML, AI, cognitive science, or data science is highly preferred
  • 3-5 years of hands-on experience planning and executing end-to-end data science projects with demonstrated impact on clinical or operational outcomes in business environments
  • Advanced programming proficiency in Python or R with strong expertise in machine learning frameworks (scikit-learn, TensorFlow, PyTorch) and statistical analysis tools
  • Expertise in machine learning and statistical techniques including supervised/unsupervised learning, deep learning, NLP, computer vision, regression models, ensemble methods, and experimental design (A/B testing)
  • Strong data engineering capabilities including SQL/NoSQL database programming, distributed computing tools (Hadoop, Spark, Kafka), data pipeline development, and experience with cloud platforms (AWS, Azure, GCP)
  • Production ML and MLOps experience including model deployment, monitoring, containerization (Docker, Kubernetes), version control, and applying DevOps principles to data science workflows
  • Data visualization and communication excellence with ability to create compelling dashboards (Tableau, Power BI), translate complex technical findings into actionable insights, and present to diverse audiences from executives to frontline staff
  • Cross-functional collaboration skills with proven ability to work in agile environments, partner with stakeholders to align technical solutions with business objectives, and mentor junior team members
  • Healthcare domain knowledge preferred, particularly experience with Epic EHR systems, clinical workflows, and healthcare data standards, along with relevant certifications (Clarity/Caboodle, Google Cloud ML Engineer, AWS ML Specialist)
Job Responsibility:
  • Collect, clean, and analyze datasets from diverse internal and external sources, applying advanced data wrangling techniques
  • Acquire access to various databases and source systems (SQL, NoSQL, graph databases) and create data pipelines
  • Apply statistical analysis and visualization techniques to explore and prepare data
  • Design, develop, and validate machine learning, statistical, and optimization models
  • Select appropriate algorithms and models for AI/ML and test them for accuracy, robustness, and fairness
  • Perform feature selection and engineering
  • Integrate domain knowledge into ML solutions
  • Conduct controlled experiments (A/B and multivariate testing)
  • Collaborate with MLOps, data engineers, and IT to evaluate deployment options
  • Continuously monitor execution and health of production ML models

Apps Dev Tech Lead Analyst - Vice President

As a key member of our global development team, you will: Innovate & Develop: Pa...
Location:
United States, Irving
Salary:
125760.00 - 188640.00 USD / Year
Citi
Expiration Date
Until further notice
Requirements:
  • 6-10 years of progressive experience in systems analysis and programming of software applications
  • Strong proficiency in Java application technologies, including deep experience with TDD (Test-Driven Development), Spring framework, and Microservices architecture
  • Extensive hands-on experience with PySpark and advanced Python programming skills
  • Proven experience with Big Data ecosystems, including Cloudera and/or Databricks
  • Hands-on experience with distributed query engines like Starburst (Trino/Presto)
  • Proficient in designing and managing complex workflows using scheduling tools, particularly Apache Airflow
  • Strong expertise in SQL and experience with relational and non-relational databases
  • Excellent knowledge of algorithms and data structures, design patterns
  • Strong Java experience: Java core, collections, concurrency, streams
  • Frameworks and APIs: Spring (Core, Batch, Integration, MVC, Boot, Data), Hibernate, Jackson, JAX-RS, JPA, JAXB
Job Responsibility:
  • Innovate & Develop: Partner closely with project managers, business stakeholders, and senior managers to translate complex business requirements into well-architected technical solutions
  • Drive cross-functional collaboration with diverse management teams
  • Proactively identify, define, and implement necessary system enhancements
  • Complex Problem Resolution: Lead the resolution of high-impact problems and critical projects
  • Consult with users, clients, and other technology groups on issues
  • Technical Architecture & Standards Leadership: Serve as a subject matter expert in application programming
  • Leverage an advanced understanding of system flow to develop and enforce robust standards for coding, testing, debugging, and implementation
  • Mentorship & Talent Development: Act as a trusted advisor and coach for mid-level developers and analysts
  • Provide technical guidance, mentorship, and code reviews to junior data engineers
  • Operational Excellence: Ensure adherence to best practices and essential procedures
What we offer:
  • medical, dental & vision coverage
  • 401(k)
  • life, accident, and disability insurance
  • wellness programs
  • paid time off packages including planned time off (vacation), unplanned time off (sick leave), and paid holidays

Senior PySpark Data Engineer

We are seeking a highly skilled and experienced Senior PySpark Data Engineer to ...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
  • 6+ years of professional experience in a data engineering role
  • Extensive hands-on experience with PySpark and advanced Python programming skills
  • Proven experience with Big Data ecosystems, including Cloudera and/or Databricks
  • Hands-on experience with distributed query engines like Starburst (Trino/Presto)
  • Proficient in designing and managing complex workflows using scheduling tools, particularly Apache Airflow
  • Strong expertise in SQL and experience with relational and non-relational databases
  • Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling techniques
  • Experience working in a Linux/Unix environment
  • GitHub and CI/CD pipelines
Job Responsibility:
  • Design, develop, and maintain robust, scalable, and high-performance data pipelines using PySpark
  • Develop, schedule, and monitor complex data workflows using orchestration tools like Apache Airflow
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality data solutions
  • Optimize and tune Spark jobs for performance and efficiency
  • Implement data quality checks and ensure data integrity across all data pipelines
  • Design and implement data models for optimal storage and retrieval
  • Mentor junior data engineers and promote best practices in data engineering
  • Ensure compliance with data governance and security policies
  • Troubleshoot and resolve data-related issues in a timely manner