The Technology Data Analyst will turn raw system, application, and infrastructure data into clear, actionable insights that drive reliability, performance, and scalability. You will partner closely with engineering, DevOps, SRE, and product teams to monitor, troubleshoot, and optimize our technical platforms.
Job Responsibilities:
Collect, clean, and analyze large datasets from logs, metrics, traces, and business systems (ELK, Prometheus, Grafana, Datadog, Snowflake, BigQuery, etc.)
Build and maintain interactive dashboards, automated reports, and real-time visualizations (Looker, Tableau, Metabase, Superset, Power BI, or custom Streamlit/Dash apps)
Identify trends, anomalies, and performance bottlenecks in infrastructure, applications, and user-facing systems
Perform root-cause analysis for incidents, outages, and latency spikes using data-driven methods
Support capacity planning, cost optimization, and scaling decisions with accurate forecasting and usage modeling
Collaborate with developers, SREs, and product managers to define KPIs, instrumentation needs, and data requirements
Ensure data quality, consistency, and integrity across monitoring, logging, and analytics platforms
Contribute to lightweight predictive modeling, alerting rules, and automation initiatives (anomaly detection, forecasting, auto-scaling triggers)
Document methodologies, findings, and recommendations in clear, executive-ready formats
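To give candidates a concrete feel for the "lightweight anomaly detection" work above, here is a minimal sketch in Python/pandas (both named in the requirements). It flags points in a latency series whose rolling z-score exceeds a threshold; the function name, window size, and threshold are illustrative choices, not part of any specific stack used here.

```python
import pandas as pd


def flag_anomalies(latency_ms: pd.Series,
                   window: int = 10,
                   z_thresh: float = 3.0) -> pd.Series:
    """Flag points whose z-score against the preceding window exceeds z_thresh."""
    rolling = latency_ms.rolling(window, min_periods=window)
    # shift(1) so each point is scored against the *prior* window only,
    # preventing a spike from inflating its own baseline
    mean = rolling.mean().shift(1)
    std = rolling.std().shift(1)
    z = (latency_ms - mean) / std
    return z.abs() > z_thresh
```

In practice a rule like this would feed an alerting pipeline (e.g., a Prometheus recording rule or a Datadog monitor) rather than run ad hoc, but the statistical core is the same.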
Requirements:
Bachelor’s degree in Data Analytics, Computer Science, Statistics, or equivalent experience
3–6+ years of hands-on data analysis experience in a technology/infrastructure environment
Advanced SQL (window functions, CTEs, performance tuning) and proficiency in Python (pandas, numpy) or R for data manipulation
Proven experience building dashboards and reports with at least one major BI tool (Looker, Tableau, Power BI, Metabase)
Strong grasp of systems metrics: CPU/memory/disk/network, request latency, error rates, saturation (USE/RED method, Golden Signals)
Familiarity with modern observability stacks (Prometheus + Grafana, Datadog, New Relic, OpenTelemetry, Jaeger)
Experience with log analysis (ELK/EFK, Loki, Splunk) and time-series databases (InfluxDB, TimescaleDB)
Excellent communication skills – ability to translate complex technical data into clear insights for both engineers and leadership
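As a small illustration of the Golden Signals / RED-style metrics the role works with, the following hedged Python/pandas sketch summarizes traffic, errors, and latency from a request-log DataFrame. The column names (`status`, `latency_ms`) and the 5xx error convention are assumptions for the example, not a prescribed schema.

```python
import pandas as pd


def golden_signals(df: pd.DataFrame) -> dict:
    """Summarize traffic, errors, and latency (three of the four Golden Signals)
    from a request log with 'status' and 'latency_ms' columns."""
    return {
        "requests": len(df),
        # treat HTTP 5xx responses as errors; mean of a boolean mask is a rate
        "error_rate": float((df["status"] >= 500).mean()),
        "p50_latency_ms": float(df["latency_ms"].quantile(0.50)),
        "p95_latency_ms": float(df["latency_ms"].quantile(0.95)),
    }
```

The fourth signal, saturation, usually comes from infrastructure metrics (CPU, memory, queue depth) rather than request logs, which is why it is omitted here.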
Nice to have:
Experience with cloud cost analysis and optimization (AWS Cost Explorer, GCP Billing, FinOps)