At Bazaarvoice, we create smart shopping experiences. Through our expansive global network, product-passionate community & enterprise technology, we connect thousands of brands and retailers with billions of consumers. Our solutions enable brands to connect with consumers and collect valuable user-generated content at an unprecedented scale. This content achieves global reach by leveraging our extensive and ever-expanding retail, social & search syndication network. And we make it easy for brands & retailers to gain valuable business insights from real-time consumer feedback with intuitive tools and dashboards. The result is smarter shopping: loyal customers, increased sales, and improved products.
Job Responsibilities:
Designing, building, and supporting large-scale, distributed data systems that drive our organization's data infrastructure forward and power our products and services
Developing data pipelines
Optimizing data storage and retrieval processes
Ensuring the reliability and scalability of our data architecture
Collaborating closely with cross-functional teams to understand data requirements, implement solutions, and troubleshoot issues as they arise
Advocating for and implementing software engineering best practices to ensure the efficiency, maintainability, and robustness of our data systems
Requirements:
BSc in Computer Science or related discipline
7+ years of experience designing and building robust, scalable, distributed data systems and pipelines using open-source and public cloud technologies
Strong experience with data orchestration tools: e.g. Apache Airflow, Dagster
Experience with big data storage and processing technologies: e.g. dbt, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL)
Knowledge of event-driven architectures and streaming technologies: e.g. Apache Kafka, Kafka Streams, Apache Flink
Experience with public cloud environments and infrastructure-as-code tooling: e.g. AWS, GCP, Azure, Terraform
Strong knowledge of software engineering practices: e.g. testing, CI/CD (Jenkins, GitHub Actions), agile development, git/version control, containers, etc.
Strong technical leadership, problem-solving skills, and analytical thinking
Excellent communication and collaboration skills and the ability to work effectively with cross-functional teams
Passion for staying up to date with emerging data engineering technologies and trends