At SQA Group, we help companies leverage data not just to inform decisions, but to power purpose. Whether that means helping our clients build future-ready data foundations designed for scale and impact, crafting modern data platforms and pipelines that power real-time insights, or surfacing the metrics that truly matter, we help leaders unlock the true power of data modernization and humanization. We’re always looking for consultative, hands-on practitioners to support our in-house Data & Advanced Analytics Solutions team on client projects and engagements.
Job Responsibilities:
Design, build, and integrate data pipelines from various sources (a minimal sketch of such a pipeline follows this list)
Assemble large, complex data sets that meet functional and non-functional business requirements
Ensure that data models run reliably in production
Build and maintain architecture to support large-volume data processing and storage, including big data workloads
Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
Manage data warehouses and lakes, data applications, and data production systems, including data curation, pipeline management, and delivery to analysts, data scientists, and other users
Integrate data into the data warehouse and data lake using ETL and ELT processes while maintaining data integrity throughout
Build the infrastructure required for optimal extraction, transformation, and loading of data from a variety of sources, using SQL and NoSQL databases, cloud and on-premises services, and data integration tools
Build analytical tools that utilize the data pipeline, providing actionable insights into key business performance metrics, including operational efficiency and customer acquisition
Work directly with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs and assist with data-related technical issues
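For illustration, here is a minimal sketch of the kind of ETL pipeline described above, written as an Apache Airflow DAG in Python. The file paths, column name, and transformation logic are hypothetical placeholders for this posting, not SQA specifics.

    # Minimal illustrative ETL DAG (Airflow 2.4+ API). Paths, the event_id
    # column, and the transformation are hypothetical placeholders.
    from datetime import datetime

    import pandas as pd
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    RAW_PATH = "/tmp/raw_events.csv"          # assumed source location
    STAGED_PATH = "/tmp/extracted.parquet"    # intermediate staging file
    CLEAN_PATH = "/tmp/clean_events.parquet"  # assumed warehouse staging target

    def extract() -> None:
        # Pull raw data from a source system; stubbed as a local CSV read.
        df = pd.read_csv(RAW_PATH)
        df.to_parquet(STAGED_PATH)

    def transform() -> None:
        # Enforce basic data integrity: drop duplicates and null keys.
        df = pd.read_parquet(STAGED_PATH)
        df = df.drop_duplicates().dropna(subset=["event_id"])
        df.to_parquet(CLEAN_PATH)

    def load() -> None:
        # Deliver curated data to downstream users (stubbed as a row count).
        df = pd.read_parquet(CLEAN_PATH)
        print(f"Loaded {len(df)} rows to the warehouse staging area")

    with DAG(
        dag_id="example_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Run the stages in order: extract -> transform -> load.
        extract_task >> transform_task >> load_task

In a real engagement, the extract and load steps would talk to actual source systems and a warehouse; the skeleton simply shows how pipeline stages are defined, ordered, and scheduled.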
Requirements:
Understanding of relational and non-relational data models; structured, semi-structured, and unstructured data; and the unique challenges presented by big data sets
Experience with big data tools such as Hadoop, Kafka, and Spark (see the sketch after this list)
Experience with NoSQL databases, including Cosmos DB, Couchbase, Cassandra, MongoDB, or Google Firestore
Experience with SQL databases, including PostgreSQL, MySQL, SQL Server, or Oracle
Experience with workflow management or pipeline tools such as Airflow, Luigi, or Azkaban
Experience with cloud services, including AWS (Redshift, RDS, EMR, or EC2) and Azure (Data Factory, Databricks, or Stream Analytics)
Experience with data integration and ETL/ELT tools such as dbt, Fivetran, or Talend
Experience with object-oriented programming languages, including Scala, C++, Java, or Python
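To give a flavor of the Spark experience listed above, here is a minimal PySpark sketch that reads semi-structured JSON events, computes a customer-acquisition metric, and writes partitioned Parquet. The bucket paths, event schema, and column names are assumptions for illustration only.

    # Illustrative PySpark job; paths, schema, and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example_metrics").getOrCreate()

    # Read semi-structured JSON events (schema inferred for brevity).
    events = spark.read.json("s3a://example-bucket/raw/events/")

    # Surface a business metric: daily acquired customers per channel.
    daily_signups = (
        events
        .filter(F.col("event_type") == "signup")
        .groupBy(F.to_date("timestamp").alias("day"), "channel")
        .agg(F.countDistinct("user_id").alias("new_customers"))
    )

    # Write partitioned Parquet for downstream analysts and BI tools.
    daily_signups.write.mode("overwrite").partitionBy("day").parquet(
        "s3a://example-bucket/curated/daily_signups/"
    )

    spark.stop()

Partitioning the output by day keeps downstream loads and analyst queries cheap, since readers can prune to just the dates they need.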