BioCatch is the leader in Behavioral Biometrics, a technology that leverages machine learning to analyze an online user’s physical and cognitive digital behavior to protect individuals online. BioCatch’s mission is to unlock the power of behavior and deliver actionable insights to create a digital world where identity, trust, and ease coexist. Today, 32 of the world's largest 100 banks and 210 financial institutions in total rely on BioCatch Connect™ to combat fraud, facilitate digital transformation, and grow customer relationships. BioCatch’s Client Innovation Board, an industry-led initiative including American Express, Barclays, Citi Ventures, and National Australia Bank, helps BioCatch identify creative and cutting-edge ways to leverage the unique attributes of behavior for fraud prevention. With over a decade of analyzing data, more than 80 registered patents, and unparalleled experience, BioCatch continues to innovate to solve tomorrow’s problems.
Job Responsibilities:
Set the direction of our data architecture
Determine the right tools for the right jobs
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance
Optimize and monitor the team's cloud costs
Design and construct monitoring tools to ensure the efficiency and reliability of data processes
Implement CI/CD for data workflows
Requirements:
5+ years of experience in data engineering and big data at large scale
Extensive experience with the modern data stack: Snowflake, Delta Lake, Iceberg, BigQuery, Redshift
Experience with Kafka, RabbitMQ, or similar for real-time data processing
Experience with PySpark and Databricks
Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines
Hands-on experience with Docker and Kubernetes
Expertise in ETL development, data modeling, and data warehousing best practices
Knowledge of monitoring & observability (Datadog, Prometheus, ELK, etc)
Experience with infrastructure as code, deployment automation, and CI/CD practices using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins