Data Engineer

IQZ Systems

Location:
Sri Lanka

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We’re looking for a hands-on Data Engineer with 2–5 years of experience to build reliable data pipelines, optimize data models, and support analytics and product use cases. You’ll work across batch and streaming workloads in the cloud, ensuring data is accurate, timely, and cost-efficient.

Job Responsibility:

  • Build Pipelines: Develop, test, and deploy scalable ETL/ELT pipelines for batch and streaming use cases
  • Model Data: Design clean, query-optimized data models (star schema, SCD, slowly changing logic as needed)
  • SQL Excellence: Author performant SQL for transformations, materializations, and reports
  • Orchestrate Workflows: Implement DAGs/workflows with Airflow/Prefect and maintain SLAs and retries (see the sketch after this list)
  • Data Quality: Add validation checks, schema enforcement, and alerting (e.g., Great Expectations)
  • Performance & Cost: Tune Spark/warehouse queries, optimize storage formats/partitions, and control costs
  • Collaboration: Work with Analytics, Data Science, and Product to translate requirements into data models
  • Ops & Reliability: Monitor pipelines, debug failures, and improve observability and documentation
  • Security & Compliance: Handle data responsibly (PII), apply RBAC and least-privilege access, and manage secrets properly
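
For illustration, a minimal sketch of the kind of DAG the orchestration bullet describes, with the retries and SLA settings it mentions. This assumes Airflow 2.4+ and uses hypothetical extract/load callables and a hypothetical pipeline name; nothing here is prescribed by the posting:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders(**context):
        ...  # hypothetical: pull yesterday's orders from the source system

    def load_orders(**context):
        ...  # hypothetical: write transformed rows to the warehouse

    with DAG(
        dag_id="orders_daily",  # hypothetical pipeline name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args={
            "retries": 2,  # retry transient failures before failing the task
            "retry_delay": timedelta(minutes=5),
            "sla": timedelta(hours=2),  # flag runs that miss the delivery window
        },
    ):
        extract = PythonOperator(task_id="extract", python_callable=extract_orders)
        load = PythonOperator(task_id="load", python_callable=load_orders)
        extract >> load  # load runs only after a successful extract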

Requirements:

  • 2+ years of data engineering experience
  • Solid Python (pandas, PySpark, or similar data frameworks) with modular, testable code
  • Strong SQL across analytical databases/warehouses (e.g., Snowflake/BigQuery/Redshift/Azure Synapse)
  • Experience building production-grade pipelines and transformations
  • Exposure to at least one cloud (AWS/Azure/GCP/Databricks) for data storage and compute
  • Hands-on with Spark (PySpark) or equivalent distributed processing
  • Airflow or Prefect (DAGs, schedules, sensors, retries, SLAs)
  • Git workflows and basic CI for data jobs
  • Good understanding of Parquet/ORC/Avro, partitioning, and file layout (see the sketch after this list)
  • Familiarity with Looker/Power BI/Tableau and semantic modeling
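
As a concrete reference for the file-layout point above, a minimal PySpark sketch that writes a date-partitioned Parquet dataset. The paths, dataset, and column names are hypothetical, not taken from the posting:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events_compaction").getOrCreate()

    # Hypothetical raw source; any DataFrame with a timestamp column works.
    events = spark.read.json("s3://example-bucket/raw/events/")

    (
        events
        .withColumn("event_date", F.to_date("event_ts"))  # derive the partition key
        .repartition("event_date")      # group rows so each partition writes cleanly
        .write
        .mode("overwrite")
        .partitionBy("event_date")      # directory-per-day layout for partition pruning
        .parquet("s3://example-bucket/curated/events/")  # columnar, splittable output
    )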

Nice to have:

  • Familiarity with data virtualization tools such as Denodo
  • Kafka/Kinesis/Event Hubs and basics of stream processing (Flink/Spark Structured Streaming; see the sketch after this list)
  • Experience with dbt for SQL transformations, testing, and documentation
  • Collibra, Alation, Ataccama, Great Expectations, Soda, OpenLineage/Marquez
  • Docker basics; Kubernetes exposure is a plus
  • Terraform/CloudFormation for data infra provisioning
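
For the streaming item above, a minimal sketch of reading a Kafka topic with Spark Structured Streaming. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, and sink paths are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_stream").getOrCreate()

    orders = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker address
        .option("subscribe", "orders")                        # hypothetical topic
        .load()
    )

    # Kafka delivers values as bytes; cast to string before parsing downstream.
    parsed = orders.select(F.col("value").cast("string").alias("payload"))

    query = (
        parsed.writeStream
        .format("parquet")
        .option("path", "s3://example-bucket/streams/orders/")       # hypothetical sink
        .option("checkpointLocation", "s3://example-bucket/chk/orders/")
        .trigger(processingTime="1 minute")  # micro-batch every minute
        .start()
    )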

What we offer:

  • A dynamic and collaborative work environment
  • Opportunities for professional growth and development
  • Competitive compensation and benefits
  • The chance to shape impactful products that solve real-world problems
  • Exposure to cutting-edge technologies and tools, with opportunities to innovate and explore new business solutions

Additional Information:

Job Posted:
December 09, 2025

Employment Type:
Full-time