Cloud Data Platform Engineer

Charles Schwab

Location:
Southlake, United States


Contract Type:
Not provided

Salary:

135,000.00 - 160,000.00 USD / Year

Job Description:

The Cloud Data Platform Engineer sits within the SAM Data Technology organization, with a strong emphasis on the Investment Data domain. The role owns the end-to-end data capabilities that power Schwab Asset Management's investment, operational, analytical, and regulatory use cases, spanning ingestion, domain modeling, platform frameworks, Data APIs, and Python-based visualization/UI layers.

Job Responsibility:

  • Design and build reusable data platform frameworks for ingestion, transformation, validation, and consumption
  • Establish standardized patterns for data pipelines, APIs, and visualization layers across SAMDA
  • Define best practices for schema evolution, versioning, error handling, and observability
  • Influence platform roadmap through hands‑on engineering leadership
  • Architect and implement complex, cloud‑native ETL/ELT pipelines supporting investment and analytical data
  • Build reliable workflows using GCS, Dataproc, Cloud Dataflow, Composer (Airflow), and Pub/Sub
  • Implement scalable transformations and curated layers in Snowflake and cloud data warehouses
  • Design enterprise‑grade investment data models using Kimball, relational, and domain‑driven design principles
  • Create operational and analytical data stores from the ground up, including taxonomies and canonical models
  • Ensure models support regulatory, performance, and investment analytics use cases
  • Design and implement Data APIs using Python (FastAPI / Flask) to expose curated investment datasets
  • Build scalable, secure RESTful services for analytical and operational consumers
  • Apply governance, access control, and data protection standards aligned with regulated environments
  • Develop Python‑based dashboards and UI applications using Streamlit, Dash, Panel, or similar frameworks
  • Create interactive visualizations using Plotly, Matplotlib, and Seaborn to support investment insights
  • Translate complex investment data into intuitive, self‑service analytical experiences
  • Build data and application services using Cloud Run, Cloud Functions, and Cloud SQL
  • Apply distributed processing frameworks such as Apache Spark, Beam, and Flink
  • Package and deploy data services, APIs, and UI components using Docker
  • Lead CI/CD design for pipelines, APIs, and visualization apps using Git, Bitbucket, Bamboo, Jenkins, and GitHub Actions
  • Implement automated testing, deployment, and release management patterns
  • Drive infrastructure automation using Terraform or Google Cloud Deployment Manager
  • Define and implement data quality frameworks, reconciliation checks, and monitoring standards
  • Proactively identify and resolve complex data, platform, and application issues
  • Act as a senior technical leader and mentor for data engineers across SAMDA
  • Lead design reviews and influence cross‑team engineering decisions
  • Communicate complex platform and investment data concepts to technical and business stakeholders
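
One of the responsibilities above calls for data quality frameworks and reconciliation checks. As a rough, hypothetical sketch of what such a check can look like in Python (the record layout, measure name, and tolerance are assumptions for illustration, not Schwab's actual framework):

```python
# Minimal reconciliation check: compare a source extract against a curated
# target layer on row count and a summed measure.
# Hypothetical record layout; "market_value" and the tolerance are assumed.

def reconcile(source_rows, target_rows, measure="market_value", tolerance=0.01):
    """Return a dict summarizing reconciliation results between two row lists."""
    source_total = sum(r[measure] for r in source_rows)
    target_total = sum(r[measure] for r in target_rows)
    diff = abs(source_total - target_total)
    return {
        "row_count_match": len(source_rows) == len(target_rows),
        "measure_diff": diff,
        "within_tolerance": diff <= tolerance,
    }

# Toy data standing in for a source extract and the curated layer
source = [{"security_id": "A", "market_value": 100.0},
          {"security_id": "B", "market_value": 250.5}]
target = [{"security_id": "A", "market_value": 100.0},
          {"security_id": "B", "market_value": 250.5}]

result = reconcile(source, target)
```

In a real pipeline the two row sets would come from the source system and the curated Snowflake layer, and the result dict would feed monitoring and alerting rather than be inspected by hand.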

Requirements:

  • Bachelor’s degree in Computer Science, Information Technology, or equivalent practical experience
  • 6–8 years of experience building cloud‑based data platforms and enterprise data solutions
  • Strong experience in the Investment or Asset Management data domain
  • Hands‑on expertise with Snowflake and GCP services (GCS, Cloud Run, Cloud Functions, Pub/Sub, Composer, Cloud SQL)
  • Advanced proficiency in Python for data engineering, API development, and visualization
  • Proven experience building REST APIs using Python frameworks (FastAPI, Flask, or equivalent)
  • Experience with Python visualization and UI frameworks (Streamlit, Dash, Panel, or similar)
  • Strong background with distributed processing frameworks (Spark, Beam, or Flink)
  • Expertise in CI/CD, containerization (Docker), and infrastructure as code (Terraform or GCP Deployment Manager)
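
To make the Data API requirement concrete, here is a framework-agnostic sketch of the kind of endpoint logic involved: a plain Python handler returning a curated dataset as JSON. The dataset, field names, and function are hypothetical; in practice FastAPI or Flask would wrap a function like this as a REST route backed by the curated warehouse layer.

```python
import json

# Hypothetical curated dataset; in production this would be queried from
# Snowflake or another curated layer, not held in an in-memory dict.
CURATED_HOLDINGS = {
    "FUND01": [
        {"security_id": "A", "weight": 0.6},
        {"security_id": "B", "weight": 0.4},
    ],
}

def get_holdings(fund_id: str) -> str:
    """Return a fund's holdings as a JSON string, or an error payload if unknown."""
    holdings = CURATED_HOLDINGS.get(fund_id)
    if holdings is None:
        return json.dumps({"error": "fund not found", "fund_id": fund_id})
    return json.dumps({"fund_id": fund_id, "holdings": holdings})

response = get_holdings("FUND01")
```

With FastAPI, roughly the same body would sit behind a route such as `@app.get("/funds/{fund_id}/holdings")` (an assumed path), returning the dict directly and raising an HTTP 404 for unknown funds.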

Nice to have:

  • Experience designing platform‑level frameworks adopted by multiple engineering teams
  • Deep understanding of regulated data environments, governance, lineage, and auditability
  • Strong grasp of modern data architecture and self‑service analytics patterns
  • Ability to influence platform strategy and mentor senior engineers
  • Excellent documentation and executive‑level communication skills

What we offer:
  • 401(k) with company match and Employee stock purchase plan
  • Paid time for vacation, volunteering, and 28-day sabbatical after every 5 years of service for eligible positions
  • Paid parental leave and family building benefits
  • Tuition reimbursement
  • Health, dental, and vision insurance

Additional Information:

Job Posted:
March 21, 2026

Expiration:
March 25, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Cloud Data Platform Engineer

Cloud Technical Architect / Data DevOps Engineer

The role involves designing, implementing, and optimizing scalable Big Data and ...
Location:
Bristol, United Kingdom
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements:
  • An organised and methodical approach
  • Excellent time keeping and task prioritisation skills
  • An ability to provide clear and concise updates
  • An ability to convey technical concepts to all levels of audience
  • Data engineering skills – ETL/ELT
  • Technical implementation skills – application of industry best practices & design patterns
  • Technical advisory skills – experience in researching technological products / services with the intent to provide advice on system improvements
  • Experience working in hybrid environments with both classical and DevOps ways of working
  • Excellent written & spoken English skills
  • Excellent knowledge of Linux operating system administration and implementation
Job Responsibility:
  • Detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling, self-healing systems
  • Participating in the full lifecycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
  • Providing technical thought-leadership and advisory on technologies and processes at the core of the data domain, as well as data domain adjacent technologies
  • Engaging and collaborating with both internal and external teams and be a confident participant as well as a leader
  • Assisting with solution improvement activities driven either by the project or service
  • Support the design and development of new capabilities, preparing solution options, investigating technology, designing and running proof of concepts, providing assessments, advice and solution options, providing high level and low level design documentation
  • Cloud Engineering capability to leverage Public Cloud platform using automated build processes deployed using Infrastructure as Code
  • Provide technical challenge and assurance throughout development and delivery of work
  • Develop re-usable common solutions and patterns to reduce development lead times, improve commonality, and lower Total Cost of Ownership
  • Work independently and/or within a team using a DevOps way of working
What we offer:
  • Extensive social benefits
  • Flexible working hours
  • Competitive salary
  • Shared values
  • Equal opportunities
  • Work-life balance
  • Evolving career opportunities
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
Employment Type: Full-time

Internship - Cloud Platform Engineering

The Greenlake Platform Cloud Infrastructure Engineer (PIE) team is looking for a...
Location:
Galway, Ireland
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements:
  • Currently enrolled in or recent graduate of a degree program in Computer Science, Software Engineering, Information Technology, or related field
  • Familiarity with programming languages (Python, Java, or similar), cloud concepts, and fundamental networking is a plus
  • Demonstrates an interest in troubleshooting and problem-solving with guidance from mentors
  • Strong written and verbal communication skills, with an eagerness to ask questions and learn
Job Responsibility:
  • Support the development team by helping with the design, prototyping, and implementation of cloud-based solutions, under the guidance of experienced developers
  • Apply foundational technical skills to analyze data, troubleshoot issues, and suggest improvements to existing cloud infrastructure
  • Work closely with project managers, senior developers, and other interns to help ensure smooth delivery, deployment, and operation of cloud projects
  • Learn and assist in monitoring cloud environments and tools, understanding key metrics and alerting systems to ensure system reliability and performance
  • Contribute to the documentation of code, processes, and procedures to support knowledge sharing across the team
What we offer:
  • Initial extensive onboarding to support you with adjusting to the role
  • Ongoing learning and development throughout the program
  • Mentorship from at least one senior team member; after two years in the program you can grow into a true professional with valuable relationships and international working experience
  • Competitive salary and great benefits
  • Great work-life balance including hybrid working and Wellness Fridays initiative
Employment Type: Full-time

Cloud Platform Engineer

As a Cloud Platform Engineer, you will play a key role in implementing and maint...
Location:
Gurugram, India
Salary:
Not provided
Circle K
Expiration Date
Until further notice
Requirements:
  • Bachelor’s degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 3+ years of hands-on experience in cloud engineering, preferably with Microsoft Azure
  • Experience with CI/CD tools such as GitHub Actions or Azure DevOps
  • Strong understanding of cloud infrastructure components including networking, storage, compute, and identity
  • Experience with Infrastructure as Code (IaC) using Terraform or similar tools
  • Proficiency in scripting or programming languages such as Python, PowerShell, or Bash
  • Familiarity with containerization (Docker) and orchestration (Kubernetes)
  • Experience with monitoring and logging tools (e.g., Azure Monitor, Log Analytics)
  • Experience with FinOps practices for cloud cost optimization
  • Understanding of Zero Trust architecture and its implementation in cloud environments
Job Responsibility:
  • Implement and maintain Azure-based cloud platform components to support data engineering and BI workloads
  • Collaborate with architects to translate high-level designs into scalable, secure, and cost-effective infrastructure
  • Automate infrastructure provisioning using tools like Terraform and manage configurations via Git
  • Monitor and optimize cloud resources for performance, availability, and cost-efficiency
  • Support CI/CD pipelines and deployment automation for platform services
  • Ensure compliance with security and governance standards, including RBAC, encryption, Azure policies for governance, and network controls
  • Troubleshoot platform issues and provide operational support for cloud services
  • Participate in technical evaluations and proof-of-concepts for new tools and services
  • Contribute to platform documentation, runbooks, and knowledge sharing across teams
  • Manage ServiceNow intake workflows and access provisioning for cloud services
Employment Type: Full-time

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India: Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
Employment Type: Full-time

Senior Data Engineer - Platform Enablement

SoundCloud empowers artists and fans to connect and share through music. Founded...
Location:
United States: New York; Atlanta; East Coast
Salary:
160,000.00 - 210,000.00 USD / Year
SoundCloud
Expiration Date
Until further notice
Requirements:
  • 7+ years of experience in data engineering, analytics engineering, or similar roles
  • Expert-level SQL skills, including performance tuning, advanced joins, CTEs, window functions, and analytical query design
  • Proven experience with Apache Airflow (designing DAGs, scheduling, task dependencies, monitoring, Python)
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience deploying and managing services in cloud environments, preferably GCP or AWS
  • Excellent communication skills and a collaborative mindset
Job Responsibility:
  • Develop and optimize SQL data models and queries for analytics, reporting, and operational use cases
  • Design and maintain ETL/ELT workflows using Apache Airflow, ensuring reliability, scalability, and data integrity
  • Collaborate with analysts and business teams to translate data needs into efficient, automated data pipelines and datasets
  • Own and enhance data quality and validation processes, ensuring accuracy and completeness of business-critical metrics
  • Build and maintain reporting layers, supporting dashboards and analytics tools (e.g., Looker or similar)
  • Troubleshoot and tune SQL performance, optimizing queries and data structures for speed and scalability
  • Contribute to data architecture decisions, including schema design, partitioning strategies, and workflow scheduling
  • Mentor junior engineers, advocate for best practices and promote a positive team culture
What we offer:
  • Comprehensive health benefits including medical, dental, and vision plans, as well as mental health resources
  • Robust 401k program
  • Employee Equity Plan
  • Generous professional development allowance
  • Creativity and Wellness benefit
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Various snacks, goodies, and 2 free lunches weekly when at the office
Employment Type: Full-time

Senior Principal Data Platform Software Engineer

We’re looking for a Sr Principal Data Platform Software Engineer (P70) to be a k...
Location:
Not provided
Salary:
239,400.00 - 312,550.00 USD / Year
Atlassian
Expiration Date
Until further notice
Requirements:
  • 15+ years in Data Engineering, Software Engineering, or related roles, with substantial exposure to big data ecosystems
  • Demonstrated experience building and operating data platforms or large‑scale data services in production
  • Proven track record of building services from the ground up (requirements → design → implementation → deployment → ongoing ownership)
  • Hands‑on experience with AWS, GCP (e.g., compute, storage, data, and streaming services) and cloud‑native architectures
  • Practical experience with big data technologies, such as Databricks, Apache Spark, AWS EMR, Apache Flink, or StarRocks
  • Strong programming skills in one or more of: Kotlin, Scala, Java, Python
  • Experience leading cross‑team technical initiatives and influencing senior stakeholders
  • Experience mentoring Staff/Principal engineers and lifting the technical bar for a team or org
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience
Job Responsibility:
  • Design, develop and own delivery of high quality big data and analytical platform solutions aiming to solve Atlassian’s needs to support millions of users with optimal cost, minimal latency and maximum reliability
  • Improve and operate large‑scale distributed data systems in the cloud (primarily AWS, with increasing integration with GCP and Kubernetes‑based microservices)
  • Drive the evolution of our high-performance analytical databases and their integrations with products, cloud infrastructures (AWS and GCP), and isolated cloud environments
  • Help define and uplift engineering and operational standards for petabyte scale data platforms, with sub‑second analytic queries and multi‑region availability (coding guidelines, code review practices, observability, incident response, SLIs/SLOs)
  • Partner across multiple product and platform teams (including Analytics, Marketplace/Ecosystem, Core Data Platform, ML Platform, Search, and Oasis/FedRAMP) to deliver company‑wide initiatives that depend on reliable, high‑quality data
  • Act as a technical mentor and multiplier, raising the bar on design quality, code quality, and operational excellence across the broader team
  • Design and implement self‑healing, resilient data platforms with strong observability, fault tolerance, and recovery characteristics
  • Own the long‑term architecture and technical direction of Atlassian’s product data platform with projects that are directly tied to Atlassian’s company-level OKRs
  • Be accountable for the reliability, cost efficiency, and strategic direction of Atlassian’s product analytical data platform
  • Partner with executives and influence senior leaders to align engineering efforts with Atlassian’s long-term business objectives
What we offer:
  • Health and wellbeing resources
  • Paid volunteer days
Employment Type: Full-time

Principal Data Platform Engineer

Are you passionate about data platforms and tools? Are you an open-minded, struc...
Location:
Bengaluru, India
Salary:
Not provided
Atlassian
Expiration Date
Until further notice
Requirements:
  • Deep understanding of big data challenges
  • Experience building solutions using public cloud offerings such as Amazon Web Services
  • Experience with Big Data processing and storage technologies such as Spark, S3, DBT
  • SQL knowledge
  • Solid understanding and experience in building RESTful APIs and micro services, e.g. with Flask
  • Experience with test automation and ensuring data quality across multiple datasets used for analytical purposes
  • Experience with continuous delivery, continuous integration, and source control system such as Git
  • Expert-level programming skills in an OO programming language such as Java, Kotlin, or Python
  • Degree in Computer Science, EE, or related STEM discipline
Job Responsibility:
  • Partner with analytical teams, data engineers, and data scientists across various initiatives to understand gaps, and bring your findings back to the team to build these capabilities
  • In this role, you will be part of the Analytics Management Platform team under the Data Platform
  • The team focuses on building the foundation for Atlassian analytical platforms
  • We are creating frictionless data experiences for data products builders by offering different services and frameworks that help engineers to move fast and enable their users to generate valuable insights from data
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type: Full-time

Cloud Data Platform Architect

Circle K is transforming our Data Engineering and BI platform to match our busin...
Location:
Charlotte, United States
Salary:
Not provided
Circle K
Expiration Date
Until further notice
Requirements:
  • 7+ years of professional experience in designing & architecting Data & Analytics solutions, with a focus on Azure-based platforms and enterprise-scale systems
  • Working experience in designing and architecting solutions that leverage Databricks and Snowflake
  • Hands-on experience with Azure Cloud services (Azure Synapse/SQL Server, ADF or close equivalents)
  • Working experience with Microsoft Power BI (Power BI Platform, AAS/Tabular) integration with Azure-based Platforms
  • Expert understanding of relational databases, Data Warehouse & Data Lake modeling techniques & concepts, ETL/ELT processing patterns, and Big Data technologies
  • Practical experience in designing systems to handle large data volumes
  • Practical experience in designing systems for large-scale data processing with a focus on Azure performance optimization and cost management
  • Working knowledge of Python, PySpark, SQL & T-SQL
  • Working experience in designing and architecting solutions that comply with data security industry standards and regulations (e.g., GDPR, PCI), including RBAC, data encryption, and monitoring
  • Microsoft Azure Certification required
Job Responsibility:
  • Designing, building, and maintaining robust data platforms and solutions on Azure
  • Optimizing data delivery and ensuring the architecture aligns with business objectives
  • Leading architectural decisions and establishing governance standards
  • Collaborating across teams to ensure seamless data flows and scalable solutions
  • Driving the adoption and usage of Azure Databricks, Snowflake, Microsoft Fabric, and Power BI in the data platform
  • Performing architectural assessments and defining solutions to produce detailed design documents
  • Providing technical direction on Azure platform services
  • Mentoring Data Engineering and Data Science teams
  • Providing technical support for platform performance tuning and optimization activities
  • Participating in the creation and maintenance of technical roadmaps
Employment Type: Full-time