
AWS Big Data Architect


Robert Half


Location:
United States, Philadelphia



Contract Type:
Not provided


Salary:
Not provided

Job Description:

We are seeking a highly skilled AWS Big Data Architect / Senior Data Engineer to design, develop, and deliver scalable Big Data Warehouse solutions. This is a hands-on role suited for someone who is passionate about technology, thrives in a collaborative environment, and can work effectively with both technical and non-technical stakeholders. The ideal candidate excels in fast-paced settings and is committed to producing high-quality, impactful results. This role offers the opportunity to collaborate with engineering teams across the enterprise and influence broader data and technology strategies.

Job Responsibility:

  • Design and develop scalable Big Data Warehouse solutions across the full data supply chain
  • Build and implement metadata management solutions
  • Create and maintain technical documentation, user documentation, data models, data dictionaries, glossaries, process flows, and architecture diagrams
  • Enhance and expand the enterprise Data Lake environment
  • Solve complex data integration challenges across multiple systems
  • Design and execute strategies for real-time data analysis and decision-making
  • Collaborate with business partners, analysts, developers, architects, and engineers to support ongoing data quality initiatives
  • Work closely with Data Science teams to improve actionable insights
  • Continuously expand knowledge of new tools, platforms, and technologies

Requirements:

  • Strong background in data management, data access, Big Data, Data Marts, and Data Warehousing
  • Proficiency in SQL, Spark SQL, and DataFrames
  • Experience with modern data warehousing concepts using technologies such as Redshift, Spark, Hadoop, and web services
  • Experience in data architecture and data assembly
  • Knowledge of Data Governance and Data Security practices
  • Hands-on experience with data integration tools (e.g., Talend preferred; Cascading a plus)
  • Experience with scripting languages for data manipulation
  • Experience with Business Intelligence tools, MDM, XML, SOA/Web Services
  • Exposure to Data Science technologies and toolsets
  • Bachelor’s or Master’s degree in Computer Science, Data Processing, or equivalent work experience
  • Strong background in Data Warehousing or related analytical environments
  • Proficiency in Java programming and building frameworks
  • Hands-on experience with Hadoop and Spark
  • Experience with Amazon EMR/EC2 or equivalent cloud technologies
  • Minimum 2 years of experience with Python
  • Experience with Bitbucket and solid understanding of Git fundamentals
  • Familiarity with Linux environments
  • Experience with Jenkins and CI/CD pipelines
  • Strong understanding of core computer science fundamentals
  • Experience with AWS services such as Aurora, Athena, EMR, Redshift, and S3
  • Experience with Postgres and MySQL databases
  • Excellent organizational, communication, and project management skills

What we offer:
  • medical
  • vision
  • dental
  • life and disability insurance
  • eligibility to enroll in the company 401(k) plan

Additional Information:

Job Posted:
March 01, 2026

Employment Type:
Full-time
Work Type:
Hybrid work



Similar Jobs for AWS Big Data Architect

Solution Architect - Big Data

The Solution Architect Big Data is a strategic professional who stays abreast of...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 12+ years' experience in Big Data and/or Public Cloud
  • 8 years' experience working on Big Data technologies: Hadoop, HDFS, Hive, Spark, Impala, etc.
  • Technical expertise in the financial services industry and/or regulatory environments
  • Excellent knowledge of and experience in architecting cloud-native solutions
  • Experience with migrating on-prem applications to Cloud Architectures or developing cloud native applications for any of the following: AWS, Azure, GCP, OpenShift
  • Ability to work across technology stacks and perform R&D on new technologies
  • Proficiency in one or more programming languages such as Java or Python
  • Consistently demonstrates clear and concise written and verbal communication
  • Management and prioritization skills
  • Ability to develop working relationships
Job Responsibility:
  • Executes the architectural vision for all IT systems through major, complex IT architecture projects, and ensures that architecture conforms to enterprise blueprints
  • Develops technology road maps, while keeping up-to-date with emerging technologies, and recommends business directions based on these technologies
  • Provides technical leadership and is responsible for developing components of, or the overall systems design
  • Translates complex business problems into sound technical solutions
  • Applies hardware engineering and software design theories and principles in researching, designing, and developing product hardware and software interfaces
  • Provides integrated systems planning and recommends innovative technologies that will enhance the current system
  • Recommends appropriate desktop, computer platform, and communication links required to support IT goals and strategy
  • Exhibits good knowledge of how own specialism contributes to the business and a good understanding of competitors' products and services
  • Acts as an advisor or mentor to junior team members
What we offer:
  • Global Benefits
  • We bring the best to our people
  • We put our employees first and provide the best-in-class benefits they need to be well, live well and save well

Data Engineer (AWS)

Fyld is a Portuguese consulting company specializing in IT services. We bring hi...
Location:
Portugal, Lisboa
Salary:
Not provided
Fyld
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or related
  • Relevant certifications in AWS, such as AWS Certified Solutions Architect, AWS Certified Developer, or AWS Certified Data Analytics
  • Hands-on experience with AWS services, especially those related to Big Data and data analytics, such as Amazon Redshift, Amazon EMR, Amazon Athena, Amazon Kinesis, and AWS Glue, among others
  • Familiarity with data storage and processing services on AWS, including Amazon S3, Amazon RDS, Amazon DynamoDB, and AWS Lambda
  • Proficiency in programming languages such as Python, Scala, or Java for developing data pipelines and automation scripts
  • Knowledge of distributed data processing frameworks, such as Apache Spark or Apache Flink
  • Experience in data modeling, cleansing, transformation, and preparation for analysis
  • Ability to work with different types of data, including structured, unstructured, and semi-structured data
  • Familiarity with data architecture concepts such as data lakes, data warehouses, and data pipelines (not mandatory)
  • Knowledge of security and compliance practices on AWS, including access control, data encryption, and regulatory compliance

Data Architect

Delivery Centric is seeking a highly skilled Data Architect to design cloud-read...
Location:
Australia, Sydney
Salary:
Not provided
Delivery Centric Technologies
Expiration Date:
Until further notice
Requirements:
  • Proven experience in data architecture, data modelling, and enterprise data platform design
  • Strong expertise in SQL, NoSQL, data warehousing, and major cloud platforms (Azure, AWS, GCP)
  • Hands-on experience with ETL/ELT tooling and big data technologies (Spark, Hadoop)
  • Experience building data pipelines and event-driven workflows
  • Certifications ideal for this role: Azure Data Engineer, AWS Developer, Databricks Data Engineer
  • Exposure to AI/ML environments and advanced analytical use cases
  • Strong analytical and problem-solving capabilities with excellent stakeholder engagement skills
Job Responsibility:
  • Design scalable, secure, and high-performing data architectures aligned to business objectives
  • Develop conceptual, logical, and physical data models for enterprise data platforms
  • Drive data governance practices, ensuring compliance, quality, and security across all data assets
  • Lead integration initiatives and build reliable data pipelines across cloud and on-prem ecosystems
  • Optimize existing data platforms, improving performance, scalability, and operational efficiency
  • Collaborate with business stakeholders to translate requirements into technical solutions
  • Maintain architecture documentation, standards, data dictionaries, and solution diagrams
  • Support big data, analytics, and AI/ML initiatives through scalable data foundations

Data Architect - Enterprise Data & AI Solutions

We are looking for a visionary Data Architect who can translate enterprise data ...
Location:
India, Chennai; Madurai; Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong background in RDBMS design, data modeling, and schema optimization
  • Advanced SQL skills, including performance tuning and analytics functions
  • Proven expertise in data warehouses, data lakes, and lakehouse architectures
  • Proficiency in ETL/ELT tools (Informatica, Talend, dbt, Glue)
  • Hands-on with cloud platforms (AWS Redshift, Azure Synapse, GCP BigQuery, Snowflake)
  • Familiarity with GenAI frameworks (OpenAI, Vertex AI, Bedrock, Azure OpenAI)
  • Experience with real-time streaming (Kafka, Kinesis, Flink) and big data ecosystems (Hadoop, Spark)
  • Strong communication skills with the ability to present data insights to executives
  • 8+ years in data architecture, enterprise data strategy, or modernization programs
  • Hands-on with AI-driven analytics and GenAI adoption
Job Responsibility:
  • Design scalable data models, warehouses, lakes, and lakehouse solutions
  • Build data pipelines to support advanced analytics, reporting, and predictive insights
  • Integrate GenAI frameworks to enhance data generation, automation, and summarization
  • Define and enforce enterprise-wide data governance, standards, and security practices
  • Drive data modernization initiatives, including cloud migrations
  • Collaborate with stakeholders, engineers, and AI/ML teams to align solutions with business goals
  • Enable real-time and batch insights through dashboards, AI-driven recommendations, and predictive reporting
  • Mentor teams on best practices in data and AI adoption
What we offer:
  • Opportunity to design next-generation enterprise data & AI architectures
  • Exposure to cutting-edge GenAI platforms to accelerate innovation
  • Collaborate with experts across cloud, data engineering, and AI practices
  • Access to learning, certifications, and leadership mentoring
  • Competitive pay with opportunities for career growth and leadership visibility

Data & AI Architect

We are seeking a highly skilled and experienced Data & AI Architect to join our ...
Location:
United States
Salary:
130000.00 - 170000.00 USD / Year
MojoTech
Expiration Date:
Until further notice
Requirements:
  • 7+ years of hands-on experience in data engineering, with a strong focus on cloud-native solutions
  • Expert-level proficiency with AWS data services and best practices. AWS and Databricks certifications are a significant plus
  • Exceptional Python programming skills for data engineering and automation
  • Experience with Databricks or Apache Spark for big data processing
  • Experience with at least one major cloud ML platform, e.g., AWS SageMaker
  • Advanced SQL capabilities for data manipulation, querying, and optimization
  • Proven experience with workflow orchestration tools (e.g., Airflow, AWS Step Functions)
  • Solid understanding of data modeling principles and pipeline design patterns
  • Familiarity with modern DevOps practices and CI/CD for data solutions
  • Excellent problem-solving skills and the ability to translate complex technical concepts into clear explanations for non-technical audiences
Job Responsibility:
  • Design, develop, and maintain scalable data pipelines using AWS services (S3, Glue, Lambda, Kinesis, Redshift, Step Functions, etc.) and Python
  • Design, develop, and implement end-to-end models and AI agents, from data preprocessing to deployment and monitoring
  • Implement and optimize large-scale data processing solutions using modern data platforms, for example Databricks, Snowflake, AWS for various client use cases
  • Collaborate closely with product managers, engineers, and business stakeholders to understand data consumption requirements and deliver high-quality data products
  • Ensure data quality, reliability, and security throughout the data lifecycle
  • Apply Generative AI concepts to data platforms for integration with LLMs, RAG architectures, and their practical application in enterprise solutions
  • Implement CI/CD pipelines and integrate data solutions into broader DevOps practices
  • Provide technical leadership and mentorship within project teams
What we offer:
  • Medical, Dental, FSA
  • 401k with 4% match
  • Trust-based time off
  • Catered lunches when in office
  • 5 hours a week of self-directed, non-client work
  • Dog Friendly Offices
  • Paid conference attendance/yearly education stipend
  • 6 weeks parental leave

Senior Data Engineering Architect

Location:
Poland
Salary:
Not provided
Lingaro
Expiration Date:
Until further notice
Requirements:
  • Proven work experience as a Data Engineering Architect or in a similar role, with strong experience in the Data & Analytics area
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark)
  • Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy
  • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions
  • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges
  • Proficiency in Python, PySpark, SQL
  • Familiarity with cloud platforms and services, such as AWS, GCP, or Azure, and experience in designing and implementing data solutions in a cloud environment
  • Knowledge of data governance principles and best practices, including data privacy and security regulations
Job Responsibility:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables. Support the team throughout the implementation process, answering questions and addressing issues as they arise
  • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications
What we offer:
  • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites
  • Workation. Enjoy working from inspiring locations in line with our workation policy
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly
  • Grow as we grow as a company. 76% of our managers are internal promotions

Cloud Technical Architect / Data DevOps Engineer

The role involves designing, implementing, and optimizing scalable Big Data and ...
Location:
United Kingdom, Bristol
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice
Requirements:
  • An organised and methodical approach
  • Excellent time keeping and task prioritisation skills
  • An ability to provide clear and concise updates
  • An ability to convey technical concepts to all levels of audience
  • Data engineering skills – ETL/ELT
  • Technical implementation skills – application of industry best practices & design patterns
  • Technical advisory skills – experience in researching technological products / services with the intent to provide advice on system improvements
  • Experience of working in hybrid environments with both classical and DevOps approaches
  • Excellent written & spoken English skills
  • Excellent knowledge of Linux operating system administration and implementation
Job Responsibility:
  • Detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling, self-healing systems
  • Participating in the full lifecycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
  • Providing technical thought-leadership and advisory on technologies and processes at the core of the data domain, as well as data domain adjacent technologies
  • Engaging and collaborating with both internal and external teams, being a confident participant as well as a leader
  • Assisting with solution improvement activities driven either by the project or service
  • Support the design and development of new capabilities, preparing solution options, investigating technology, designing and running proof of concepts, providing assessments, advice and solution options, providing high level and low level design documentation
  • Cloud Engineering capability to leverage Public Cloud platform using automated build processes deployed using Infrastructure as Code
  • Provide technical challenge and assurance throughout development and delivery of work
  • Develop re-useable common solutions and patterns to reduce development lead times, improve commonality and lowering Total Cost of Ownership
  • Work independently and/or within a team using a DevOps way of working
What we offer:
  • Extensive social benefits
  • Flexible working hours
  • Competitive salary
  • Shared values
  • Equal opportunities
  • Work-life balance
  • Evolving career opportunities
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing

Staff Data Engineer

Checkr is hiring an experienced Staff Data Engineer to join their Data Platform ...
Location:
United States, San Francisco; Denver
Salary:
166000.00 - 230000.00 USD / Year
Checkr
Expiration Date:
Until further notice
Requirements:
  • 10+ years of designing, implementing, and delivering highly scalable and performant data platforms
  • Experience building large-scale (100s of terabytes and petabytes) data processing pipelines – batch and stream
  • Experience with ETL/ELT, stream and batch processing of data at scale
  • Expert-level proficiency in PySpark, Python, and SQL
  • Expertise in data modeling, relational databases, and NoSQL (such as MongoDB) data stores
  • Experience with big data technologies such as Kafka, Spark, Iceberg, Datalake, and the AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
  • An understanding of Graph and Vector data stores (preferred)
  • Knowledge of security best practices and data privacy concerns
  • Strong problem-solving skills and attention to detail
  • Experience/knowledge of data processing platforms such as Databricks or Snowflake
Job Responsibility:
  • Architect, design, lead and build end-to-end performant, reliable, scalable data platform
  • Monitor, investigate, triage, and resolve production issues as they arise for services owned by the team
  • Mentor, guide, and work with junior engineers to deliver complex and next-generation features
  • Partner with engineering, product, design, and other stakeholders in designing and architecting new features
  • Create and maintain data pipelines and foundational datasets to support product/business needs
  • Experiment with rapid MVPs and encourage validation of customer needs
  • Design and build database architectures with massive and complex data
  • Develop audits for data quality at scale
  • Create scalable dashboards and reports to support business objectives and enable data-driven decision-making
  • Troubleshoot and resolve complex issues in production environments
What we offer:
  • A fast-paced and collaborative environment
  • Learning and development allowance
  • Competitive cash and equity compensation and opportunity for advancement
  • 100% medical, dental, and vision coverage
  • Up to $25K reimbursement for fertility, adoption, and parental planning services
  • Flexible PTO policy
  • Monthly wellness stipend
  • Home office stipend
  • In-office perks such as lunch four times a week, a commuter stipend, and an abundance of snacks and beverages