
Cloud Engineer (Azure Databricks)

Randstad (https://www.randstad.com)


Location:
Japan, Tokyo


Contract Type:
Not provided


Salary:

6,500,000 - 9,500,000 JPY / Year

Job Description:

Established global insurance firm from the US! Hybrid! WFH up to 4 days a week. International culture!

Requirements:

  • Bachelor’s degree in Computer Science, Engineering, or a related technical discipline
  • 3+ years of hands-on experience with Microsoft Azure in a cloud engineering or platform role
  • Practical experience supporting Azure Databricks in production or enterprise environments
  • Strong understanding of cloud networking, identity, and security fundamentals
  • Experience with Infrastructure as Code and automation (Terraform, ARM, Bicep, PowerShell, or Python); a brief illustrative sketch follows this list
  • Familiarity with monitoring and logging tools such as Azure Monitor, Log Analytics, or Application Insights
  • Strong troubleshooting skills and ability to operate mission-critical platforms
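
For the Infrastructure as Code and automation requirement above, here is a minimal, hedged sketch of Azure automation from Python (one of the listed options). It is illustrative only and not part of the posting; the subscription ID, resource group name, and region are placeholders, and it assumes the azure-identity and azure-mgmt-resource packages plus an already authenticated Azure session.

```python
# Minimal sketch, assuming azure-identity and azure-mgmt-resource are installed
# and the caller is already authenticated (e.g., via `az login` or a managed identity).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"   # placeholder
credential = DefaultAzureCredential()        # resolves CLI / managed identity / env credentials

client = ResourceManagementClient(credential, subscription_id)

# Idempotently create or update a resource group that could later host a
# Databricks workspace provisioned via Terraform, Bicep, or the SDK.
rg = client.resource_groups.create_or_update(
    "rg-databricks-demo",                    # placeholder name
    {"location": "japaneast"},
)
print(rg.name, rg.location)
```

In practice the same result is usually expressed declaratively in Terraform or Bicep and run from a CI/CD pipeline; the Python form is shown here only because it is self-contained.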

Nice to have:

  • Experience with additional Azure analytics services such as Azure Data Factory, Synapse Analytics, Event Hub, or Azure ML
  • Understanding of modern data architectures (data lake, lakehouse, data warehouse)
  • Experience with DevOps practices for platform engineering and analytics workloads ...
  • Azure certifications (e.g., Azure Administrator, Azure Data Engineer, Azure Solutions Architect)
  • Experience working in regulated or enterprise environments with strong governance requirements
  • Working proficiency in both English and Japanese (business-level Japanese is a plus)

What we offer:
  • Health insurance (健康保険)
  • Employees' pension insurance (厚生年金保険)
  • Employment insurance (雇用保険)
  • Saturdays off (土曜日)
  • Sundays off (日曜日)
  • Public holidays off (祝日)

Additional Information:

Job Posted:
April 12, 2026

Expiration:
February 27, 2027

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Cloud Engineer (Azure Databricks)

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Location: Poland
Salary: Not provided
Lingaro (lingarogroup.com)
Expiration Date: Until further notice
Requirements:
  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role with a strong focus on data solutions built in Azure and Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience with Azure cloud-based infrastructure, Databricks, and at least one SQL implementation (e.g., Oracle, T-SQL, MySQL)
  • Proficiency in programming languages such as SQL, Python, and PySpark is essential (R or Scala nice to have)
  • Very good communication skills, including the ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues
Job Responsibility:
  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices
What we offer:
  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Azure Data Engineer

At LeverX, we have had the privilege of delivering over 1,500 projects for vario...
Location: Uzbekistan, Georgia
Salary: Not provided
LeverX (leverx.com)
Expiration Date: Until further notice
Requirements:
  • 5+ years of experience as a Data Engineer with strong expertise in Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Synapse, Microsoft Fabric, and Azure Cosmos DB)
  • Advanced SQL skills, including complex query development, optimization, and troubleshooting
  • Strong knowledge of indexing, partitioning, and query execution plans to ensure scalability and performance
  • Proven expertise in database modeling, schema design, and normalization/denormalization strategies
  • Ability to design and optimize data architectures to support both transactional and analytical workloads
  • Proficiency in at least one programming language such as Python, C#, or Scala
  • Strong background in cloud-based data storage and processing (e.g., Azure Data Lake, Databricks, or equivalent) and data warehouse platforms (e.g., Snowflake)
  • English B2+
Job Responsibility:
  • Design, develop, and maintain efficient and scalable data architectures and workflows
  • Build and optimize SQL-based solutions for data transformation, extraction, and loading (ETL) processes
  • Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver effective solutions
  • Manage and optimize data storage platforms, including databases, data lakes, and data warehouses
  • Troubleshoot and resolve data-related issues, ensuring accuracy, integrity, and performance across all systems
What we offer:
  • Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc.
  • Projects for every taste: Startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay with the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location: Romania, Bucharest
Salary: Not provided
Inetum (https://www.inetum.com)
Expiration Date: Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking (a brief illustrative sketch follows this list)
  • Implement the Lakehouse architecture efficiently on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
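
To ground the Delta Lake operations named in this posting (Merge, Optimize, Vacuum), here is a minimal, hedged PySpark sketch of a Bronze-to-Silver upsert. It is illustrative only and not part of the posting; the table names (bronze_orders, silver.orders) and the order_id key are placeholders, and it assumes a Databricks-style environment with Delta Lake available.

```python
# Minimal sketch: upsert the latest Bronze batch into a Silver Delta table, then compact it.
# Table and column names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # on Databricks, `spark` is already provided

# Read the raw (Bronze) increment and deduplicate on the business key.
bronze = (
    spark.read.table("bronze_orders")
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["order_id"])
)

# ACID upsert into the curated (Silver) table: update matches, insert new rows.
silver = DeltaTable.forName(spark, "silver.orders")
(
    silver.alias("s")
    .merge(bronze.alias("b"), "s.order_id = b.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files and prune old snapshots to keep reads fast and storage bounded.
spark.sql("OPTIMIZE silver.orders")
spark.sql("VACUUM silver.orders RETAIN 168 HOURS")
```

MERGE gives an atomic upsert from the raw layer into the curated layer, while OPTIMIZE and VACUUM control file sizes and snapshot history; Time Travel remains available for audits within the retention window.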
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.
  • Full-time

Azure Data Engineer

Experience: 3-6+ Years Location: Noida/Gurugram/Remote Skills: PYTHON, PYSPARK...
Location: India, Noida; Gurugram
Salary: Not provided
NexGen Tech Solutions (nexgentechsolutions.com)
Expiration Date: Until further notice
Requirements:
  • 3-6+ years of experience
  • Python
  • PySpark
  • SQL
  • Azure Data Factory
  • Databricks
  • Data Lake
  • Azure Functions
  • Data pipelines
Job Responsibility:
  • Design and engineer the cloud/big data solutions, develop a modern data analytics lake
  • Develop & maintain data pipelines for batch & stream processing using modern cloud or open source ETL/ELT tools
  • Liaise with business team and technical leads, gather requirements, identify data sources, identify data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing and support UAT
  • Implement continuous integration, continuous deployment, and DevOps practices
  • Create, document, and manage data guidelines, governance, and lineage metrics
  • Technically lead, design and develop distributed, high-throughput, low-latency, highly available data processing and data systems
  • Build monitoring tools for server-side components
  • Work cohesively in an India-wide distributed team
  • Identify, design, and implement internal process improvements and tools to automate data processing and ensure data integrity while meeting data security standards
  • Build tools for better discovery and consumption of data for various consumption models in the organization – DataMarts, Warehouses, APIs, Ad Hoc Data explorations
  • Full-time

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location: Romania, Bucharest
Salary: Not provided
Inetum (https://www.inetum.com)
Expiration Date: Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • Minimum of 5 years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT); a brief illustrative sketch follows this list
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
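
As a point of reference for the streaming responsibility above, the following is a minimal, hedged sketch of incremental Bronze ingestion with Spark Structured Streaming and Databricks Auto Loader. It is illustrative only and not part of the posting; the paths and table name are placeholders, and the cloudFiles source assumes a Databricks runtime (Delta Live Tables would express the same step declaratively).

```python
# Minimal sketch: stream newly arrived JSON files into a Bronze Delta table.
# Paths and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()    # provided automatically on Databricks

raw_stream = (
    spark.readStream.format("cloudFiles")                      # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/chk/events_schema")
    .load("/mnt/raw/events")
)

query = (
    raw_stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/chk/events")            # exactly-once bookkeeping
    .trigger(availableNow=True)                                 # drain new files, then stop
    .toTable("bronze.events")
)
query.awaitTermination()   # block until the current backlog has been processed
```

The checkpoint location gives the pipeline exactly-once bookkeeping, and trigger(availableNow=True) processes whatever has arrived and then stops, which suits scheduled, batch-style runs of a streaming job.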
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings
  • Full-time

Azure Data Architect

We are offering an exciting opportunity for an Azure Data Architect within the O...
Location: United States, Houston
Salary: Not provided
Robert Half (https://www.roberthalf.com)
Expiration Date: Until further notice
Requirements:
  • Comprehensive understanding of Azure products and Data platforms such as Databricks
  • Strong collaboration skills with Enterprise Architects, Cloud Engineering, Business Architects, and Product teams
  • Proficient in Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Data Lake
  • Experienced in scripting languages like Python, Bash, or PowerShell
  • Knowledge of cloud security principles and best practices
  • Familiarity with Azure Resource Manager, Virtual Networks, Azure Blob Storage, Azure Automation, Azure Active Directory, and Azure Site Recovery
Job Responsibility:
  • Collaborate with multiple teams to design and implement solutions
  • Ensure solutions are optimized for performance, cost, and compliance
  • Operate hands-on with Azure products and Data platforms
  • Develop and deploy Cloud Native Applications using Azure PaaS Capabilities
  • Manage cloud deployment, technical and security architecture, database architecture, virtualization, software design, networking, DevOps, and DevSecOps
  • Employ Azure data services
  • Utilize scripting languages to automate routine tasks
  • Implement IAM, authentication, and authorization for applications
  • Utilize knowledge of cloud security principles and best practices
  • Handle Azure Resource Manager, Virtual Networks, Azure Blob Storage, Azure Automation, Azure Active Directory, and Azure Site Recovery
What we offer:
  • Medical, vision, dental, life and disability insurance
  • Eligibility to enroll in company 401(k) plan
  • Full-time

Databricks Engineer

We are seeking a Databricks Engineer to design, build, and operate a Data & AI p...
Location: United States, Leesburg
Salary: Not provided
WINTrio (wintrio.com)
Expiration Date: Until further notice
Requirements:
  • Hands-on experience with Databricks, Delta Lake, and Apache Spark
  • Deep understanding of ELT pipeline development, orchestration, and monitoring in cloud-native environments
  • Experience implementing Medallion Architecture (Bronze/Silver/Gold) and working with data versioning and schema enforcement in enterprise-grade environments
  • Strong proficiency in SQL, Python, or Scala for data transformations and workflow logic
  • Proven experience integrating enterprise platforms (e.g., PeopleSoft, Salesforce, D2L) into centralized data platforms
  • Familiarity with data governance, lineage tracking, and metadata management tools
Job Responsibility:
  • Data & AI Platform Engineering (Databricks-Centric): Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles
  • Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers (a brief illustrative sketch follows this list)
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation
  • Apply schema evolution and data versioning to support agile data development
  • Platform Integration & Data Ingestion: Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data
  • Design standardized data ingestion processes with automated error handling, retries, and alerting
  • Data Quality, Monitoring, and Governance: Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement
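
For orientation only, here is a minimal, hedged PySpark sketch of the Bronze-to-Silver-to-Gold refinement described above. It is illustrative and not part of the posting; the table names (bronze.enrollments, silver.enrollments, gold.course_enrollment_counts) and columns are placeholders.

```python
# Minimal sketch of one Medallion refinement pass: Bronze (raw) -> Silver (clean) -> Gold (aggregate).
# All table and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # on Databricks, `spark` is already provided

# Silver: validate, conform types, and deduplicate the raw Bronze records.
silver_df = (
    spark.read.table("bronze.enrollments")
    .filter(F.col("student_id").isNotNull())
    .withColumn("enrolled_on", F.to_date("enrolled_on"))
    .dropDuplicates(["student_id", "course_id"])
)
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver.enrollments")

# Gold: analytics-ready aggregate for reporting and BI.
gold_df = (
    silver_df.groupBy("course_id")
    .agg(F.countDistinct("student_id").alias("enrolled_students"))
)
gold_df.write.format("delta").mode("overwrite").saveAsTable("gold.course_enrollment_counts")
```

Each layer is written as a Delta table, so downstream consumers get ACID guarantees and schema enforcement; in production the Silver and Gold steps would usually be incremental (MERGE or streaming) rather than full overwrites.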

Senior Sales Engineer

The Senior Sales Engineer role at Infinite Lambda is designed for a technical le...
Location:
Salary: Not provided
Infinite Lambda (infinitelambda.com)
Expiration Date: Until further notice
Requirements:
  • Bachelor’s degree in Engineering, Computer Science, or a related field is required
  • An advanced degree is preferred
  • A proven track record in sales engineering or technical sales, preferably within the data, cloud, or technology sectors, with significant leadership experience
  • Experience in data engineering, analytics engineering, and data science roles, and with related technologies such as AWS, Azure, Snowflake, Redshift, Databricks, dbt, and machine learning frameworks, is essential
  • Experience with stream processing and batch data warehouse processing, event-driven architecture, microservices, and DataOps and MLOps practices
  • Experience in international markets or expanding into new regions is a plus, particularly knowledge of South American markets and business practices, which is desirable for this role
  • Exceptional verbal and written communication skills, with the ability to translate complex technical concepts into business benefits for clients and stakeholders
  • Experience in leading sales engineering teams or functions, with a strategic mindset to influence company direction and market expansion
  • Strong ability to work effectively across different departments and levels, fostering collaboration with Product & Services, Customer Success, and Sales & Partnerships.
Job Responsibility:
  • Initiate and manage engagements with prospective clients from the early stages of the sales process, ensuring a strong start to the sales cycle
  • Understand and document client needs, articulating how Infinite Lambda’s products and services can address these needs effectively
  • Collaborate with the sales team to drive a collaborative approach, creating compelling proposals that align with client expectations
  • Conduct and facilitate technical deep-dive workshops with prospective clients to showcase capabilities and gather insights, using workshop outputs to strengthen proposals
  • Identify relevant Infinite Lambda products and services, working with the sales team to position them strategically for each opportunity
  • Assist both the sales and delivery teams in estimating project scope, effort, and resources, producing realistic and commercially viable account strategies. This includes defining initial team composition, tools to utilise, and a high-level roadmap
  • Work with departments such as Product & Services, Customer Success, and Sales & Partnerships to clearly define roles and responsibilities, ensuring seamless collaboration across functions
  • Collaborate with Product & Services to stay informed about existing offerings and influence product roadmaps, helping design new products and services based on client feedback and market trends
  • Enhance the positioning of Infinite Lambda’s products and services during the sales process and beyond, improving how value is communicated to clients
  • Liaise with senior leadership (CEO, CTO, CPO, COO, CSO) at Infinite Lambda to formulate a clear strategy for combining data and cloud expertise into a holistic value proposition, strengthening the company’s market positioning
What we offer:
  • Private health insurance
  • Work-from-home budget
  • Unlimited paid holiday
  • Wellness benefits
  • Dedicated learning and development time
  • Access to top-notch learning portals
  • Coaching opportunities