As a Cloud Solution Architect (CSA), you will bring deep expertise in Azure Data, Analytics, and AI services to lead transformative engagements for our customers. In this strategic role, you will drive architectural innovation, implementation excellence, and operational health across Microsoft Azure solutions.
Job Responsibilities:
Developing reusable intellectual property (IP) and content assets that scale across field teams and customers and showcase value-based deliverables
Creating world-class collateral to empower internal field teams in delivering impactful customer outcomes
Supporting the Customer Success Unit (CSU) and customers in maximizing their Microsoft investments through hands-on architecture and solution delivery
Customer-Centric Approach: Identify customers' goals for migrating to Azure, and ensure the program provides Data & Analytics solutions that deliver business value
Drive positive Customer Satisfaction and become a trusted advisor
Solicit customer feedback to understand their experiences with the services delivered
Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
Develop opportunities to enhance Customer Success and help customers extract value from their Microsoft investments
Leverage subject matter expertise to identify resolutions for customer blockers
Identify opportunities to develop repeatable IP and integrate it with the Cloud Accelerate Factory
Apply technical knowledge to design solutions aligned with business and IT needs
Create modernized Data & Analytics platform solutions with infused AI, lead POCs and MVPs, and ensure long-term technical viability
Share insights and best practices, collaborate with the Engineering team to address key blockers, and drive product improvements
Requirements:
Bachelor's degree in Computer Science, Information Technology, Engineering, Business, or a related field AND experience in cloud/infrastructure technologies, information technology (IT) consulting/support, systems administration, network operations, software development/support, technology solutions, practice development, architecture, and/or consulting
Business Value: The ability to convey the business needs and value of proposed solutions, plans, and risks to stakeholders and decision makers. This includes the ability to persuade and inform based on facts and alignment with goals and strategy
Trusted Advisership: The ability to build trusted advisor status and deep relationships across stakeholders (e.g., technical decision makers, business decision makers) through an understanding of customer needs and technologies
Technical: Breadth of technical experience and knowledge in foundational security, foundational AI, and architecture design, with depth/subject matter expertise in one or more of the following: enterprise-scale technical experience and depth in one or more areas of the Azure Analytics ecosystem (Microsoft Fabric, Power BI, Power BI Embedded, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Azure Data Lake) (required)
Knowledge of Business Intelligence and the design and build of Advanced Analytics and Big Data solutions; hands-on experience with data warehousing, Lakehouse architecture, Real-Time Intelligence, big data, and analytics workloads (Azure or equivalent); semantic model development, with knowledge of key differentiators to determine the best fit for use cases and applications
Experience creating Data & Analytics Proof of Concepts (PoC), Minimum Viable Products (MVPs) for customers that lead to production deployments
Experience with Microsoft Fabric, DAX, T-SQL, PowerShell, C# (in the context of PBI/AS programming), Python (preferable)
Competitive Landscape: Knowledge of key Data & Analytics platforms such as AWS, GCP, Snowflake, etc.
Experience in shipping PySpark-enabled data pipelines
Understanding of batch and streaming ETL/ELT processing
Nice to have:
Experience with or understanding of data lakehouse and data mesh architectural principles