Springboard is a six-week internal accelerator run by the AI Studio, designed to help leaders apply AI to one specific, high-impact business challenge within their function. Participants walk away with increased AI fluency, concrete artifacts, and a credible path forward tied to real business impact.

Each cohort selects 3 to 5 senior business leaders to participate. Every Springboarder is assigned a dedicated AI Strategist and Technical Partner, forming a focused pod for the duration of the program. While Springboard currently works with senior leaders, it is designed to scale across the organization over time.

Week 1: AI Opportunity Mapping. Deeply understand the business challenge, map end-to-end workflows, and identify where AI could create meaningful impact. Develop 3 to 5 validated AI use cases tied to real operational or strategic needs.

Weeks 2 to 3: Rapid Prototyping and Iteration. Build and iterate 1 to 2 AI-driven solution concepts in a secure Studio Sandbox, using sprint methodology and rapid experimentation to move from 0 to 1 quickly while building executive-level AI fluency.

Weeks 4 to 6: Validation and Proposal Readiness. Refine solutions, validate feasibility and business value, and prepare AIRB proposals and handoff materials. Assess expected impact and complete an ecosystem scan comparing build vs. third-party options in partnership with IT.

Week 6: Demo Day. Participants present their use case, prototype, and projected impact to enterprise leaders. Presentations inform next steps, including pilots, further evaluation, or resourcing recommendations.
Job Responsibilities:
Architect and build one to two full-stack AI prototypes per cohort inside the sandbox (AWS), selecting and configuring the appropriate AI development stack for the use case.
Translate use case requirements and executive input into a scoped technical architecture, making deliberate build-vs-configure-vs-integrate decisions that are achievable within the sprint.
Design and implement AI pipelines using available LLM APIs (e.g., Claude, OpenAI, Bedrock), including prompt engineering, tool use and function calling, RAG architectures, and agentic workflow patterns as appropriate to the use case.
Build functional front-end interfaces using AI-native rapid development platforms (e.g., Lovable, Bolt, v0) and wire back-end logic and data flows to deliver a complete, demo-ready application.
Implement workflow automation and system integration using orchestration platforms (e.g., n8n, Make, Zapier) to connect enterprise data sources, APIs, and downstream systems.
Develop prototypes that are stable, clearly scoped, and capable of running live in front of executive audiences without failure.
Iterate rapidly based on AI Strategist and executive feedback throughout Weeks 2 through 6, treating their input as product requirements.
Assess and articulate the production gap: what was built in the sandbox versus what a production deployment requires across integration, data governance, security, and scale.
Contribute technical input to the build-vs-buy recommendation, Product Requirements Document, and AIRB submission materials.
Document architecture decisions, tool selections, API configurations, and implementation details in the Handoff Package so engineering teams can continue the work after Demo Day.
Attend the weekly AI Strategist sync meeting to stay aligned with the pod, program team, and any shifts in use case direction.
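To give a flavor of the RAG architectures named in the responsibilities above, here is a minimal sketch of the retrieval step: rank documents by cosine similarity between a query embedding and precomputed document embeddings. This is a hedged illustration, not the Studio's actual stack; the documents and two-dimensional embeddings are hypothetical toys standing in for a real embedding model and vector store.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs, k=2):
    # docs: list of (text, embedding) pairs; return the k most similar texts.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus with made-up 2-D embeddings (a real pipeline would use an
# embedding model and a vector database instead).
docs = [
    ("refund policy", [0.9, 0.1]),
    ("shipping times", [0.2, 0.8]),
    ("returns process", [0.8, 0.3]),
]
print(retrieve([1.0, 0.0], docs, k=2))  # ['refund policy', 'returns process']
```

In a prototype, the retrieved texts would be concatenated into the LLM prompt as grounding context; the ranking logic itself stays this simple.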
Requirements:
Full-stack AI development proficiency: own the complete solution across data, AI pipeline, API, and UI layers without requiring additional engineering support.
LLM integration and orchestration: hands-on experience building production-grade AI pipelines including tool use and function calling, retrieval-augmented generation (RAG), structured output handling, streaming, and multi-step agentic workflows.
AI-native application development: proficiency with rapid development platforms (e.g., Lovable, Bolt, v0) for fast front-end prototyping and end-to-end application assembly.
Workflow automation and integration: experience with event-driven orchestration and low-code/no-code integration platforms (e.g., n8n, Make, Zapier) for connecting enterprise systems, transforming data, and building automated pipelines without custom middleware.
Cloud infrastructure: proficient deploying and operating services in AWS sandbox environments including compute, managed AI services, object storage, and serverless functions (e.g., Lambda, API Gateway, Bedrock).
API and systems integration: strong REST and webhook fluency, OAuth and API key authentication patterns, and JSON/data transformation for connecting to enterprise systems.
Ability to evaluate and adopt new AI tooling rapidly: this stack evolves, and you are expected to keep pace.
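As a concrete example of the structured output handling listed in the requirements, the core task is often just defensive parsing: extracting a JSON object from model text that may arrive wrapped in markdown fences. This is a hedged sketch; the function name and fallback behavior are illustrative, not any specific SDK's API.

```python
import json

def parse_structured_output(raw):
    """Extract a JSON object from a raw LLM response.

    Handles the common case where the model wraps its JSON in
    markdown code fences; returns None if parsing fails.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop surrounding backticks and an optional "json" language tag.
        text = text.strip("`").strip()
        if text.lower().startswith("json"):
            text = text[4:]
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None

print(parse_structured_output('```json\n{"priority": "high"}\n```'))
```

In practice this sits between the LLM call and downstream logic, so a malformed response degrades gracefully instead of crashing a live executive demo.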