In this position, you will develop advanced system architectures and complex simulation models for Sandisk's next-generation AI Storage Solutions-based products. You will initiate and analyze changes to the product architecture. Typical activities include designing, programming, debugging, and modifying simulation models to evaluate these changes and to assess the product's performance, power, and endurance. You will work closely with outstanding engineering colleagues, tackle complex challenges, innovate, and develop products that will change the data-centric architecture paradigm.
Job Responsibilities:
Build SystemC performance models for AI Storage Solutions-based products, covering the end-to-end path from GPU/TPU/NPU/xPU through the host interface, memory hierarchy, base-die controller, and AI Storage Solutions across various packaging technologies
Improve AI/ML ASIC architecture performance through hardware and software co-optimization and post-silicon performance analysis, and influence the strategic product roadmap
Analyze and characterize workloads on our ASICs and on competitive datacenter and AI solutions to identify opportunities for performance improvement in our products
Collaborate with the Architecture team to resolve performance issues and optimize the performance and TCO of AI Storage Solutions-based datacenter technologies
Model one or more components of AI/ML accelerator ASICs, such as AI Storage Solutions, PCIe/UCIe/CXL, NoC, DMA, firmware interactions, NAND, xPU, and fabrics
Model and optimize performance for multi-trillion-parameter LLM training and inference, including dense and Mixture of Experts (MoE) models across multiple modalities (text, vision, speech); see the cost-model sketch after this list
Model and optimize novel parallelization strategies across the tensor, pipeline, context, expert, and data parallel dimensions
Architect memory-efficient training systems using techniques such as structured pruning, quantization (MX formats), continuous batching/chunked prefill, and speculative decoding
Incorporate and extend state-of-the-art models such as GPT-4, reasoning models such as DeepSeek-R1, and multi-modal architectures
Collaborate with internal and external stakeholders and ML researchers to disseminate results and iterate at a rapid pace
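To make the modeling scope above concrete, here is a back-of-envelope, roofline-style sketch of the kind of estimate such performance models formalize: per-token decode latency for a tensor-parallel transformer. This is an illustrative sketch only; the model shape, parallelism degree, and hardware numbers (d_model, tp, flops_eff, hbm_bw, link_bw) are assumed placeholders, not Sandisk or vendor figures.

```cpp
// Back-of-envelope, roofline-style estimate of per-token decode latency for a
// tensor-parallel transformer. Every number below is an assumed placeholder.
#include <algorithm>
#include <cstdio>

int main() {
  // Assumed workload: one decode token, batch 1, fp16 weights.
  const double d_model = 8192;             // hidden size (hypothetical)
  const double layers  = 80;               // transformer layers (hypothetical)
  const double bytes_w = 2.0;              // bytes per weight (fp16)
  const double tp      = 8.0;              // tensor-parallel degree

  // Assumed per-device hardware parameters.
  const double flops_eff = 400e12 * 0.35;  // peak FLOP/s x assumed efficiency
  const double hbm_bw    = 3.0e12;         // HBM bandwidth, bytes/s
  const double link_bw   = 450e9;          // all-reduce link bandwidth, bytes/s

  // ~12*d_model^2 weights per layer (QKV/O + MLP); 2 FLOPs per weight at decode.
  const double w_per_layer = 12.0 * d_model * d_model;
  const double t_compute   = 2.0 * w_per_layer / tp / flops_eff;
  const double t_weights   = w_per_layer * bytes_w / tp / hbm_bw;

  // Two ring all-reduces per layer over the activation vector.
  const double act_bytes = d_model * bytes_w;
  const double t_comm    = 2.0 * (2.0 * (tp - 1.0) / tp) * act_bytes / link_bw;

  // Batch-1 decode overlaps compute with weight loads, then pays the comm cost.
  const double t_layer = std::max(t_compute, t_weights) + t_comm;
  std::printf("est. per-token decode: %.2f ms (%s-bound per layer)\n",
              layers * t_layer * 1e3,
              t_weights > t_compute ? "memory" : "compute");
  return 0;
}
```

With these placeholder values the estimate lands around 5-6 ms per token and comes out memory-bound, which is why the weight-load term, and hence quantization and parallelization strategy, dominates batch-1 decode.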
Requirements:
Bachelor's, Master's, or PhD in Computer or Electrical Engineering with 5+ years of relevant experience in performance modeling, simulation, and analysis using SystemC
5+ years of experience with SystemC modeling
Good understanding of computer/graphics architecture, ML, and LLMs
Experience with simulation using SystemC and TLM, behavioral modeling, and performance analysis (a minimal TLM sketch follows this list)
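As an illustration of the TLM-based behavioral modeling referenced above, here is a minimal SystemC TLM-2.0 sketch: a host initiator issues timed reads to a NAND-like target through blocking transport with annotated delays. The module names and the 50 µs page-read latency are assumptions for illustration, not a model of any Sandisk device.

```cpp
// Minimal loosely-timed TLM-2.0 sketch; requires a SystemC installation.
#include <iostream>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

using namespace sc_core;

// Toy NAND-like target: every read costs a fixed, assumed page-read latency.
struct NandTarget : sc_module {
  tlm_utils::simple_target_socket<NandTarget> socket;
  SC_CTOR(NandTarget) : socket("socket") {
    socket.register_b_transport(this, &NandTarget::b_transport);
  }
  void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
    delay += sc_time(50, SC_US);  // assumed page-read latency
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// Toy host initiator: issues a handful of timed 4 KiB reads in sequence.
struct HostInitiator : sc_module {
  tlm_utils::simple_initiator_socket<HostInitiator> socket;
  SC_CTOR(HostInitiator) : socket("socket") { SC_THREAD(run); }
  void run() {
    tlm::tlm_generic_payload trans;
    unsigned char buf[4096] = {};
    for (int i = 0; i < 8; ++i) {
      sc_time delay = SC_ZERO_TIME;
      trans.set_command(tlm::TLM_READ_COMMAND);
      trans.set_address(i * 4096);
      trans.set_data_ptr(buf);
      trans.set_data_length(sizeof(buf));
      trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
      socket->b_transport(trans, delay);  // target annotates its latency
      wait(delay);                        // advance simulated time
    }
    std::cout << "8 reads done at " << sc_time_stamp() << std::endl;
  }
};

int sc_main(int, char*[]) {
  HostInitiator host("host");
  NandTarget nand("nand");
  host.socket.bind(nand.socket);  // initiator -> target binding
  sc_start();
  return 0;
}
```

A real performance model would replace the fixed delay with queueing, channel contention, and die-level parallelism, but the transport-and-annotate pattern shown here is the core of loosely-timed SystemC modeling.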
Nice to have:
Previous experience with storage systems, protocols, and NAND flash
Deep experience optimizing large-scale ML systems and GPU architectures
Strong track record of technical leadership in GPU performance and workload analysis
Expert knowledge of transformer architectures, attention mechanisms, and model parallelism techniques
Experience with GPU/TPU and system-level microarchitecture
Proficiency in principles and methods of microarchitecture, software, and hardware relevant to performance engineering
Ability to develop a wide, system-level view of complex AI/ML accelerator ASIC systems
Proficiency with SoC and system performance analysis fundamentals, tools, and techniques, including hardware performance monitors and perf profiling
Familiarity with IO-subsystem microarchitecture performance modeling, and background in NVMe/PCIe/UCIe/CXL/NVLink microarchitecture and protocols
Multi-disciplinary experience, including familiarity with firmware and ASIC design