Build the systems that expand human capability. At Blackrock Neurotech, we've spent decades making the impossible possible – helping people move, speak, and reconnect with the world when they otherwise could not. We've seen that restoring function restores more than ability. It restores independence, identity, and agency. Today, we are building the next generation of human capability: brain-computer interfaces that are designed to be safe, scalable, and trusted in the real world. Our work is not only about reconnecting people to what was lost, but about expanding what is possible – creating a seamless interface between human intent and technology. This is foundational work in a category-defining field. You will help build the infrastructure for a future where neural interfaces are invisible, reliable, and deeply human-centered.
Job Responsibilities:
Own substantial pieces of our core modeling work end-to-end, from preparing and curating large neural datasets, to designing and running training experiments, to analyzing results and turning findings into the next round of model improvements
Write and review model and pipeline code, launch and monitor training runs, debug issues that surface at scale, and analyze results to understand not just whether a model works but why
Shape initiatives spanning dataset curation, training infrastructure, model architecture, and evaluation methodology, with room to lead specific experimental threads as you build context
Requirements:
5+ years of hands-on experience building and training deep learning models, or a PhD in Machine Learning, Computer Science, Computational Neuroscience, or a related field with applied industry experience
Strong experience with PyTorch (or similar modern ML frameworks) and fluency in Python
Solid software engineering practices including version control, testing, code review, and reproducibility
Experience designing model architectures, with an understanding of training dynamics, optimization, and compute tradeoffs at scale
Ability to design clean experiments, analyze results rigorously, and make data-driven decisions
Comfortable working in ambiguous, research-oriented environments with imperfect or evolving datasets
Strong written and verbal communication skills with both technical and non-technical stakeholders
Demonstrated ownership, follow-through, and intellectual honesty in problem-solving
Nice to have:
Experience with neural signal processing, brain-computer interfaces, electrophysiology, or other biosignal domains
Relevant adjacent experience in speech, audio, time-series modeling, or multimodal learning
Experience with self-supervised learning, representation learning, transfer learning, or multi-task learning
Hands-on experience training models at scale in distributed, multi-GPU, or multi-node environments
Familiarity with mixed precision training, gradient checkpointing, and managing long-running training jobs
Knowledge of model efficiency techniques such as distillation, quantization, pruning, or edge deployment
Experience in regulated or safety-critical environments such as medical devices, healthcare AI, or other deep-tech industries
Experience in fast-moving or early-stage environments balancing research ambition with execution discipline
Open-source contributions, published research, or other evidence of strong technical work shared publicly
Experience partnering with neuroscientists, clinicians, or other domain experts and translating across disciplines