Saw this fascinating research from Stanford University using an AI foundation model to create a ‘digital twin’ of the mouse visual cortex. The model was trained on large datasets of neural activity recorded while mice watched movies.
The impressive part: the model accurately predicts neural responses to new, unseen visual inputs, effectively capturing system dynamics and generalizing beyond its training data. This could massively accelerate neuroscience research via simulation (like a ‘flight simulator’ for the brain).
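To make the "predicts responses to unseen inputs" idea concrete, here's a toy sketch of the evaluation setup: fit a model mapping stimulus features to neural responses on one set of frames, then measure how well it predicts responses to held-out frames. This is my own minimal illustration with synthetic data and a simple ridge regression, not the paper's actual architecture or dataset:

```python
import numpy as np

# Synthetic stand-in for the real setup: X = stimulus features per video
# frame, Y = recorded responses of a population of neurons to those frames.
rng = np.random.default_rng(0)
n_frames, n_features, n_neurons = 500, 64, 20
X = rng.normal(size=(n_frames, n_features))
W_true = rng.normal(size=(n_features, n_neurons))      # hidden "tuning" of each neuron
Y = X @ W_true + 0.1 * rng.normal(size=(n_frames, n_neurons))  # noisy responses

# Train on the first 400 frames; the last 100 act as "unseen" stimuli.
X_tr, Y_tr = X[:400], Y[:400]
X_te, Y_te = X[400:], Y[400:]

# Ridge regression closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_features), X_tr.T @ Y_tr)

# Score generalization: correlation between predicted and actual responses
# on the held-out frames, averaged across neurons.
Y_pred = X_te @ W
corrs = [np.corrcoef(Y_pred[:, i], Y_te[:, i])[0, 1] for i in range(n_neurons)]
print(f"mean held-out correlation: {np.mean(corrs):.3f}")
```

The real model is far richer (it's a deep network trained across many animals), but the validation logic is the same: a digital twin is only useful if this held-out prediction score stays high on stimuli it never saw.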
I put together this short animation visualizing the core concept (attached).
What are your thoughts on using foundation models for complex biological simulation like this? What are the challenges and potential?
Stanford Report article covering the research: https://news.stanford.edu/stories/2025/04/digital-twin
The original study is in Nature: https://www.nature.com/articles/s41586-025-08790-w
submitted by /u/Michael_Lorenz_AI to r/learnmachinelearning