A software platform for real-time and adaptive neuroscience experiments – Nature Communications

Neuroscience is shifting from passive data collection to adaptive, closed-loop experiments that learn as they run. Improv, a real-time, Python-based platform, makes that leap practical. It fuses live data streams with on-the-fly models, supports robust visualization, and orchestrates experimental control—helping labs test hypotheses faster and intervene at precisely the right moments.

How improv works

Improv follows a streamlined actor-model architecture: each task (for example, camera acquisition, preprocessing, inference, visualization) is an independent “actor” running in its own process. Actors communicate via message passing that references objects stored in a shared in-memory datastore built on Apache Arrow’s Plasma. Instead of copying large arrays between processes, actors exchange lightweight pointers to data, cutting latency and memory overhead. Pipelines are defined as directed graphs of actors and queues, and the system is designed to isolate failures so a misbehaving actor doesn’t bring the session down. The result: long-running, concurrent processing with minimal lag.
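
The core pattern is easiest to see in miniature. The sketch below uses only the Python standard library, not improv's actual API or the Plasma store: a producer writes a frame into shared memory and sends a lightweight reference through a queue, and a consumer maps the same memory as a zero-copy view.

```python
# Minimal sketch of reference-passing between processes (standard library only;
# improv itself builds this on Apache Arrow's Plasma object store).
import numpy as np
from multiprocessing import Process, Queue, shared_memory

def consume(q):
    """Downstream 'actor': receive a reference and view the frame without copying."""
    name, shape, dtype = q.get()
    shm = shared_memory.SharedMemory(name=name)
    frame = np.ndarray(shape, dtype=dtype, buffer=shm.buf)   # zero-copy view of the data
    print("mean pixel value:", frame.mean())
    shm.close()

if __name__ == "__main__":
    frame = np.random.rand(512, 512).astype(np.float32)      # stand-in for an acquired frame
    shm = shared_memory.SharedMemory(create=True, size=frame.nbytes)
    np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)[:] = frame
    q = Queue()
    q.put((shm.name, frame.shape, str(frame.dtype)))          # the message is a reference, not the array
    worker = Process(target=consume, args=(q,))
    worker.start()
    worker.join()
    shm.close()
    shm.unlink()                                              # release the shared block
```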

Real-time analysis in zebrafish: from pixels to models and feedback

To validate end-to-end performance, the team replayed two-photon calcium imaging data from larval zebrafish exposed to whole-field motion stimuli. Improv ingested raw frames at acquisition rate (3.6 Hz), synchronized stimulus and imaging timelines, and processed neuronal activity in real time. A CaImAn-based actor extracted regions of interest and deconvolved activity traces. In parallel, an LNP (linear–nonlinear–Poisson) model actor performed streaming inference with a sliding 100-frame window and stochastic gradient descent, estimating response properties and functional connectivity across thousands of neurons. The online fit converged rapidly toward the offline solution, unlocking the option to stop experiments early without completing all stimulus repetitions.
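
Conceptually, the streaming LNP fit is a small SGD loop over the most recent frames. The sketch below is illustrative rather than the paper's implementation: it uses a single stimulus filter with an exponential nonlinearity and an assumed learning rate, whereas the full model also includes coupling terms between neurons; only the 100-frame window comes from the text.

```python
# Hedged sketch of streaming LNP inference: one SGD pass over a sliding window
# per incoming frame. Learning rate and nonlinearity are illustrative choices.
import numpy as np
from collections import deque

class StreamingLNP:
    def __init__(self, n_features, window=100, lr=1e-3):
        self.w = np.zeros(n_features)            # linear stage: stimulus filter
        self.window = deque(maxlen=window)       # sliding buffer of (stimulus, spike count)
        self.lr = lr

    def update(self, x, y):
        """Append the newest frame and take one SGD pass over the window."""
        self.window.append((x, y))
        for xi, yi in self.window:
            rate = np.exp(self.w @ xi)           # nonlinear stage: exponential link
            grad = (rate - yi) * xi              # gradient of the Poisson negative log-likelihood
            self.w -= self.lr * grad

    def predict_rate(self, x):
        return np.exp(self.w @ x)

model = StreamingLNP(n_features=20)
model.update(np.random.rand(20), y=2)            # one simulated frame: 20 regressors, 2 events
print(model.predict_rate(np.random.rand(20)))
```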

A PyQt-powered GUI actor refreshed plots and images up to 60 times per second, providing moment-to-moment visibility into data quality, tuning curves, and connectivity—mixing automation with operator oversight.

Predicting neural activity from behavior, live

Improv also reproduced a cornerstone result: unstructured movements strongly predict cortical activity. The platform co-streamed mouse behavioral video (30 fps) and two-photon signals, then reduced each 240×320 frame to 10 dimensions using a streaming method called proSVD. A ridge regression actor consumed the low-dimensional features to predict neural activity online. Within minutes, both the subspace and regression coefficients stabilized. The system visualized the first two proSVD dimensions alongside spatial regressors overlaid on the original video, revealing which regions of the image best explained neural fluctuations—insightful context for triggering targeted interventions in real time.
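
A rough sketch of that downstream computation is below. It swaps proSVD's streaming basis updates for a fixed projection (labeled as such), predicts a single neuron for simplicity, and updates ridge-regression sufficient statistics one frame at a time; the 240×320 frame size and 10 dimensions come from the text, everything else is a placeholder.

```python
# Illustrative pipeline: project each flattened behavior frame to 10 dimensions,
# then update ridge-regression statistics online. NOTE: a fixed QR basis stands in
# for proSVD, which updates the basis as frames stream in.
import numpy as np

k, lam = 10, 1.0                                  # latent dimensions (from the text), ridge penalty (assumed)
rng = np.random.default_rng(0)

seed_frames = rng.standard_normal((240 * 320, k)) # a few flattened 240x320 frames to seed the basis
Q, _ = np.linalg.qr(seed_frames)                  # fixed orthonormal basis (placeholder for proSVD)

XtX = np.zeros((k, k))                            # running X^T X
Xty = np.zeros(k)                                 # running X^T y for one neuron

for t in range(1000):                             # simulated stream of behavior frames
    frame = rng.standard_normal(240 * 320)
    z = Q.T @ frame                               # 10-dimensional behavioral features
    y = rng.standard_normal()                     # stand-in for the neural signal at this frame
    XtX += np.outer(z, z)
    Xty += z * y
    beta = np.linalg.solve(XtX + lam * np.eye(k), Xty)   # current ridge coefficients

print("ridge coefficients:", beta)
```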

Forecasting population dynamics for future-aware control

For motor cortex spike data, the same proSVD actor was dropped into a new pipeline via configuration changes—no code rewrite needed. After streaming dimensionality reduction, a probabilistic flow model, Bubblewrap, tiled the latent space with Gaussians and learned a transition matrix to predict trajectories forward in time. One-second-ahead forecasts (100 steps) remained highly accurate, with only an 11% drop versus one-step predictions. Such forecasts make it feasible to plan causal perturbations based on where population dynamics are heading—not just where they’ve been.
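
The forecasting step can be pictured as propagating a probability distribution over learned tiles. In the sketch below the tile centers and transition matrix are random placeholders rather than quantities fit by Bubblewrap; the point is the mechanics of rolling the distribution forward 100 steps and reading out an expected position.

```python
# Hedged sketch of multi-step forecasting with a tile-transition model: propagate
# soft tile assignments through a row-stochastic transition matrix, then take the
# probability-weighted average of tile centers as the predicted latent state.
import numpy as np

rng = np.random.default_rng(1)
n_tiles, dim = 20, 6
means = rng.standard_normal((n_tiles, dim))        # Gaussian tile centers (placeholders)
A = rng.random((n_tiles, n_tiles))
A /= A.sum(axis=1, keepdims=True)                  # a learned transition matrix would go here

alpha = np.zeros(n_tiles)                          # current soft assignment over tiles
alpha[3] = 1.0

def forecast(alpha, steps):
    """Roll the tile distribution `steps` transitions ahead; return the expected state."""
    for _ in range(steps):
        alpha = alpha @ A
    return alpha @ means

print("one step ahead:   ", forecast(alpha, 1))
print("one second ahead: ", forecast(alpha, 100))  # 100 steps ~ 1 s at the recording's step rate
```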

Bayesian optimization for smarter stimulus selection

Exploring rich stimulus spaces is often prohibitively slow. For zebrafish motion tuning, exhaustively presenting all 24×24 combinations of motion angles (one angle per eye) would have required hours of recording per plane. Improv replaced exhaustive sampling with Bayesian optimization (BO): images streamed over Ethernet via ZMQ, CaImAn provided responses, and a BO actor used a Gaussian process to estimate each neuron's tuning function and its uncertainty across conditions. The next stimulus was chosen to balance exploration and exploitation, iterating until a confidence threshold was met or a cap of 30 stimuli was reached.
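
The loop itself is compact. The sketch below assumes scikit-learn's Gaussian process regressor, an upper-confidence-bound acquisition rule, and a synthetic response function in place of the CaImAn traces streamed over ZMQ; the 24×24 stimulus grid and the 30-stimulus cap are from the text, while the kernel and confidence threshold are placeholders.

```python
# Hedged sketch of the Bayesian-optimization loop over a discrete stimulus grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

angles = np.linspace(0, 345, 24)                            # 24 motion angles per eye
grid = np.array([(a, b) for a in angles for b in angles])   # 24 x 24 = 576 candidate stimuli

def measure_response(stim):                                 # stand-in for a neuron's calcium response
    a, b = np.deg2rad(stim)
    return np.cos(a - np.pi / 2) + np.cos(b - np.pi / 2) + 0.1 * np.random.randn()

X, y = [grid[0]], [measure_response(grid[0])]               # seed with one stimulus
gp = GaussianProcessRegressor(kernel=RBF(length_scale=60.0), normalize_y=True)

for trial in range(30):                                     # cap of 30 stimuli, as in the text
    gp.fit(np.array(X), np.array(y))
    mu, std = gp.predict(grid, return_std=True)
    if std.max() < 0.05:                                    # assumed confidence threshold
        break
    nxt = grid[np.argmax(mu + 2.0 * std)]                   # UCB: balance exploitation and exploration
    X.append(nxt)
    y.append(measure_response(nxt))

print("estimated best stimulus:", grid[np.argmax(mu)])
```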

In tests against rigid sampling over a 144-condition subset, the online BO tuning curves matched offline GP fits well, locating the true peak for 93% of targeted neurons. Because imaging captures whole populations simultaneously, every BO trial updated tuning estimates for hundreds of cells at once, making it practical to optimize roughly 300 neurons with about 15 stimuli on average. Regional differences emerged: pretectal neurons favored matched angles across the two eyes, whereas tectal neurons peaked for converging or diverging motion, patterns mirrored in the algorithm's sampling choices. Because information accrued across successive optimizations, the approach was especially effective when tuning was correlated across the population.

Closed-loop optogenetics guided by online tuning

Improv also powered all-optical experiments combining two-photon calcium imaging (GCaMP6s, 920 nm) with two-photon photostimulation (rsChRmine, 1045 nm). After alignment, the pipeline ran rapid characterization of visually evoked tuning, then an Adaptive Targeting actor selected neurons for stimulation based on properties like direction preference and opsin expression. Coordinates from the CaImAn actor flowed directly to a Photostimulation actor for immediate execution. The system automated multiple repetitions and then moved on to new candidates as criteria updated in real time.
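
The targeting logic reduces to filtering the latest online estimates against the experimenter's criteria and handing coordinates onward. The sketch below is purely illustrative: the dictionary fields, thresholds, and queue are hypothetical stand-ins for the properties the Adaptive Targeting actor evaluates and for the Photostimulation actor's input.

```python
# Hypothetical sketch of adaptive target selection; field names and thresholds
# are illustrative, not improv's actual data structures.
from queue import Queue

def select_targets(neurons, preferred_dir, dir_tol=30.0, min_opsin=0.5, max_targets=10):
    """Return up to `max_targets` neuron coordinates matching the tuning and opsin criteria."""
    candidates = [
        n for n in neurons
        if abs(n["pref_direction"] - preferred_dir) <= dir_tol
        and n["opsin_level"] >= min_opsin
        and not n["already_stimulated"]
    ]
    candidates.sort(key=lambda n: n["opsin_level"], reverse=True)
    return [n["xy"] for n in candidates[:max_targets]]

stim_queue = Queue()                       # stands in for the Photostimulation actor's input
neurons = [
    {"pref_direction": 10.0, "opsin_level": 0.8, "already_stimulated": False, "xy": (112, 87)},
    {"pref_direction": 95.0, "opsin_level": 0.9, "already_stimulated": False, "xy": (64, 203)},
]
for xy in select_targets(neurons, preferred_dir=0.0):
    stim_queue.put(xy)                     # coordinates flow on for immediate execution
```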

Downstream effects varied: some neurons showed no response to any photostimulation; others responded selectively to specific targets; a subset exhibited consistent photostimulation-locked activity independent of the target identity. Comparing visual tuning with photostimulation responses exposed putative circuit motifs—e.g., forward-selective neurons activating each other—while also revealing cells more responsive to photostimulation than to visual input, and vice versa. This flexible, data-driven targeting opens doors to mapping functional connectivity and testing causality during the same session.

Why this platform matters

Improv brings modern software engineering to the rigors of closed-loop neuroscience: modular actors, zero-copy shared-memory messaging, fault isolation, and real-time visualization. It scales from single pipelines to complex, multi-phase experiments—fusing imaging, behavior, modeling, optimization, and control. Most importantly, it lets experiments adapt to what the brain is doing now, enabling earlier stopping, richer hypotheses, and precisely timed interventions. Future work could extend to higher-dimensional stimuli, joint sensory–optogenetic perturbations, behavior-in-the-loop designs, and rapid mapping of inter-area interactions—all within the same reusable framework.
