Interview: AI compute startup TBC details biological computing platform linking living neurons and machine learning
As AI systems push up against the practical limits of silicon, a wave of startups is rethinking what “compute” can be. Among the boldest is The Biological Computing Company (TBC), founded in 2022 by CEO Alex Ksendzovsky and COO Jon Pomeraniec. TBC’s platform couples living neurons with modern machine learning to improve performance, efficiency, and adaptability—treating biology as a complementary compute substrate rather than a silicon replacement.
In practice, TBC encodes real-world data—images, video, and other signals—into neuronal cultures, then converts the cells’ responses into machine-readable representations that plug into state-of-the-art AI models. The approach targets compute-hungry domains such as computer vision, generative video, and dynamic world models, where conventional scaling is hitting economic and energy ceilings.
In February, TBC announced a US$25 million seed round led by Primary Ventures and opened a flagship lab in San Francisco’s Mission Bay. In this interview with DIGITIMES Asia, the co-founders explain their roadmap, engineering choices, and why they believe biological systems can open a new scaling path for AI.
TBC COO Jon Pomeraniec (left) and CEO Alex Ksendzovsky (right). Credit: TBC
Vision and impact
Q: What’s TBC’s long-term vision for integrating living neurons with AI?
A: We’re creating a new class of compute that weaves biological neural networks into modern AI systems. In the next five years, our aim is to show that biology enhances silicon, improving stability, efficiency, and adaptability in areas like generative video, interactive world models, computer vision, and bio-inspired compute. Over a decade, we expect real-time biological compute to sit in the inference loop—enabling persistent, continuously updating world models; long-horizon generative video; robotics with sensorimotor control; and pattern completion for simulations, time-series forecasting, and complex systems. The goal is a practical scaling path beyond brute-force silicon.
Core technology
Q: How do you turn neural activity into useful signals for AI models?
A: We connect living neurons to frontier models to improve quality per unit of compute. Our products currently span two lines: 1) neural-based optimizers and 2) algorithm discovery. For optimizers, we encode inputs (e.g., images and video) into neuronal cultures, record the high-dimensional responses, and translate them into model-ready representations via modular adapters. In generative and world models, this boosts video length and fidelity without increasing model size or retraining cost; in computer vision, it improves classification while reducing compute. In parallel, our algorithm discovery work lifts biological principles into new AI designs, strengthening today’s architectures rather than replacing them.
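For readers who want a concrete picture of the adapter pipeline described above, the sketch below shows one plausible shape it could take: encode an input, record the culture's response, and project that response into a model-ready embedding. All names, dimensions, and the stand-in recording function are illustrative assumptions; TBC has not published its interfaces, and this is not the company's code.

```python
# Hypothetical sketch of a "modular adapter" pipeline, not TBC's actual interface.
# The recording step here is a random stand-in for real neural activity.
import torch
import torch.nn as nn

RESPONSE_DIM = 4096   # assumed dimensionality of a recorded culture response
EMBED_DIM = 768       # assumed embedding size expected by the downstream model


def record_culture_response(stimulus: torch.Tensor) -> torch.Tensor:
    """Stand-in for encoding a stimulus into a neuronal culture and recording
    the resulting activity; here it simply returns random data of the right shape."""
    batch = stimulus.shape[0]
    return torch.randn(batch, RESPONSE_DIM)


class NeuralResponseAdapter(nn.Module):
    """Maps recorded culture responses into embeddings a generative model can consume."""

    def __init__(self, response_dim: int = RESPONSE_DIM, embed_dim: int = EMBED_DIM):
        super().__init__()
        self.proj = nn.Sequential(
            nn.LayerNorm(response_dim),
            nn.Linear(response_dim, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, response: torch.Tensor) -> torch.Tensor:
        return self.proj(response)


if __name__ == "__main__":
    frames = torch.randn(2, 3, 224, 224)            # a small batch of input frames
    response = record_culture_response(frames)       # recorded activity (simulated here)
    embedding = NeuralResponseAdapter()(response)    # model-ready representation
    print(embedding.shape)                           # torch.Size([2, 768])
```

The point of the sketch is only the flow from stimulus to recorded activity to a model-ready embedding; the actual encoding, recording hardware, and adapter architecture are not public.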
Stability and reliability
Q: Neuronal cultures can be fragile. How do you maintain stability over time?
A: We’ve built controlled environments (media and culture conditions) to support cell viability for up to a year. Advances in regenerative medicine and bioengineering make reliable sourcing and passaging routine. We are pursuing partnerships to ensure sustainable supplies of rat cortical neurons and induced pluripotent stem cell (iPSC)-derived neurons, enabling effectively continuous availability.
Scalability and production
Q: What are the challenges to scaling from lab to production?
A: We’re already commercially available and focused on helping customers deploy generative models with our adapters, with key milestones planned in the first half of 2026. Today’s products don’t face significant manufacturing constraints: the biological systems are used to generate adapters once, and the deliverable to customers is software—so the software layer scales digitally. Longer-term, as we pursue tighter biology–silicon integration, larger-scale sourcing and production of neural systems will matter. We’re developing strategies leveraging stem-cell-derived neurons and scalable biological manufacturing to support future needs.
Ethics and regulation
Q: How do you address sourcing, ethics, and sentience concerns?
A: Ethics are core to our process. We use neurons sourced through lawful, well-documented biomedical supply chains with full chain-of-custody transparency—without coercion or exploitation—and we hold partners to the same standards. On sentience: there is no sentience in the dish, and our systems are explicitly designed so sentience is not possible. These controlled cultures lack the structures and context required for consciousness. We maintain control over compute, output, and use cases and operate with documented safeguards and external expert review, aligned with established biomedical ethics frameworks.
Performance and benchmarking
Q: What advantages have you measured in efficiency or quality?
A: Biological computation offers a different path: better results without simply scaling parameters and power. By integrating neural dynamics into ML pipelines, we increase quality per unit of compute and lower energy and cost at a given performance level. In tests, our model achieved a 23× compute-efficiency improvement for image reconstruction and generated interactive video twice the length produced by benchmark models. On the Minecraft Oasis generative model, our adapter delivered an average 19% image-quality improvement over the base model while maintaining quality over extended durations. We publish more detail on our technical blog.
Comparison with other advanced approaches
Q: How does biological computing compare to neuromorphic or quantum systems?
A: Biological and quantum computing are complementary. Both depart from rigid determinism, but biology doesn’t require extreme cooling or large-scale error correction, making it more adaptable for near-term, real-world AI. That practicality lets TBC deliver efficiency gains today across vision, generative systems, and dynamic decision architectures. We already have customers deploying our technology.
Model integration
Q: Which models can your adapters support, and what’s next?
A: Our platform translates neural dynamics into deployable software that integrates with production-grade AI. We began with diffusion transformers and now support broader generative and interactive models. The platform is inherently multimodal, and our adapters increase stability, coherence, and visual quality while reducing compute requirements—establishing a practical bridge between biological computation and modern AI infrastructure.
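As an illustration of what integrating an adapter with a diffusion transformer could look like, the hypothetical sketch below cross-attends latent tokens to an adapter embedding inside one transformer block. The module names, shapes, and conditioning scheme are assumptions made for clarity, not a description of TBC's implementation.

```python
# Hypothetical illustration of conditioning a diffusion-transformer block on
# adapter embeddings; shapes and design choices are assumptions, not TBC's code.
import torch
import torch.nn as nn


class ConditionedBlock(nn.Module):
    """A single transformer block whose tokens cross-attend to adapter embeddings."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Self-attention over the latent tokens of the denoising step.
        x = self.norm1(tokens)
        tokens = tokens + self.self_attn(x, x, x, need_weights=False)[0]
        # Cross-attention injects the adapter embedding as conditioning.
        x = self.norm2(tokens)
        tokens = tokens + self.cross_attn(x, cond, cond, need_weights=False)[0]
        return tokens + self.mlp(self.norm3(tokens))


if __name__ == "__main__":
    latent_tokens = torch.randn(2, 256, 768)   # video-latent tokens at one denoising step
    adapter_embed = torch.randn(2, 1, 768)     # embedding from a neural-response adapter
    out = ConditionedBlock()(latent_tokens, adapter_embed)
    print(out.shape)                           # torch.Size([2, 256, 768])
```

Cross-attention is only one plausible way to deliver such conditioning; other schemes (for example, concatenating the embedding with the model's existing conditioning tokens) would serve the same illustrative purpose.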
What’s ahead
Q: Final thoughts on potential and near-term impact?
A: We’re building a reliable platform that makes biological intelligence usable at scale, starting with real products and tangible value today. By training living neurons to process information—and encoding what we learn into software—we aim to make training and inference faster, cheaper, and more efficient. The systems we aspire to already exist in nature; our job is to integrate them responsibly and effectively with silicon to solve real problems and open a new path for scaling intelligence.