Evolutionary-scale enzymology enables exploration of a rugged catalytic landscape. Science (2025).

Duncan and the Pinney Lab used microfluidics to measure catalytic constants across hundreds of adenylate kinase orthologs and mutants, then combined the data with protein language models. The results revealed catalytic neighborhoods organized by domain architecture, opening new avenues for enzyme engineering.
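
Combining per-enzyme measurements with language-model representations can be reduced to a simple supervised-learning setup: embed each sequence, then regress catalytic constants on the embeddings. The sketch below is a minimal, hypothetical illustration of that idea, with random vectors standing in for real protein-language-model embeddings and a closed-form ridge fit standing in for the paper's actual modeling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: in practice each ortholog/mutant sequence would be
# embedded with a protein language model; random vectors play that role here.
n_enzymes, dim = 200, 64
embeddings = rng.normal(size=(n_enzymes, dim))
true_w = rng.normal(size=dim)
log_kcat = embeddings @ true_w + 0.1 * rng.normal(size=n_enzymes)

# Ridge regression from embedding space to log(kcat), in closed form.
lam = 1.0
A = embeddings.T @ embeddings + lam * np.eye(dim)
w = np.linalg.solve(A, embeddings.T @ log_kcat)

pred = embeddings @ w
r = np.corrcoef(pred, log_kcat)[0, 1]
print(f"train correlation: {r:.3f}")
```

With hundreds of measured enzymes, even a linear readout on top of frozen embeddings can reveal how catalytic activity varies across sequence space.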

Deep phenotypic profiling of neuroactive drugs in larval zebrafish. Nature Communications (2024).

Leo and collaborators trained deep metric learning models on zebrafish behavioral profiles of 650 neuroactive compounds, achieving robust scaffold hopping across species and chemical boundaries. Predictions were validated by a 58% hit rate in radioligand binding assays against human protein targets.
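
The core of deep metric learning is an objective that pulls profiles of similar compounds together and pushes dissimilar ones apart. A common choice is the triplet loss, sketched below on toy behavioral profiles; the feature bins and margin are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Metric-learning objective: same-phenotype profiles should sit closer
    together than different-phenotype profiles by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy behavioral profiles (hypothetical 4-bin motion features).
a = np.array([1.0, 0.0, 0.0, 0.0])   # compound A, replicate 1
p = np.array([0.9, 0.1, 0.0, 0.0])   # compound A, replicate 2
n = np.array([0.0, 0.0, 1.0, 0.0])   # unrelated compound

print(triplet_loss(a, p, n))  # constraint satisfied: loss is 0
```

Once trained, distances in the learned space let a novel scaffold be matched to known neuroactive compounds purely by behavioral similarity.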

Retrieval Augmented Docking Using Hierarchical Navigable Small Worlds. J Chem Inf Model (2024).

Brendan brought retrieval-augmented search into molecular docking, recovering 95% of virtual actives while screening only 10% of libraries of 100M+ molecules.
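
The retrieval-augmented idea is that a cheap similarity search can shortlist candidates so that only a fraction of the library needs expensive docking. The sketch below illustrates this with brute-force distances on synthetic fingerprints; in practice an approximate-nearest-neighbor index such as HNSW (e.g. via the `hnswlib` library) replaces the brute-force step to stay sublinear at 100M+ scale. All names and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy library: 1,000 molecules as random fingerprint vectors; a synthetic
# docking score (lower = better) correlated with distance to a query.
library = rng.normal(size=(1000, 32))
query = rng.normal(size=32)
dists = np.linalg.norm(library - query, axis=1)
dock_score = dists + 0.1 * rng.normal(size=1000)

# Retrieval-augmented screening: rank by cheap similarity to the query
# and dock only the closest 10% instead of the full library.
shortlist = set(np.argsort(dists)[:100])

# How many of the 50 best-docking molecules land in the shortlist?
true_top = set(np.argsort(dock_score)[:50])
recall = len(true_top & shortlist) / 50
print(f"recall of top actives at 10% screened: {recall:.0%}")
```

The quality of the retrieval step determines how much of the library can be skipped without losing actives, which is exactly the recall-versus-cost trade-off the paper quantifies.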

Learning fast and fine-grained detection of amyloid neuropathologies from coarse-grained expert labels. Communications Biology (2023).

Daniel and collaborators across UC Davis, UCLA, and Emory trained amyloid detection models without human-drawn bounding boxes, matched expert neuropathologist performance, and built a model that runs in minutes on a standard workstation without a GPU.
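
Training a fine-grained detector from coarse labels is often framed as multiple-instance learning: only a region-level label is available, so the region's prediction is pooled from patch-level scores. The snippet below is a hypothetical, minimal illustration of that framing, not the paper's actual architecture.

```python
import numpy as np

def region_score(patch_scores):
    # Max-pooling multiple-instance assumption (an illustrative stand-in):
    # a region is labeled positive if any single patch scores positive.
    return float(np.max(patch_scores))

negative_region = np.array([0.05, 0.10, 0.02])
positive_region = np.array([0.05, 0.92, 0.10])  # one amyloid-bearing patch

print(region_score(negative_region), region_score(positive_region))
```

Under this framing, gradients from coarse labels still flow to individual patches, which is what lets the model localize pathology it was never explicitly shown.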

Trans-channel fluorescence learning improves high-content screening for Alzheimer’s disease therapeutics. Nature Machine Intelligence (2022).

Daniel and collaborators used deep learning to predict fluorescent signals from existing markers, identifying novel compounds that block tau aggregation which traditional screening had missed.
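
Trans-channel learning treats one fluorescence channel as a learnable function of the channels that were actually acquired. The sketch below reduces this to its simplest possible form, a per-pixel linear map fit by least squares on synthetic paired channels; the paper's models are deep image-to-image networks, so everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy paired microscopy channels: the target channel (e.g. a stain that was
# not acquired) is an unknown pixelwise function of two acquired channels.
h = w = 16
ch1 = rng.random((h, w))
ch2 = rng.random((h, w))
target = 0.7 * ch1 + 0.3 * ch2 + 0.02 * rng.normal(size=(h, w))

# Trans-channel learning, minimal form: fit a per-pixel linear map from
# the acquired channels to the unseen channel.
X = np.stack([ch1.ravel(), ch2.ravel()], axis=1)
coef, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
pred = (X @ coef).reshape(h, w)

err = np.abs(pred - target).mean()
print(f"mean absolute error: {err:.3f}")
```

Predicting a channel instead of staining for it frees up imaging channels in a screen, which is what lets one assay probe more biology per well.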