Abstract
BigNeuron is an open community bench-testing platform with the goal of setting open standards for accurate and fast automatic neuron tracing. We gathered a diverse set of image volumes across several species that is representative of the data obtained in many neuroscience laboratories interested in neuron tracing. Here, we report gold standard manual annotations generated for a subset of the available imaging datasets and quantify tracing quality for 35 automatic tracing algorithms. The goal of generating such a hand-curated diverse dataset is to advance the development of tracing algorithms and enable generalizable benchmarking. Together with image quality features, we pooled the data in an interactive web application that enables users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering, visualization of imaging and tracing data, and benchmarking of automatic tracing algorithms in user-defined data subsets. The image quality metrics explain most of the variance in the data, followed by neuromorphological features related to neuron size. We observed that diverse algorithms can provide complementary information toward accurate results and developed a method that iteratively combines algorithms to generate consensus reconstructions. The resulting consensus trees provide estimates of the ground truth neuron structure that typically outperform single algorithms on noisy datasets, although individual algorithms may still outperform the consensus strategy under specific imaging conditions. Finally, to aid users in predicting the most accurate automatic tracing results without manual annotations for comparison, we used support vector machine regression to predict reconstruction quality given an image volume and a set of automatic tracings.
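The abstract's closing idea, ranking automatic tracings by predicted quality when no manual annotation is available for comparison, amounts to a standard supervised regression problem. The sketch below is a minimal, hypothetical illustration using scikit-learn's SVR; the synthetic features (stand-ins for image quality metrics such as SNR or contrast, plus tracing summaries), the quality target, and the train/test split are assumptions made for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch: predict tracing quality from image/tracing features
# with support vector regression. All data here is synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# One row per (image volume, tracing algorithm) pair; columns would be
# image quality metrics plus simple morphology summaries of the tracing.
X = rng.normal(size=(200, 5))
# Stand-in target: agreement with a gold standard annotation, in [0, 1].
y = rng.uniform(size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features, then fit an RBF-kernel SVR.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Rank held-out candidate tracings by predicted quality.
predicted_quality = model.predict(X_test)
best = int(np.argmax(predicted_quality))
print(f"Predicted best tracing among held-out candidates: index {best}")
```

In practice, the regression target would be an agreement score between each automatic tracing and a gold standard reconstruction on annotated training volumes; the fitted model could then rank candidate tracings of unseen volumes without any manual annotation.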
| Original language | English |
| --- | --- |
| Pages (from-to) | 824-835 |
| Number of pages | 12 |
| Journal | Nature Methods |
| Volume | 20 |
| Issue number | 6 |
| DOIs | |
| State | Published - Jun 2023 |
Funding
This project was supported by the Allen Institute for its initialization and a series of events, and by a DOE Oak Ridge National Laboratory Leadership Computing award to H.P.; the cross-platform bench test was also supported intensively at Lawrence Berkeley National Laboratory, the Allen Institute for Brain Science, the Blue Brain Project at EPFL, Southeast University at Nanjing, and various other facilities. This project also used large-scale display wall and immersive virtual reality facilities at Imperial College London, Oak Ridge National Laboratory, the Janelia Research Campus of HHMI, and Southeast University, and was supported by Tencent Inc. and Southeast University for interactive online analysis. Much of the work reported here is a joint community effort with many sponsors over 7 years.

The authors acknowledge Prabhat, Z. Wan, J. Yang, H. Zhou, A. Narayanaswamy, S. Zeng, P. Glowacki, D. Jin, Z. Zheng, P. Hong, T. Zeng, R. Li, H. Ikeno, Y.-T. Ching, T. Quan, J.-F. Evers, C. Murtin, S. Gao, Y. Zhu, Y. Yang, H. Ai, S. Ishii, E. Hottendorf, T. Kawase, H. Dong and a number of other colleagues for assistance in developing and porting the automated reconstruction algorithms, and for discussion. The authors thank S. Sorensen for assistance in discussing and tracing some gold standard neuron reconstructions, and R. Kanzaki (University of Tokyo), D. Miyamoto (University of Tokyo), R. Wong (University of Washington), Y. Wang (Allen Institute for Brain Science), E. Lein (Allen Institute for Brain Science), C. Bass (King's College London), S. Danzer (Cincinnati Children's Hospital) and many other colleagues for providing neuron image datasets. The authors also acknowledge A. Jones for suggesting the project name and for support throughout the project, and thank R. Yuste and D. Van Essen for discussion; J. Isaac and K. Moses for support from the Wellcome Trust to organize the University of Cambridge hackathon; G. Rubin and N. Spruston for support, data sharing and event organization at the Janelia Research Campus of HHMI; INCF for financial support for the organization of a series of meetings; and Beijing University of Technology, Imperial College London and Southeast University for support of hackathons and workshops. The authors also thank X. Zhao for assistance in hosting the Shiny app on the neuroXiv website; P. Qian, Z. Zhao and X. Chen for assistance in formatting the manuscript; and A. Carpenter for discussion in organizing a tracing hackathon.

The authors acknowledge the support of the National Science and Technology Innovation 2030 – 'Brain Science and Brain-Inspired Research' Program of China (Grants 2021ZD0204002 and 2022ZD0205200 to H.P., L.M.-G., Y. Liu and Z.R.). B.Y. was supported by NIH grant R01 EB028159. G.A.A. was supported by NIH grants R01 NS39600, RF1 MH128693 and R01 NS86082. A.-S.C. was supported by the Higher Education Sprout Project co-funded by the Ministry of Education and the Ministry of Science and Technology in Taiwan. M.S. was supported by the European programs JPND TransPathND (ANR-17-JPCD-0002) and EuroNanoMed III MoDiaNo (ANR-18-ENM3-0002), and by CNES. L.G. was supported by JST Moonshot R&D Grant Number JPMJMS2011, Japan. The authors acknowledge the National Center for High-Performance Computing in Taiwan for managing the FlyCircuit data. The funders had no role in the study design, data collection and analysis, decision to publish or preparation of the manuscript.