3D reconstruction of living brain tissue down to the level of individual synapses would create opportunities for decoding the dynamics and structure-function relationships of the brain’s complex and dense information-processing network. However, it has been hindered by insufficient 3D resolution, inadequate signal-to-noise ratio, and a prohibitive light burden in optical imaging, whereas electron microscopy is inherently static. Here we solved these challenges by developing an integrated optical/machine-learning technology, LIONESS (Live Information-Optimized Nanoscopy Enabling Saturated Segmentation). It leverages optical modifications to stimulated emission depletion (STED) microscopy in comprehensively, extracellularly labelled tissue, together with prior information on sample structure incorporated via machine learning, to simultaneously achieve isotropic super-resolution, high signal-to-noise ratio, and compatibility with living tissue. This enables dense, deep-learning-based instance segmentation and 3D reconstruction at the synapse level, incorporating molecular, activity, and morphodynamic information. LIONESS opens up avenues for studying the dynamic functional (nano-)architecture of living brain tissue.
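
To illustrate the kind of two-stage computational workflow the abstract describes (learned restoration of a low-light volumetric image, followed by deep-learning-based instance segmentation), a minimal sketch is given below. It is not the authors’ implementation: the network, its weights, the thresholding step, and the connected-component labelling are all hypothetical placeholders, written in PyTorch and SciPy only to make the data flow concrete.

```python
# Hypothetical sketch of the described pipeline, NOT the LIONESS code:
# (1) a learned restoration network maps a noisy, light-dose-limited STED
#     volume to a denoised estimate; (2) the restored volume is segmented
#     and split into instances (here crudely, via connected components).
import torch
import torch.nn as nn
from scipy import ndimage


class ConvBlock3D(nn.Module):
    """Two 3D convolutions with ReLU, the basic unit of a small U-Net-style net."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class RestorationNet(nn.Module):
    """Toy encoder-decoder standing in for a trained image-restoration model."""
    def __init__(self):
        super().__init__()
        self.enc = ConvBlock3D(1, 16)
        self.down = nn.MaxPool3d(2)
        self.mid = ConvBlock3D(16, 32)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec = ConvBlock3D(32 + 16, 16)
        self.out = nn.Conv3d(16, 1, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.out(d)


# Synthetic volume standing in for a raw STED stack: (batch, channel, z, y, x).
raw = torch.rand(1, 1, 32, 64, 64)

with torch.no_grad():
    restored = RestorationNet()(raw)          # in practice: a model trained on paired data
foreground = torch.sigmoid(restored) > 0.5    # placeholder for a segmentation network

# Instance labels from connected components of the foreground mask; dense
# segmentation of intertwined neurites would require a dedicated instance model.
labels, n = ndimage.label(foreground.squeeze().numpy())
print(f"found {n} connected components")
```

The two stages are deliberately decoupled in this sketch: restoration operates on raw intensities, while instance extraction operates on the restored volume, mirroring the separation between image formation and segmentation implied in the text.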