Shawn Mikula on Brain Preservation Protocols and Extensions

Biography: Shawn Mikula (Lifeboat Foundation Bio) is a neuroscientist devoted to comprehensively mapping mammalian whole-brain connectivity. He completed his Ph.D. in neuroscience at the Johns Hopkins University School of Medicine and the Krieger Mind/Brain Institute in Baltimore, Maryland. He subsequently worked with Ted Jones at the University of California, Davis, as the architect of the BrainMaps project, an interactive multiresolution next-generation brain atlas for various mammalian species. During subsequent postdoctoral work with Winfried Denk at the Max Planck Institute for Medical Research in Heidelberg, Germany, he developed new methods for staining and imaging whole mouse brains using serial block-face electron microscopy, and he was recently first author on a tour de force publication on ultrastructurally mapping the whole mouse brain at single-axon resolution. Longer term, he aims to pioneer high-throughput ultrastructural whole-brain mapping techniques for myriad mammalian species, including primates.


Note: this was a joint interview by Oge Nnadi and Andy McKenzie.

BPF: In broad strokes, how does your BROPA method differ from the wbPATCO protocol and from the other candidate protocols for whole-brain preservation?

Shawn Mikula: The wbPATCO protocol stains myelin sheaths well and is thus useful for tracing myelinated axons in volume electron microscopy datasets. However, it does not yield high cellular-membrane contrast, with the result that unmyelinated processes are largely untraceable. Also, the ultrastructural preservation is not very good, which manifests as frequent membrane ruptures. The BROPA protocol solves the problems of wbPATCO: membrane contrast is uniformly high and ultrastructural preservation is very good throughout the brain. Importantly, the BROPA protocol is the only whole-brain preparation that appears suitable for whole-brain electron microscopic circuit reconstruction, due to high neurite traceability and reliable synapse detection.

Biological preservation generally involves either chemically cross-linking proteins and lipids or, alternatively, vitrification, both of which can be considered fixing or freezing the relevant biomolecules in place for long-term preservation, ideally soon after death to minimize post-mortem tissue degradation. Other candidate protocols for whole-brain preservation, besides BROPA, use different combinations of heavy-metal staining, embedding (or plastination), aldehyde (generally di-aldehyde) fixation, and cryonics, and deliver the relevant chemical species throughout the brain by either perfusion or incubation. Generally, the first step involves perfusion with either aldehydes or vitrification solutions, but after that there’s a lot of mixing and matching that can be done; so, for instance, the following are all valid protocols (see the sketch after this list):

a) Perfuse with vitrification solutions and gradually lower the temperature to induce vitrification. This is the standard cryonics approach.

b) Perfuse with di-aldehydes to cross-link proteins. Then perform a).

c) Perfuse with heavy-metal staining solution (this was tried by Palay in the 1960s). Then remove brain and embed in epoxy.

d) Perfuse with di-aldehydes to cross-link proteins. Then remove brain and embed in epoxy (without staining).

e) Perfuse with di-aldehydes to cross-link proteins. Then remove brain, stain with heavy metals through incubation, and embed in epoxy (this is used in wbPATCO and BROPA).
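To make the mix-and-match structure concrete, here is a minimal, purely illustrative Python sketch of the five pipelines above; the step labels are shorthand for the interview text, not lab instructions:

```python
# Toy model of the mix-and-match structure of protocols (a)-(e).
# Step labels paraphrase the interview; they are not lab instructions.
PROTOCOLS = {
    "a (standard cryonics)": [
        "perfuse vitrification solution", "cool gradually to vitrify"],
    "b": [
        "perfuse di-aldehydes", "perfuse vitrification solution",
        "cool gradually to vitrify"],
    "c (Palay, 1960s)": [
        "perfuse heavy-metal stain", "remove brain", "embed in epoxy"],
    "d": [
        "perfuse di-aldehydes", "remove brain", "embed in epoxy"],
    "e (wbPATCO / BROPA)": [
        "perfuse di-aldehydes", "remove brain",
        "incubate in heavy-metal stains", "embed in epoxy"],
}

# Print each pipeline as an ordered sequence of steps.
for label, steps in PROTOCOLS.items():
    print(f"{label}: " + " -> ".join(steps))
```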

BPF: How did you arrive at your approach?

Shawn Mikula: My goal at the outset was to have a brain suitable for high-throughput electron microscopic imaging, which narrowed the search space by eliminating vitrification techniques. The two main options left were to perfuse with heavy metals, or to perfuse with aldehydes, remove the brain, and then incubate in heavy metals. The former was ruled out because there was evidence that not all of the heavily myelinated white matter can be stained, it was relatively unreliable, and, at least for single-step osmium tetroxide perfusions, the membrane contrast was not sufficiently high to allow for high-throughput imaging. This left one approach: perfuse with aldehydes, remove the brain, and incubate in several stain solutions prior to embedding in epoxy. For BROPA, there were still a lot of details to be worked out for the staining incubations, and this involved selectively sampling parts of a very large parameter space.

BPF: Your paper discusses how some ascending axons are surprisingly unmyelinated. What are the other ways, if any, in which your data set could be used by those within the myelin biology community?

Shawn Mikula: The fine architecture of myelinated axons comprising white matter pathways is an interesting topic that could be partly addressed by the cortico-striatal dataset in the paper. Quantifying the frequencies and types of different myelination patterns from different types of neurons is another. Examining the “grid-like” organization of myelinated axons in cortex is a third. All of these topics can be explored using the cortico-striatal dataset, though of course a larger dataset would be preferable in order to draw stronger conclusions.

BPF: The end of your paper highlights the need for better automated analysis methods that can reconstruct brain structures from serial EM images. What’s the best way for interested computational people to get involved in this type of research?

Shawn Mikula: In the coming weeks, I’ll make BROPA datasets available online (at www.connectomes.org) with ground-truth annotations. This will give analysts some material to work with. From my analyses so far, it appears that automated detection of cell nuclei and synapses can be made fairly reliable. The main challenge is connecting each synapse to the appropriate pre- and post-synaptic nucleus. The main strategy that has been used for automated analysis and circuit reconstruction is machine learning using convolutional networks. However, this approach does not perform reliably, and alternatives should be explored. By providing public datasets with ground-truth annotations, I hope to encourage more creative approaches to automated circuit reconstruction, because we will need them once the whole mouse brain is imaged.
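For readers unfamiliar with this strategy: convolutional networks for circuit reconstruction are typically trained as dense voxel classifiers, e.g., predicting a membrane probability for every voxel of the EM volume, which is then used to segment neurites. A minimal sketch of that idea in PyTorch (the framework choice and layer sizes are our illustration, not anything from the paper) might look like:

```python
# Minimal sketch of a 3D convolutional network for dense voxel
# classification (membrane vs. non-membrane) on EM volumes.
# Layer sizes and shapes are illustrative only.
import torch
import torch.nn as nn

class MembraneNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=1),  # per-voxel membrane logit
        )

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        return self.features(x)

net = MembraneNet()
volume = torch.randn(1, 1, 32, 64, 64)      # a toy EM sub-volume
membrane_prob = torch.sigmoid(net(volume))  # per-voxel membrane probability
print(membrane_prob.shape)                  # torch.Size([1, 1, 32, 64, 64])
```

A trained network of this kind only produces local boundary predictions; turning those into a circuit still requires the harder global step Mikula describes, linking each synapse to its pre- and post-synaptic cell.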

BPF: How much, in time and money, would it cost to extend your protocol to a larger mammal such as a pig as quickly as possible?

Shawn Mikula: The extension of the BROPA method to a pig brain faces a financial hurdle due to the high cost of osmium tetroxide, which is about $30/g. For a typical pig brain weight of about 180 g (according to online sources), about the same weight of osmium tetroxide is required for the BROPA protocol, which would cost $5,400. For a 1.2 kg human brain, this rises to $36,000. Besides the cost, there is still the question of whether BROPA can scale to large brains. With proper modifications to the protocol to avoid osmium tetroxide-related ultrastructural damage and epoxy-infiltration issues, it probably can, but this remains to be demonstrated. Assuming BROPA works for the pig brain, we must also factor in the time component, which is substantial since stain and epoxy infiltration rely on diffusion throughout the sample. For a mouse brain, which is about 1 cm in linear dimension, the diffusion time is about 4 days per incubation step. The pig brain is about 6 cm across, and since diffusion time scales with the square of the linear dimension, each step would take about 36 times as long. The total BROPA protocol for a mouse brain, which consists of a series of incubation steps, takes about 3 weeks, so the pig brain would take 108 weeks, or about 2 years. There are various ways to reduce this overall time; for example, if we retain only the essential steps required for ultrastructural preservation, the time could be kept under one year.
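The arithmetic in this answer follows directly from the quoted figures; here is a quick back-of-the-envelope check in Python (all inputs are the interview’s approximations):

```python
# Back-of-the-envelope reproduction of the cost and time scaling above.
OSMIUM_COST_PER_G = 30.0   # USD per gram of osmium tetroxide

def stain_cost(brain_mass_g):
    # BROPA needs roughly the brain's own weight in osmium tetroxide.
    return brain_mass_g * OSMIUM_COST_PER_G

print(stain_cost(180))    # pig brain, ~180 g    -> 5400.0
print(stain_cost(1200))   # human brain, ~1.2 kg -> 36000.0

# Diffusion-limited incubation time scales with the square of the
# brain's linear dimension (t ~ L^2 for diffusion).
MOUSE_PROTOCOL_WEEKS = 3.0   # full BROPA protocol, ~1 cm mouse brain

def protocol_weeks(linear_cm, mouse_cm=1.0):
    return MOUSE_PROTOCOL_WEEKS * (linear_cm / mouse_cm) ** 2

print(protocol_weeks(6))   # pig brain, ~6 cm -> 108.0 weeks (~2 years)
```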

In terms of accelerating the extension of BROPA, or a related epoxy-embedding-based method, to large brains, this is mostly a question of money for chemicals and of having an adequate supply of well-fixed brains. I’m definitely interested in extending epoxy-based brain preservation methods to large brains and would benefit from having a source of pig, monkey, or human brains that are well fixed with glutaraldehyde.

BPF: Do you foresee any particular problems in extending a large mammal protocol to human beings? For instance, would the regulatory hurdles for acquiring human brain samples to test on be a significant expense?

Shawn Mikula: If BROPA or a related embedding-based method works with pig brains, then it would be expected to work with human brains as well. Ideally, in terms of optimizing human brain preservation, di-aldehyde perfusion would commence before death, but this does not appear likely due to legal and moral considerations. Thus, post-mortem di-aldehyde perfusion must be made very reliable, which may present a challenge. In terms of regulatory hurdles, Alcor has already successfully navigated this path and a similar model could be adopted for combined aldehyde-plastination approaches for human brain preservation.

BPF: What is the ultimate goal of your brain preservation research?

Shawn Mikula: My ultimate goal is to have complete structural mappings at the nanoscale level of mammalian brains, beginning with the mouse and culminating in non-human primates or even humans. Ideally, these nanoscale mappings would include not only the complete neural circuitry but also all proteins and other biomolecules, distinguishable by their unique structure. However, current high-throughput nanoscale imaging technologies place limits on exactly what can be imaged so that, at least for the time being, we must be content with neural circuit mapping.

BPF: It is interesting to compare the percentage of biomolecules that are retained by various brain preservation protocols. For example, the 2013 paper describing CLARITY claims that 24% of proteins are lost by paraformaldehyde fixation, and 8% by CLARITY. Do you expect that your technique would be in the same ballpark as these two?

Shawn Mikula: There may be less protein extraction due to the use of glutaraldehyde, which is a superior protein cross-linking agent compared to paraformaldehyde and the acrylamide-based CLARITY.

BPF: In a 2004 discussion on the website Longecity, you note that you don’t think humans will ever be able to upload their consciousnesses, in part because “the intricacy of the brain cannot be accurately modeled by computer, only approximated and abstracted,” and therefore “sticking to ‘wetware’ is the way to go.” Has your opinion on this changed at all over the past decade? Why or why not?

Shawn Mikula: I’m happy to speculate on mind uploading, though my opinion has not fundamentally changed. The question of uploading consciousness can be broken down into two parts: 1) can you accurately simulate the mind based on complete structural or circuit maps of the brain, and 2) assuming you can run accurate simulations of the mind based on these structural maps, are they conscious? I think the answer to both is probably ‘no’.

With regard to 1), the circuit maps will not contain information about protein distributions and post-translational modifications (such as phosphorylation, which affects the behavior of proteins). This means that the models you want to simulate are under-determined and will contain an astronomical number of parameters for which you have no information. If these parameters are important to overall network function, especially if they involve particular receptor subunit types and distributions and neurochemical details, then the simulation will be inaccurate. C. elegans comes to mind as a cautionary case, where simply knowing the complete neural circuit structure is not sufficient for accurate simulations.

With regard to 2), this is very speculative because we do not know the basis of consciousness. Let’s take a widely accepted position and assume that it is a type of patterned neural activity; it does not follow that simulating this neural activity on a typical computer generates consciousness, even if you’re a functionalist. The reason is that simulating the neural activity on a von Neumann (or related) computer architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem, but we would still not be certain whether we are reproducing the correct causal structure at the level of detail required for consciousness. And this is all assuming functionalism is correct; if it’s not and type physicalism is true, then no amount of computer hardware-based simulation will ever be conscious.

BPF: Very interesting! Some people certainly do define mind uploading based on the parameters that you have described, i.e., circuit mapping only. In the Whole Brain Emulation Roadmap [pdf], the authors define various levels of molecular detail that may be required to perform what is also sometimes referred to as mind uploading, including the electrophysiological (i.e., ion channel), metabolic, proteomic, and proteome-complex-state levels. Do you think that mind uploading might be theoretically feasible if it were defined as such (leaving aside questions of practical feasibility)?

Shawn Mikula: If by mind uploading you mean running realistic simulations, then yes, it is conceivable, provided the appropriate levels of molecular detail are included. However, there is much uncertainty about what details are needed, how to obtain those details for an individual brain, and how to evaluate the validity of the simulation.

BPF: Thank you for the fascinating responses, Dr. Mikula!
