2023

A group of scientists, including several at Harvard, has dived deeper into the mammalian brain than ever before by categorizing and mapping, at the molecular level, all of its thousands of different cell types.

The researchers reported their work in Nature, through a series of 10 papers — six with Harvard affiliations. It’s part of the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies initiative, which so far has focused on mice; future phases will shift to humans and other primates.

Mammal brains house billions of cells, each defined by the genes they express. This complexity is why true understanding of many brain functions, including molecular mechanisms that underlie neurological diseases, remains so elusive.

To create the first molecularly defined cell atlas of the whole mouse brain, a team led by Harvard’s Xiaowei Zhuang identified and spatially mapped thousands of unique cell types, most of which had never previously been characterized.

“We identified 5,000 transcriptionally distinct cell populations,” said Zhuang, the David B. Arnold Professor of Science and a Howard Hughes Medical Institute investigator. “Suffice it to say that the level of diversity we identified is really extraordinary.”

The brain-wide atlas, which catalogs cell types, their distribution, and their interactions, could serve as a starting point for scientists studying particular brain functions or diseases. Someday the basic outlines of the atlas could be applied to the human brain, which is roughly 1,000 times larger than the mouse brain.

“It gives me real excitement to see things that were not visible before. I am also thrilled when our technology is used by so many labs,” said Zhuang, referring to Multiplexed Error-Robust Fluorescence in situ Hybridization (MERFISH), a genome-scale imaging technology developed in her lab.

Xiaowei Zhuang (center) in the lab with team members Aaron Halpern (from left), Won Jung, Meng Zhang, and Xingjie Pan.

Niles Singer/Harvard Staff Photographer

In collaboration with scientists at the Allen Institute for Brain Science, Zhuang and her team used MERFISH together with single-cell RNA sequencing data not only to identify each cell type but also to image it in situ. Their work provides new information about the molecular signatures of these cell types, as well as where they are located in the brain. The result is a stunningly detailed picture of the mouse brain’s full complement of cells, their gene-expression activity, and their spatial relationships.

In their Nature paper, the researchers used MERFISH to determine gene-expression profiles of approximately 10 million cells by imaging a panel of 1,100 genes, selected using single-cell RNA sequencing data provided by Allen Institute collaborators.
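
The team’s actual pipeline is far more elaborate, but the underlying data structure is easy to picture: a matrix of cells by genes. The Python sketch below is a minimal, hypothetical illustration (toy data and parameters, not the study’s code) of how cells measured this way can be grouped into transcriptionally distinct populations by clustering their expression profiles.

```python
# Minimal sketch (hypothetical, not the authors' pipeline): cluster a
# cells x genes expression matrix into transcriptionally distinct groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: 1,000 cells x 1,100 genes with three built-in populations.
n_cells, n_genes, n_types = 1_000, 1_100, 3
centers = rng.poisson(5, size=(n_types, n_genes)).astype(float)
labels_true = rng.integers(n_types, size=n_cells)
counts = rng.poisson(centers[labels_true])

# Standard preprocessing: normalize per-cell depth, then log-transform.
depth = counts.sum(axis=1, keepdims=True)
expr = np.log1p(counts / depth * 1e4)

# Each cluster is a candidate "cell population."
kmeans = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit(expr)
print("cluster sizes:", np.bincount(kmeans.labels_))
```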

Retina findings could boost glaucoma research

In a separate paper in the Nature series, Joshua Sanes, the Jeff C. Tarr Professor of Molecular and Cellular Biology, co-led a team that captured new insights into the evolutionary history of the vertebrate retina.

Joshua Sanes.

Photo by Rick Friedman

A part of the brain encased in the eye, the retina boasts complex neural circuits that receive visual information, which they then transmit to the rest of the brain for further processing. The retina is functionally very different from species to species — for example, human hunter-gatherers evolved sharp daytime vision, whereas mice possess better night vision than humans do; some animals see in color, while others see predominantly in black and white.

But at molecular levels, how different are retinas, really? Sanes, in collaboration with researchers at the University of California, Berkeley, and the Broad Institute, performed a new comparative analysis of retinal cell types across 17 species, including humans, fish, mice, and opossums. Using single-cell RNA sequencing, which allowed them to differentiate types of retinal cells by their genetic expression profiles, the researchers’ findings upended some long-held views about how certain species’ visual systems evolved.

One striking discovery involved so-called “midget retinal ganglion cells,” which, in humans, carry 90 percent of the information from the eye to the brain. These cells give humans their fine-detail vision, and changes to them are associated with eye diseases such as glaucoma. No related cells had ever been found in mice, so they had been assumed to be unique to primates.

In their analysis, Sanes and team identified for the first time clear relatives of midget retinal ganglion cells in many other species, including mice, albeit in much smaller proportions. Since mice are a common model animal for studying glaucoma, the ability to pinpoint these cells is a potentially crucial insight.

“I think we can make a very compelling case that if you want to study these important human retinal ganglion cells in a mouse, these are the cells you want to be studying,” Sanes said.

Other Harvard-affiliated researchers, at Harvard Medical School, Boston Children’s Hospital, and the Broad, also contributed findings to the NIH’s cell census network, including a molecular cytoarchitecture of the adult mouse brain, and a transcriptomic taxonomy of mouse brain-wide spinal projecting neurons.



Harvard researchers have realized a key milestone in the quest for stable, scalable quantum computing, an ultra-high-speed technology that will enable game-changing advances in a variety of fields, including medicine, science, and finance.

The team, led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative, has created the first programmable logical quantum processor, capable of encoding up to 48 logical qubits and executing hundreds of logical gate operations, a vast improvement over prior efforts.

Published in Nature, the work was performed in collaboration with Markus Greiner, the George Vasmer Leverett Professor of Physics; colleagues from MIT; and QuEra Computing, a Boston company founded on technology from Harvard labs.

The system is the first demonstration of large-scale algorithm execution on an error-corrected quantum computer, heralding the advent of early fault-tolerant, or reliably uninterrupted, quantum computation.

Lukin described the achievement as a possible inflection point akin to the early days in the field of artificial intelligence: the ideas of quantum error correction and fault tolerance, long theorized, are starting to bear fruit.

“I think this is one of the moments in which it is clear that something very special is coming,” Lukin said. “Although there are still challenges ahead, we expect that this new advance will greatly accelerate the progress toward large-scale, useful quantum computers.”

Denise Caldwell of the National Science Foundation agrees.

“This breakthrough is a tour de force of quantum engineering and design,” said Caldwell, acting assistant director of the Mathematical and Physical Sciences Directorate, which supported the research through NSF’s Physics Frontiers Centers and Quantum Leap Challenge Institutes programs. “The team has not only accelerated the development of quantum information processing by using neutral atoms, but opened a new door to explorations of large-scale logical qubit devices, which could enable transformative benefits for science and society as a whole.”

It’s been a long, complex path.

In quantum computing, a quantum bit or “qubit” is one unit of information, just like a binary bit in classical computing. For more than two decades, physicists and engineers have shown the world that quantum computing is, in principle, possible by manipulating quantum particles — be they atoms, ions, or photons — to create physical qubits.

But successfully exploiting the weirdness of quantum mechanics for computation is more complicated than simply amassing a large-enough number of qubits, which are inherently unstable and prone to collapse out of their quantum states.

The real coins of the realm are so-called logical qubits: bundles of redundant, error-corrected physical qubits, which can store information for use in a quantum algorithm. Creating logical qubits as controllable units — like classical bits — has been a fundamental obstacle for the field, and it’s generally accepted that until quantum computers can run reliably on logical qubits, the technology can’t really take off.
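
The quantum error-correcting codes involved are far subtler than anything classical, but the redundancy idea at their core can be illustrated with a classical toy: encode one logical bit in several noisy physical bits and decode by majority vote. The Python sketch below is only an analogy, not a model of the Harvard system.

```python
# Toy classical analogy for logical qubits: a 3-bit repetition code.
# (Real quantum codes are subtler, since qubits can't simply be
# copied and measured, but the redundancy payoff is similar.)
import random

def noisy_copy(bit, p_flip=0.05):
    # Flip the bit with probability p_flip.
    return bit ^ (random.random() < p_flip)

def logical_readout(bit, p_flip=0.05, n_physical=3):
    # Encode one logical bit in n physical bits and majority-vote.
    physical = [noisy_copy(bit, p_flip) for _ in range(n_physical)]
    return int(sum(physical) > n_physical // 2)

trials = 100_000
raw = sum(noisy_copy(0) for _ in range(trials)) / trials
logical = sum(logical_readout(0) for _ in range(trials)) / trials
print(f"physical error rate: {raw:.4f}")      # ~0.05
print(f"logical error rate:  {logical:.4f}")  # ~3p^2, roughly 0.007
```

In this toy, the failure rate falls from p to roughly 3p², which is the kind of payoff that makes bundling physical qubits worthwhile once individual operations are reliable enough.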

To date, the best computing systems have demonstrated one or two logical qubits, and one quantum gate operation — akin to just one unit of code — between them.

The Harvard team’s breakthrough builds on several years of work on a quantum computing architecture known as a neutral atom array, pioneered in Lukin’s lab. It is now being commercialized by QuEra, which recently entered into a licensing agreement with Harvard’s Office of Technology Development for a patent portfolio based on innovations developed by Lukin’s group.

The key component of the system is a block of ultra-cold, suspended rubidium atoms, in which the atoms — the system’s physical qubits — can move about and be connected into pairs — or “entangled” — mid-computation.

Entangled pairs of atoms form gates, which are units of computing power. Previously, the team had demonstrated low error rates in their entangling operations, proving the reliability of their neutral atom array system.

With their logical quantum processor, the researchers now demonstrate parallel, multiplexed control of an entire patch of logical qubits, using lasers. This approach is more efficient and scalable than controlling individual physical qubits.

“We are trying to mark a transition in the field, toward starting to test algorithms with error-corrected qubits instead of physical ones, and enabling a path toward larger devices,” said paper first author Dolev Bluvstein, a Harvard Griffin Graduate School of Arts and Sciences Ph.D. student in Lukin’s lab.

The team will continue to work toward demonstrating more types of operations on their 48 logical qubits and to configure their system to run continuously, as opposed to the manual cycling it requires now.

The work was supported by the Defense Advanced Research Projects Agency through the Optimization with Noisy Intermediate-Scale Quantum devices program; the Center for Ultracold Atoms, a National Science Foundation Physics Frontiers Center; the Army Research Office; and QuEra Computing.



Some dogs love to play fetch, while others watch the tennis ball roll by with little interest. Some run circles around their owners during walks, herding them, while others stop to sniff everything in their path.

It raises the question: Why do dogs behave so differently, even within their own breeds?

Erin Hecht, assistant professor of human evolutionary biology at Harvard, is seeking answers through The Canine Brains Project, part of the University’s Brain Science Initiative. She recently gave a talk on the emerging field of canine neuroscience, and what we know so far about our furry friends.

Dogs, according to Hecht, have the potential to teach us a lot about brain development, having been domesticated roughly 20,000 to 40,000 years ago — a blip on the evolutionary timeline. For context, modern humans emerged roughly 300,000 years ago. Because domestication was relatively recent, modern dog breeds live alongside ancient breeds, making comparison possible.

“Darwin saw dogs as a window on mechanisms of evolution,” Hecht said. “When we’re looking at dogs as a natural experiment in brain-behavior evolution, all we have to do is look at their brains and see what evolution did in order to satisfy those selection requirements.”

Hecht’s lab performs MRI scans on nearly 100 canine brains a year and conducts owner surveys on dogs’ working skills, like hunting, herding, and guarding, comparing them with skull shape, body size, and breed.

The lab looks at modern domesticated breeds like Great Danes, other hunting dogs, and designer dogs — products of selective breeding, a practice that really took hold during the Victorian era — as well as ancient breeds like huskies and “village dogs.”

“About 80 percent of the dogs living on the planet today are what’s known as village dogs. These are free-ranging animals that live as human commensals. So they’re living within human society, but they’re not pets,” Hecht said.

Erin Hecht with Australian shepherds Lefty and Izzy.

File photo by Jon Chase/Harvard Staff Photographer

Some initial findings from the lab include the discovery of neurological differences between dog breeds, including that premodern dogs on the whole have a larger amygdala — the part of the brain that controls emotional processing and memory. Such heightened environmental-monitoring skills would come in handy for dogs deciding which humans to steal scraps from and which to avoid.

Modern dogs have a bigger neocortex — the part of the brain that controls motor function, perception, and reasoning. It may play a part in modern dogs’ increased behavioral flexibility, or ability to adapt to new environments.

Hecht’s lab connects personality and skill differences in dogs to six different parts of the brain: the regions controlling drive and reward; olfaction and taste; spatial navigation; social communication and coordination; fight or flight; and olfaction and vision. While breeds we see in our homes today share similarities in these pathways, Hecht’s research suggests the traits can be attributed more to selective breeding than ancestral DNA.

“There has been very strong recent specific selection in individual breeds rather than founding effects in ancestral founding populations,” Hecht said. “So then we can look at behavior and ask whether the types of behaviors that different lineages have been selected for historically … [explain] each dog’s anatomy and these six brain networks. And it seems like there are some interesting relationships here.”

More than breed itself, these pathways are shaped by a dog’s head shape and size. For example, Hecht’s lab has found that bigger dogs have larger neocortices than their smaller counterparts, and therefore generally are more trainable and less anxious. Dogs bred for narrow skulls may see that skull shape reflected in their behavior.

“It stands to reason that if you’re manipulating the shape of a skull, you’re going to be manipulating the shape of the brain,” Hecht said. “But this confirms that dogs with these extreme skull morphotypes have impacts on their brain anatomy that likely affects behavior.”

In conjunction with the MRI scanning, Hecht’s lab measures dogs’ behavior with an assessment called C-BARQ, the Canine Behavioral Assessment and Research Questionnaire. The survey, which is filled out by the dog’s owner, assesses behaviors such as aggression, trainability, and rivalry, to name a few.

“There was one study that collected C-BARQ data on 32,000 dogs from 82 different breeds and then performed clustering on the survey responses. And the data clustered more on the body height of the dogs than on breed relatedness. So size was a better predictor than breed in predicting temperament scores on this C-BARQ assessment,” Hecht said.
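
As a rough illustration of that kind of analysis (with simulated data, not the study’s), the sketch below builds a toy temperament score that depends only on body height, then compares how much variance breed identity versus height explains:

```python
# Hypothetical illustration: does size or breed better predict a
# C-BARQ-style temperament score? (Simulated data, not the study's.)
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 5_000
breed = rng.integers(0, 82, size=n)        # 82 hypothetical breeds
height = rng.normal(45, 12, size=n)        # height in cm

# Assumption built into the toy: temperament tracks height, not breed.
score = -0.05 * height + rng.normal(0, 1, size=n)

X_breed = np.eye(82)[breed]                # one-hot breed identity
r2_breed = LinearRegression().fit(X_breed, score).score(X_breed, score)
X_height = height.reshape(-1, 1)
r2_height = LinearRegression().fit(X_height, score).score(X_height, score)
print(f"R^2 from breed:  {r2_breed:.3f}")  # near zero
print(f"R^2 from height: {r2_height:.3f}") # clearly larger
```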

She added that just because certain dogs have brain makeups that suggest a certain disposition, it doesn’t lock them into those behaviors. That goes especially for working skills.

“Training is almost always necessary. I have yet to hear of any particular breed of working dog, where it’s just born knowing how to do its job,” Hecht said.

But whether you have a pit bull that acts like a chihuahua or a Yorkie that likes to run with the big dogs, a look inside their brain might help explain why they are the way they are.

Find out more about the work happening in the Hecht lab or see if your dog is a study candidate.



Humans have been considered unique for their ability to form relationships for mutual benefit not just within immediate kin groups but across and between them, even with total strangers.

It turns out that one of our closest living relatives, bonobos, are also able to think outside the group.

These were the findings of a Harvard study that involved two years of data collection in the deep forests of the Democratic Republic of Congo, where up to 20,000 of the endangered apes make their only home. The ability to engage in such “out-group” cooperation is the foundation on which humans have created societies and cultures through trade and knowledge-sharing.

“Our work with bonobos is showing that cooperation beyond social borders, without immediate payoff between unrelated individuals, is not uniquely human,” said senior author Martin Surbeck, assistant professor in the Department of Human Evolutionary Biology, who has researched bonobos for 20 years. The study was published in Science with lead author Liran Samuni, a former Harvard research associate who works at the German Primate Center in Göttingen.

A previous study, based on the same bonobo communities, found that the primates maintained distinct, stable social borders, so-called “communities.” In their latest analysis, Samuni and Surbeck found evidence of cooperation between members of different communities, facilitated by an assortment of key individuals. These few consistently engaged in behaviors such as grooming and food-sharing and acted as links between groups — think ape ambassadors. Within each behavior, individuals cooperated with specific counterparts who were also good cooperators in that domain.

Members of the local community and bonobo data collection team at the Kokolopori Bonobo Reserve: Medard Bangela (from left), Vincent Moscova, and Leonard Lolima Nkanga.

The conclusions arose from daily observations of within- and between-group interactions of 31 adult bonobos, living in two communities called Ekalakala and Kokoalongo, all within the Kokolopori Bonobo Reserve in the Congo. Surbeck initiated the project in 2016 and, together with Samuni, has been working with Congolese partners and local villagers to collect data.

For the study, Surbeck, Samuni, and a team of local trackers focused on three cooperative behaviors: food-sharing; grooming; and forming alliances, which consists of joint action against a common opponent.

They found that certain individuals exhibited these acts outside their social bounds with others who would be likely to return the favors. Although bonobos are known for being a peaceful species, compared with their more warring chimpanzee cousins, the researchers found that the bonobos were not random in their benevolence.

“They’re not similarly nice to everybody,” Samuni said, noting they formed preferences for some and not others.

“The extreme tolerance we observed between members of different bonobo groups paves the way for pro-social cooperative behaviors between them, a stark contrast to their sister species, the chimpanzees,” she added.

Humans and bonobos share 99 percent of their DNA. Observing the animals in their natural environment can offer a window into our evolutionary past, the researchers say. Work at the preserve takes years of remote coexistence with the bonobos; habituation of the animals to a human presence has been key to making the studies successful, Surbeck said.

Surbeck’s team works closely with local partners, including the conservation organization Vie Sauvage, the Congolese Conservation authorities (ICCN), and the Bonobo Conservation Initiative, to gain support and permission to work in the country and on the community reserve.

“Long-term research sites don’t always quite get the recognition they deserve, in terms of what they contribute to both basic data collection and serving as a platform for other scientists, as well as for the conservation of the species,” Surbeck said.

Bonobos are an endangered species, with no more than about 20,000 left in the wild, yet they represent a key comparative model to human social systems, “a rare opportunity to reconstruct the ancestral conditions of human large-scale cooperation,” according to the paper.



When animals of two different species mate, their hybrid offspring can be unhealthy or sterile. Often, only one sex is affected.

Sexual differences in fertility follow a pattern known as Haldane’s Rule, which states that hybrids are afflicted more when they inherit two different sex chromosomes. In mammals, males have XY sex chromosomes, so male “ligers” and “tigons” (offspring between tigers and lions) are sterile, while females, which have two X chromosomes, tend to be more fertile. But in butterflies as well as birds, females have ZW sex chromosomes while males have ZZ, so according to Haldane’s Rule, it is females that are sterile.

What insights could this natural phenomenon hold for speciation, the process in which different biological lineages split? James Mallet, professor of organismic and evolutionary biology in residence and associate of population genetics in the Museum of Comparative Zoology, and senior author of a new study in Proceedings of the National Academy of Sciences, takes a stab at the “why” behind Haldane’s Rule, using butterfly genetics as a guide.

Designed and led by former graduate student Tianzhu Xiong, the study investigated hybrid sterility in butterflies, creating hybrid crosses of different species to determine which particular genes were responsible for the phenotype.

According to analysis by Xiong, now a postdoctoral researcher at Cornell, the sterility trait in hybrid butterflies is probably tied to many genes scattered across the Z chromosome. Thus understanding all the genetic mechanisms behind it will require further study.

For the research, Xiong and colleagues created hybrids of Papilio swallowtail butterflies. They found that problems associated with hybrids, such as low pupal weight and ovary malformation in the females, happened because of uneven mixing, or “introgression,” between the Z sex chromosome and all the other chromosomes. This observation points to many genes working together to produce a balance within each species.

The balance between the Z chromosome and the rest of the chromosomes is disturbed when the former is inherited from only one species, as in female ZW hybrids. The W chromosome characteristic of female butterflies carries very few genes and is not involved. Furthermore, Xiong showed that female hybrids of another butterfly, Heliconius, studied by Neil Rosser and colleagues, also followed the same multigene pattern on the Z chromosome.

“Initially, I was in a mindset of hoping to find a major gene that causes the phenotype,” Xiong said. “But it turns out that the answer is more mathematical than expected, and that a very large number of genes actually explain the pattern better.”

The analysis shows that hybrid sterility may be like height in humans — polygenic, or involving multiple genes. “Tianzhu has shown it is the fraction of the Z chromosome that matters, not whether you’ve got a particular problem on one region of the chromosome,” Mallet said.
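
That intuition can be captured in a toy simulation. In the hypothetical model below (invented numbers, not the paper’s analysis), hybrid dysfunction is the summed effect of many small Z-linked mismatches, so expected severity grows smoothly with the fraction of the Z chromosome inherited from the other species rather than hinging on any single locus:

```python
# Toy polygenic model (hypothetical, not the paper's analysis): severity
# tracks the fraction of the Z chromosome inherited from the other species.
import numpy as np

rng = np.random.default_rng(1)
n_loci = 500                                          # many Z-linked loci
effects = rng.exponential(1.0 / n_loci, size=n_loci)  # tiny effect each

def incompatibility(frac_foreign_z):
    # Each locus mismatches the rest of the genome with probability
    # equal to the foreign fraction of the Z.
    mismatched = rng.random(n_loci) < frac_foreign_z
    return float(effects[mismatched].sum())

for frac in [0.0, 0.25, 0.5, 1.0]:
    mean = np.mean([incompatibility(frac) for _ in range(200)])
    print(f"foreign Z fraction {frac:.2f}: mean incompatibility {mean:.3f}")
```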

The work was supported in part by the National Science Foundation.



The warming climate is having ripple effects across ecosystems, including on plants, which have evolved clever mechanisms to conserve water when stressed by drought.

But are plants likelier to defend themselves against dry air or dry soil? This question is hotly debated among climate scientists, and the distinction matters: While there’s consensus on the trajectory of temperature rise over coming decades, less is known about how global warming will affect soil moisture. Understanding this dynamic may help decide the most effective ways to ensure the survival of robust plant life.

A team led by Kaighin McColl, assistant professor in the Department of Earth and Planetary Sciences and the John A. Paulson School of Engineering and Applied Sciences, has published new research in Nature Water indicating that plant drought-defense mechanisms, which involve closing tiny pores on leaves called stomata to limit photosynthesis and conserve water, are more likely triggered by dry soil than by dry air.

Their results challenge recently held views and were derived from a place with no plants at all — the barren salt flats of Utah and Nevada.

Previous research had found that plants are likelier to have closed stomata in the presence of dry air, rather than dry soils, so it was assumed that aridity triggered the drought response. But McColl and colleagues suspected these results did not tell the whole story about plant vulnerability to drier environments.

Kaighin McColl went to the salt flats in the Western U.S. desert to conduct his research.

Alex Griswold/Harvard University Center for the Environment

“The problem with this argument is that correlation does not imply causation; when plants close their stomata, that could actually be causing the air to get drier, rather than the other way around,” McColl said.

To investigate their opposing hypothesis, McColl and lead author Lucas Vargas Zeppetello, a Harvard postdoctoral researcher who starts at the University of California, Berkeley, in January, used as their natural laboratory one of the only places on Earth that has a vigorous water cycle but doesn’t grow any plants — salt flats in the Western U.S. desert.

Using salt flats data provided by collaborators in Nevada and Utah, the researchers reproduced earlier studies that had calculated the relationship between air dryness and moisture flux, or movement (in this case through evaporation), from the land surface, and had attributed those values to plants closing their stomata to conserve water. The Harvard team found that their calculations lined up almost perfectly with those previous studies. But with no plants in the salt flats, they knew there had to be another explanation.

In that plant-free environment, evaporation responds only to soil dryness. McColl and Vargas Zeppetello concluded that plant responses to lack of humidity may have been exaggerated in previous studies. They think instead that plants respond most acutely to dry soil, an environmental stressor that is known to reduce transpiration and photosynthesis.
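
The correlation-versus-causation point is easy to demonstrate with synthetic numbers. In the toy sketch below (invented values, not the team’s data), evaporation is driven entirely by soil moisture, yet it still correlates tightly with air dryness, the same statistical signature that earlier studies read as stomata responding to dry air:

```python
# Toy demonstration (invented values, not the study's data): with no
# plants at all, evaporative flux still correlates with air dryness
# (vapor pressure deficit, VPD) because both track soil moisture.
import numpy as np

rng = np.random.default_rng(2)
days = 365
soil = np.clip(rng.normal(0.3, 0.1, days), 0.05, 0.6)  # soil moisture

evap = 4.0 * soil + rng.normal(0, 0.1, days)       # driven by soil only
vpd = 3.0 - 4.0 * soil + rng.normal(0, 0.2, days)  # dry soil -> dry air

r = np.corrcoef(vpd, evap)[0, 1]
print(f"corr(VPD, evaporation) = {r:.2f}")  # strongly negative
```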

What does this mean? Soil dryness matters more than air dryness when it comes to global plant ecosystems.

“Our findings put emphasis on projections for water in the future,” Vargas Zeppetello said. “People talk about consensus on climate change, but that really has to do with global temperatures. There’s much less of a consensus on what regional changes to the water cycle are going to look like.”

The research was supported in part by the National Science Foundation.



Nobel Prize-winning astrophysicist Kip Thorne has spent his career describing, through mathematics, some of the deepest mysteries of the universe. His latest project takes on similar material, but through poetry and paintings.

A nearly two-decade collaboration with artist Lia Halloran has extracted from Thorne’s brain the pictures that accompany those highly technical descriptions and brought them to life in their new book, “The Warped Side of the Universe: An Odyssey Through Black Holes, Wormholes, Time Travel, and Gravitational Waves.” The weighty tome blends complex science with whimsical art and features more than 300 ink-on-film paintings by Halloran alongside poetry by Thorne.

Thorne, the 2017 Nobel laureate in physics and emeritus theoretical physics professor at the California Institute of Technology, and Halloran, associate professor at Chapman University, shared the story behind their partnership at a Harvard Science Book Talk last week, moderated by MIT physicist and humanities professor Alan Lightman (one of Thorne’s former students at Caltech).

The book, said Thorne, is not meant to teach the particulars of astrophysical concepts, but rather to “convey the essence of the ideas, the feeling, the experience of the ideas, without going into the technical details — like I do in my other life.”


Paintings that accompany the poems "A Black Hole is Made from Tendices/of Stretching and Squeezing Space and/a Chaotic Singularity" and "When Orbiting Black Holes Collide:/Spiraling Vortices/Morph into Gravity Waves."

© 2023 by Lia Halloran

Halloran was introduced to Thorne seven years before they actually met — through his 1994 book “Black Holes and Time Warps: Einstein’s Outrageous Legacy,” a gift from her mother. Taking elective astronomy courses while an undergraduate art major at UCLA had stirred Halloran’s interest in astrophysics.

“I just loved the way these big ideas made me excited about understanding the natural world,” she said. Later, as an M.F.A. student at Yale, she based a printmaking project on the ideas from Thorne’s book.

Years later, through mutual acquaintances, Halloran was able to pitch Thorne on giving visual expression to his science. At one of their first meetings, Thorne asked her to make a sketch depicting black holes and wormholes “for a young filmmaker.” That filmmaker was Steven Spielberg, who showed the drawings to Christopher Nolan, the eventual director of the 2014 epic space drama “Interstellar.” Thorne would serve as scientific consultant and executive producer of the movie, and he wrote a book about that process: “The Science of Interstellar.”

“Warped Side” is an outward expression of conversations between Thorne and Halloran over the last 13 years. Through poetry, Thorne distills scientific concepts like Einstein’s law of time warps, as well as innovations in astrophysics. Featured prominently is the Laser Interferometer Gravitational-Wave Observatory, which Thorne co-founded 40 years ago and whose observation of gravitational waves earned him a share of the Nobel Prize.

Halloran’s wife, Felicia, is a frequent character throughout the book. In one piece, her ghostly figure is stretched and squeezed upward through a spinning black hole. “The wildly switching tendices/tear frantic Felicia apart/then rip up the atoms/from which she was made/–if the black hole is young,” reads Thorne’s accompanying verse.

The talk was sponsored by Harvard Book Store, the Division of Science, Harvard Library, and the Center for Astrophysics.

An exhibition of the artwork in “Warped Side” is on view at the gallery Luis De Jesus Los Angeles through Dec. 22.




It took an ailing screech owl to teach a scientist the value of up-close-and-personal study.

In a talk Monday at the Science Center, Carl Safina, an ecologist at Stony Brook University and author of several books about humanity’s relationship with nature, recalled that the chick was found on a friend’s lawn as the pandemic was tightening its grip on the world. In the picture Safina received, the bird looked beyond saving.

“How did it die?” he asked.

“It was just a downy little, dying thing,” Safina, whose most recent book is “Alfie and Me: What Owls Know, What Humans Believe,” said in his Harvard talk, which was sponsored by the FAS Division of Science, Harvard Library, and the Harvard Book Store and included questions from Clemson University ecologist Joseph Drew Lanham.

Safina and his wife, Patricia, took in the little bird of prey. They planned to nurse it back to health and then perform a “soft release,” in which the animal is set loose but stays nearby, supported with food while it learns the ropes of wild bird life.

But the owlet’s flight feathers didn’t grow in properly, leaving it grounded for months after it should have fledged. Safina delayed the release further to ensure the bird would properly molt — critical to renew feathers that keep birds warm and enable flight. Over those extended months, Safina got to know Alfie in ways that moved and changed him and his wife.

“An owl found me and then I was watching ‘an’ owl,” he said. “It was no longer an owl after a while, it was ‘she,’ because she had a history with us. … This little owl, who was with us much longer than we thought she would be, became an individual to us by that history and all those interactions.”

The bond with Alfie strengthened to the point that, when she was finally released, she created a territory with Safina’s Long Island home at its center. Safina was able to spend hours each day observing her in the woods as she learned to take care of herself in the wild, meet two mates, and raise chicks of her own.

When he heard Alfie calling, Safina said, he’d call back and she’d land nearby. Their closeness allowed him to learn things about screech owls that are not generally known. Field guides, for example, describe two known calls, but he identified six, some of which you have to be quite close to hear. The relationship also opened a window for Safina onto personality differences between Alfie and her mates.

Lanham pointed out that Safina’s approach to Alfie — including the act of naming her — ran counter to widespread scientific practice. Safina said he wasn’t concerned about violating convention, particularly if something interesting like individual personality differences among owls could be learned.

“I’m interested in knowing what really exists, which is the basic purpose of science,” Safina said, adding that field research has documented personality differences among individuals of species from chimpanzees to elephants to wolves. “Every time they look for it, they find it.”

In the end, the experience caused Safina to ponder more deeply the differences between humankind’s relationship with nature writ large versus the kind of personal connection he was able to feel with a wild individual.

“What I learned from Alfie is that all sentient beings seek a feeling of well-being and freedom of movement,” Safina said. “That’s a guide to what’s right and what’s wrong to me.”



Wondering is a series of random questions answered by experts. The Medical School’s Aleksandra Stankovic is an aerospace psychologist and spaceflight biomedical researcher who studies how to optimize human performance and behavioral health in extreme operational environments. We asked her how a person gets ready to travel to space.

The spaceflight environment presents many challenges — technical, physical, and psychological. With more people having access to space travel today than ever before, successful and safe spaceflights require varying levels of preparation before launch day.

Government astronaut candidates undergo a rigorous two-year initial training period before qualifying for flight assignment. This training includes learning about International Space Station and flight vehicle systems, studying orbital mechanics, becoming proficient in emergency procedures (like how to handle scenarios such as fire, cabin depressurization, or medical issues), conducting flight training in T-38 jets (to build quick decision-making skills in high-performance aircraft), and developing Russian language skills (since international space missions involve collaboration among astronauts from various countries).

To prepare for the microgravity environment of space, astronauts also participate in simulations of weightlessness, including parabolic flights and training in the Neutral Buoyancy Lab, a large swimming pool where astronauts practice conducting spacewalks and learn to perform tasks in their pressurized spacesuits. Astronauts complete survival training and learn to cope with extreme conditions — a crucial skill in case of an emergency landing back on Earth in the water or in very cold locations like Siberia. They are trained to operate the robotic arm that is used for tasks such as capturing cargo spacecraft.

Once they receive a flight assignment, astronauts complete an additional 18 months of mission-specific training. They simulate various mission scenarios — including launch, rendezvous, and docking — and emergency procedures. Additionally, they undergo extensive training on the scientific experiments they’ll be conducting, like how to work with equipment, collect samples, and handle data.

Since maintaining physical fitness is vital for astronauts to counteract the muscle and bone loss experienced in microgravity, they spend a lot of time preflight working out. At the same time, long-duration space missions can be mentally challenging, given the prolonged isolation, confinement, and separation from family and friends. Astronauts learn strategies to manage stress, maintain psychological well-being, and work effectively in close environments with their fellow crewmembers.

Commercial astronaut training is significantly less intensive than the training government-sponsored astronauts receive, since their missions are often of shorter duration and focus more on providing safe and enjoyable flying experiences. While commercial crews may stay in space for shorter intervals ranging from a few minutes for suborbital flight to several days or even weeks on the Space Station, government astronauts typically spend six months or more on the station. (Astronaut Frank Rubio recently set the record for longest American space mission with 371 consecutive days in space; cosmonaut Valeri Polyakov, who logged 437 continuous days in orbit on Russia’s Mir space station between 1994 and 1995, still holds the world record.)

Commercial astronauts often receive more generalized training that covers the basics of space travel and safety/emergency procedures. Anyone who spends prolonged periods in space will need to spend a lot of their day working out to keep their bodies in strong shape to be healthy when they return home. Everyday activities can be challenging without gravity, and sleeping can be difficult without the normal light cues from the sun that our bodies rely upon on Earth to regulate our circadian rhythms. A combination of technology and training helps space travelers adapt.

As more people travel to space, on an expanding range of flight vehicles and for varying types of missions, spaceflight preparation too will undoubtedly continue to evolve. It’s an exciting time to be studying how to keep humans safe and healthy in space, and researchers like me are thrilled to be a part of enabling this next great wave of human space exploration!

— As told to Anna Lamb/Harvard Staff Writer



By combining noninvasive imaging techniques, investigators have created a comprehensive cellular atlas of a region of the human brain known as Broca’s area — an area critical for producing language.

The new technology will provide insights into the presence and spread of pathologic changes that occur in neurological illnesses — such as epilepsy, autism, and Alzheimer’s disease — as well as psychiatric illnesses.

Until now, scientific advances have not produced the undistorted 3D images of cellular architecture that are needed to build accurate and detailed models. In new research published in Science Advances, a team led by investigators at Harvard-affiliated Massachusetts General Hospital has overcome this challenge, achieving the detailed resolution needed to study brain function and health.

Using sophisticated imaging techniques — including magnetic resonance imaging, optical coherence tomography, and light-sheet fluorescence microscopy — the researchers were able to prevail over the limitations associated with any single method to create a high-resolution cell census atlas of a specific region of the human cerebral cortex, or the outer layer of the brain’s surface. The team created such an atlas for a human postmortem specimen and integrated it within a whole-brain reference atlas.

Overview of the new pipeline. Human brain samples are imaged at multiple scales with multiple modalities (MRI, OCT, and light-sheet fluorescent microscopy or LSFM).

MGH

“We built the technology needed to integrate information across many orders of magnitude in spatial scale from images in which pixels are a few microns to those that image the entire brain,” says co–senior author Bruce Fischl, director of the Computational Core at the Athinoula A. Martinos Center for Biomedical Imaging at MGH and a professor of radiology at Harvard Medical School.

Ultimately, the methods in this study could be used to reconstruct undistorted 3D cellular models of particular brain areas as well as of the whole human brain, enabling investigators to assess variability between individuals and within a single individual over time.

“These advances will help us understand the mesoscopic structure of the human brain, which we know little about: structures that are too large and geometrically complicated to be analyzed by looking at 2D slices on the stage of a standard microscope, but too small to see routinely in living human brains,” says Fischl.

“Currently we don’t have rigorous normative standards for brain structure at this spatial scale, making it difficult to quantify the effects of disorders that may impact it such as epilepsy, autism, and Alzheimer’s disease.”

Additional co-authors include Irene Costantini, Leah Morgan, Jiarui Yang, Yael Balbastre, Divya Varadarajan, Luca Pesce, Marina Scardigli, Giacomo Mazzamuto, Vladislav Gavryusev, Filippo Maria Castelli, Matteo Roffilli, Ludovico Silvestri, Jessie Laffey, Sophia Raia, Merina Varghese, Bridget Wicinski, Shuaibin Chang, Anderson Chen I-Chun, Hui Wang, Devani Cordero, Matthew Vera, Jackson Nolan, Kimberly Nestor, Jocelyn Mora, Juan Eugenio Iglesias, Erendira Garcia Pallares, Kathryn Evancic, Jean Augustinack, Morgan Fogarty, Adrian V. Dalca, Matthew Frosch, Caroline Magnain, Robert Frost, Andre van der Kouwe, Shih-Chi Chen, David A. Boas, Francesco Saverio Pavone, and Patrick R. Hof.

Support for this research was provided in part by the BRAIN Initiative Cell Census Network; the National Institute for Biomedical Imaging and Bioengineering; the National Institute on Aging; the National Institute of Mental Health; the National Institute for Neurological Disorders and Stroke; the Eunice Kennedy Shriver National Institute of Child Health and Human Development; the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation; Shared Instrumentation; the NIH Blueprint for Neuroscience Research; the European Union’s Horizon 2020 Framework Programme for Research and Innovation; the Marie Skłodowska-Curie Actions; the Italian Ministry for Education in the framework of the Euro-Bioimaging Italian Node; the European Research Council; Alzheimer’s Research UK; the National Institutes of Health; and “Fondazione CR Firenze” (private foundation) Human Brain Optical Mapping.



Having pulled themselves from the water 360 million years ago, amphibians are our ancient forebears, the first vertebrates to inhabit land. 

Now, this diverse group of animals faces existential threats from climate change, habitat destruction, and disease. Two Harvard-affiliated scientists from India are drawing on decades of study — and an enduring love for the natural world — to sound a call to action to protect amphibians, and in particular, frogs.

Sathyabhama Das Biju, a Harvard Radcliffe Institute fellow and a professor at the University of Delhi, and his former student Sonali Garg, now a biodiversity postdoctoral fellow at Harvard’s Museum of Comparative Zoology, are co-authors of a sobering new study in Nature, featured on the journal’s print cover, that assesses the global status of amphibians. It is a follow-up to a 2004 study about amphibian declines.

Biju and Garg are experts in frog biology who specialize in the discovery and description of new species. Through laborious fieldwork, they have documented more than 100 new frog species across India, Sri Lanka, and other parts of the subcontinent.

According to the Nature study, which evaluated more than 8,000 amphibian species worldwide, two out of every five amphibians are now threatened with extinction. Climate change is one of the main drivers. Habitat destruction and degradation from agriculture, infrastructure, and other industries are the most common threats to these animals.

Biju and Garg are among more than 100 scientists who contributed their data and expertise to the report, which shows that nearly 41 percent of amphibian species are threatened with extinction, compared with 26.5 percent of mammals, 21.4 percent of reptiles, and 12.9 percent of birds.

Frogs, says Biju, are excellent model organisms to study evolution and biogeography because of the extreme diversity of traits they acquired over millennia. They are also very sensitive to abrupt changes in their environment, including droughts, floods, and storms, which makes them a barometer for assessing the health of an ecosystem.

The Indian Purple Frog, first described by Sathyabhama Das Biju in 2003.

“But very frankly, what drives me the most is their beauty and diversity in shapes, form, colors, as well as behaviors,” said Biju, who has dedicated 30 years to frog taxonomy across biodiversity hotspots in or near India, rising to fame through his formal description in 2003 of the Indian Purple Frog. He is known as the Frogman of India.

India is home to one of the most diverse frog populations in the world, with more than 460 documented species. Of those, about 41 percent are considered threatened, according to Biju. Habitat destruction and degradation from cultivation of tea, coffee, spices, and other products pose the most danger to the animals.

As a Radcliffe Fellow, Biju is focused on “outpacing nameless extinctions” — saving frogs before they go extinct without being classified or even recognized. He is looking to understand key areas within biodiversity hotspots for effective conservation planning. He is also writing a book — filled with fieldwork photos — on amphibians of India.

“Without understanding the species themselves, and properly identifying them and their geographic distributions, no meaningful conservation planning can be undertaken,” Biju said. “Unless we know what we have, we cannot know what we need to conserve, and where we need to conserve.”

Garg remembers a time, during fieldwork in the Western Ghats mountain range, when she held a frog so small it could sit on the tip of her finger. It was a moment of striking contrast with the everyday puddle-hoppers that surrounded her in the small Indian village where she grew up. “I never thought they could be so beautiful,” she said. “There was so much to discover, and they just became a calling.”

She joined Biju’s lab at the University of Delhi as a graduate student to find, identify, name, and better understand these species. She has done extensive fieldwork in India, Sri Lanka, the Western Ghats, the Himalayas, and Indo-Burma. Her research focuses on capturing the diversity of frogs in India using integrative taxonomy, which combines multiple lines of evidence to classify organisms, as well as elucidating their evolutionary histories. She has worked to deepen her quest by incorporating DNA sequencing and CT scanning.


Günther’s shrub frog was recently rediscovered in the wild, 136 years after its original description. Franky’s narrow-mouthed frog is among the threatened species.

Credit: S.D. Biju and Sonali Garg

Both she and Biju are using the vast herpetological collection of Harvard’s Museum of Comparative Zoology to inform their studies and provide benchmarking against potentially new species they uncover. According to MCZbase, the museum’s online specimen database, the Herpetology Department’s permanent research collection has 117,165 frog specimens, with 223 from India. The Vertebrate Paleontology and Special Collections departments hold additional specimens.

The researchers have begun a fruitful collaboration with James Hanken, Harvard’s Alexander Agassiz Professor of Zoology and curator of herpetology at the Museum of Comparative Zoology. Hanken is an expert on amphibian morphology, with a special emphasis on salamanders. He hosts Garg as a postdoctoral fellow in his lab, and recently joined the frog specialists on three field expeditions to India — two to the Himalayas, on the border with Tibet and Nepal, and another to the Western Ghats.

“In terms of amphibians that I saw, it was like going to the moon,” Hanken said. “It’s very exciting as a biologist to be immersed in a group of organisms that are completely new to you.”

Hanken and the Indian scientists plan to publish research describing frogs found in India, including their historical migration patterns, reproductive behavior, and genetic variation.

As for conserving endangered species, and the bleak picture the Nature study depicts, knowledge must lead to action, Biju said.

“Governments, individuals, and organizations need to join efforts to scale up global conservation action for amphibians to make sure they are thriving in nature,” he said. “Otherwise the ongoing amphibian crisis will have devastating effects for ecosystems and the planet.”

In some instances, conservation strategies have worked, added Garg. According to the Nature study, 63 species previously considered endangered have improved their status since 2004 due to concerted conservation efforts.

“There is hope,” Garg said. “Scaled up research and conservation efforts can play an important role in making sure amphibians are not just surviving, but also thriving in nature.”



Climate change is raising sea levels, creating stronger and wetter storms, melting ice sheets, and fostering conditions for more and worse wildfires. But as cities around the world warm, climate change’s complex global picture often comes down to this: Residents say they are just too hot.

Jane Gilbert, one of the nation’s first official “heat officers,” works in Miami-Dade County. She said South Florida may be suffering the effects of sea level rise and may be in the crosshairs of stronger and more frequent hurricanes, but residents testifying at 2020 hearings on climate-change impacts on low-income neighborhoods repeatedly said the biggest impact was the heat.

Panelists gathered at the Harvard Graduate School of Education’s Longfellow Hall last Friday for an event on the “Future of Cities” in a warming world said the topic is particularly relevant this year, when global temperatures soared to new records. As Gilbert spoke on the Cambridge campus on a cool fall afternoon, the heat index in Miami was 109 degrees, just the latest of more than 60 days this year that have seen heat indices higher than 105 degrees.

The summer of 2023 was Earth’s hottest since global records began in 1880, according to NASA scientists. This animated map shows monthly temperature changes from summer 1880 to summer 2023.

Credit: NASA

Satchit Balsari, who conducts research among members of India’s largest labor union for women in the nation’s informal economy, did fieldwork in Gujarat among the millions of people who are already living with a global climate that has warmed 1 degree Celsius. While that rise may seem a small change, the global average is experienced through much wider daily swings in some areas, in the form of longer and hotter heat waves, warmer winters, higher nighttime temperatures, and more extreme weather events, such as stronger storms or wildfires.

One thing that has become apparent, said Balsari, an assistant professor of global health and population at the Harvard T.H. Chan School of Public Health, is that when talking about individuals, microenvironments matter much more than global averages, because those environments are what affect people as they live and work.

Balsari shared stories such as that of a street vendor who put up awnings to create shade from the sun, only to have them taken down because they blocked security cameras, and a weaver who works in a building whose rooftop temperature was 10 to 15 degrees above that of the surrounding area.

“It’s very hot, and it cools down a little bit at night, but in their work environment, in the lived experience in their homes, there’s this constant experience of ‘It’s too hot,’” said Balsari, who is also an assistant professor of emergency medicine at Harvard Medical School.

As hot as this year has been globally, experts who gathered for the event only expect it to get hotter in the decades to come.

“This is an issue for the long run. Yes, things are bad now. We’re at 1.3, 1.2 (degrees Celsius above preindustrial temperatures) now; we’re going to blow through 1.5. We’re going to probably blow through 2,” said James Stock, vice provost for climate and sustainability and director of the Harvard Salata Institute for Climate and Sustainability. “It gets worse nonlinearly really quickly.”

Stock offered closing remarks at the event, which wrapped up Worldwide Week at Harvard and included lectures, performances, exhibitions, and other events across campus to highlight the ways in which the University interacts and intersects with the world around it through the sciences, arts, culture, politics, and other disciplines.

Joining Stock, Balsari, and Gilbert were Spencer Glendon, founder of the nonprofit Probable Futures; Francesca Dominici, co-director of the Harvard Data Science Initiative; Zoe Davis, climate resilience project manager for the city of Boston; and moderator John Macomber, senior lecturer at Harvard Business School. Harvard Provost Alan Garber and Mark Elliott, vice provost for international affairs, offered opening remarks.

Panelists agreed that better data collection is key to adapting solutions to circumstances that vary widely even across small geographic areas. Interventions such as providing vulnerable populations with air conditioners, for example, may be valuable in low-income communities, but less so in nearby communities with wealthier residents.

In Miami-Dade County, Gilbert said, air conditioners are considered life-saving equipment to the extent that, after Hurricane Irma, the state required nursing homes to have back-up power supplies so that residents could be cooled even in a power outage. ZIP codes with the highest land temperatures — which also tend to be low-income neighborhoods — have four times the rate of hospital admissions during heat waves as other parts of the region.

Gilbert echoed other panelists in calling for better, more granular data through more widespread use of sensors, including wearable sensors that can record heat impact on individuals. With different microclimates affecting different people, different jobs — whether someone is in an office or working at a construction site — also matter, both to public health officials and business leaders. Estimates of the potential economic impact of extreme heat in the Miami metro area are around $10 billion per year in lost productivity.

Nonprofit leader Glendon said we’re entering an unprecedented climate era. Humans were nomadic, regularly moving to where conditions were best, until about 10,000 years ago, when the temperature stabilized to the narrow range that we now consider normal. Centered in the range that humans prefer, climate stability helped foster human settlement and the rise of civilizations.

In the 10,000 years since, Glendon said, everything we’ve created, from building designs to cultural practices, has been made with the unstated assumption that this stable temperature regime — averaging roughly 60 degrees Fahrenheit — will continue. Recent decades’ warming and the projected warming in the decades to come will push heat and humidity in some places beyond the range that the human body can cool itself, with unknown consequences for societies.

“Everything is built on that stability, on the assumption that those ranges are fixed,” Glendon said. “It’s in building codes, grades of asphalt, architecture. … Those ranges are embodied so they became unconscious, but we need to make them conscious, and ideally they motivate us to avoid 2, 2½, or 3 degrees.”



When an algorithm-driven microscopy technique developed in 2021 (and able to run on a fraction of the images earlier techniques required) isn’t fast enough, what do you do?

Dive DEEPer, and square it. At least, that was the solution used by Dushan Wadduwage, John Harvard Distinguished Science Fellow at the FAS Center for Advanced Imaging.

Scientists have worked for decades to image the depths of a living brain. They first tried fluorescence microscopy, a century-old technique that relies on fluorescent molecules and light. However, the wavelengths weren’t long enough, and the light scattered before it reached an appreciable depth.

The invention of two-photon microscopy in 1990 allowed longer wavelengths of light to shine onto the tissue, causing fluorescent molecules to absorb not one but two photons. The longer wavelengths used to excite the molecules scattered less and could penetrate farther.

But two-photon microscopy can typically excite only one point on the tissue at a time, which makes for a long process requiring many measurements. A faster way to image would be to illuminate multiple points at once using a wider field of view, but this, too, had its drawbacks.

“If you excite multiple points at the same time, then you can’t resolve them,” Wadduwage said. “When it comes out, all the light is scattered, and you don’t know where it comes from.”

To overcome this difficulty, Wadduwage’s group began using a special type of microscopy, described in Science Advances in 2021. The team excited multiple points on the tissue in a wide-field mode, using different pre-encoded excitation patterns. This technique — called De-scattering with Excitation Patterning, or DEEP — works with the help of a computational algorithm.

“The idea is that we use multiple excitation codes, or multiple patterns to excite, and we detect multiple images,” Wadduwage said. “We can then use the information about the excitation patterns and the detected images and computationally reconstruct a clean image.”
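For readers who want a concrete feel for that reconstruction step, here is a minimal, hypothetical Python sketch of the principle — known random excitation codes, a toy Gaussian blur standing in for scattering, and a simple demodulation. It illustrates the idea only; it is not the authors’ published code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy, hypothetical demonstration of the DEEP idea (not the authors' code):
# excite with known random patterns, detect blurred ("scattered") images,
# then demodulate with the known codes to recover a clean image.
rng = np.random.default_rng(0)
n = 64                                   # image is n x n pixels
x_true = rng.random((n, n))              # stand-in for fluorophore density
K = 256                                  # number of coded excitation patterns

H = rng.integers(0, 2, size=(K, n, n)).astype(float)      # excitation codes
scatter = lambda img: gaussian_filter(img, sigma=3.0)     # crude scattering stand-in
y = np.stack([scatter(H[k] * x_true) for k in range(K)])  # detected images

# Demodulation: correlate each pixel's detected intensities with its
# (mean-subtracted) code sequence; cross-talk between pixels averages out
# because the random codes are nearly uncorrelated.
Hc = H - 0.5
x_hat = (Hc * y).sum(axis=0) / (Hc * H).sum(axis=0)
print("correlation with ground truth:",
      np.corrcoef(x_hat.ravel(), x_true.ravel())[0, 1])
```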

The results are comparable in quality to images produced by point-scanning two-photon microscopy. Yet they can be produced with just hundreds of images, rather than the hundreds of thousands typically needed for point-scanning. With the new technique, Wadduwage’s group was able to look as far as 300 microns deep into live mouse brains.

Still not good enough. Wadduwage wondered: Could DEEP produce a clear image with only tens of images?

In a recent paper published in Light: Science and Applications, he turned to machine learning to make the imaging technique even faster. He and his co-authors trained a neural network on multiple sets of images, eventually teaching it to reconstruct a fully resolved image from only 32 scattered images (rather than the 256 reported in their first paper). They named the new method DEEP-squared: deep learning-powered de-scattering with excitation patterning.

The team took images produced by typical two-photon point-scanning microscopy, providing what Wadduwage called the “ground truth.” The DEEP microscope then used physics to build a computational model of the image-formation process and put it to work simulating scattered input images. These simulated images trained the DEEP-squared AI model. Once the model produced reconstructions that matched Wadduwage’s ground-truth references, the researchers used it to capture new images of blood vessels in a mouse brain.
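As a rough illustration of that training pipeline — clean ground-truth images pushed through a physics-style forward model to generate scattered inputs, which then teach a network to invert the process — here is a short, hypothetical PyTorch sketch. The network architecture, code patterns, and blur below are invented for illustration and are not the published DEEP-squared model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the DEEP-squared training idea (not the authors'
# code): a network learns to map 32 coded, scattered measurements to one
# clean image.
class DeepSquaredNet(nn.Module):
    def __init__(self, n_patterns=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_patterns, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, y):           # y: (batch, 32, H, W) scattered images
        return self.net(y)          # (batch, 1, H, W) reconstructed image

def simulate_scattered(x_clean, codes, blur):
    # Stand-in for the physics-based forward model described in the article:
    # modulate the clean image by each excitation code, then blur.
    return blur(x_clean * codes)

model = DeepSquaredNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

blur = lambda t: nn.functional.avg_pool2d(t, 5, stride=1, padding=2)  # toy scattering
codes = (torch.rand(32, 64, 64) > 0.5).float()                        # invented patterns

for step in range(100):                      # training loop on toy data
    x_clean = torch.rand(8, 1, 64, 64)       # stand-in for ground-truth images
    y = simulate_scattered(x_clean, codes, blur)   # (8, 32, 64, 64)
    loss = loss_fn(model(y), x_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```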

“It is like a step-by-step process,” Wadduwage said. “In the first paper we worked on the optics side and reached a good working state, and in the second paper we worked on the algorithm side and tried to push the boundary all the way and understand the limits. We now have a better understanding that this is probably the best we can do with the current data we acquire.”

Still, Wadduwage has more ideas for boosting the capabilities of DEEP-squared, including improving instrument design to acquire data faster. He said DEEP-squared exemplifies cross-disciplinary cooperation, as will any future innovations on the technology.

“Biologists who did the animal experiments, physicists who built the optics, and computer scientists who developed the algorithms all came together to build one solution,” he said.



Evidence of the clean-energy transition abounds, with solar panels dotting rooftops, parking lots, and open spaces. In Massachusetts, future proliferation of these sunlight-soaking cells will be a high priority: About five times more solar energy will be needed to reach the state’s goal of net-zero greenhouse gas emissions by 2050.

But at what cost? Harvard Forest researchers have co-authored a landmark report detailing how many projects have required the clearing of carbon-absorbing forested areas, unnecessarily harming nature as well as undercutting environmental progress. The report, written with Mass Audubon, says stronger land-use and incentive policies would allow a smoother transition to clean energy sources without sacrificing more forests and farmlands.

“Growing Solar, Protecting Nature” was co-authored by Jonathan Thompson, research director at the Harvard Forest, a 4,000-acre natural laboratory that houses research and education in forest biology, ecology, and conservation. In their analysis, Thompson and collaborator Michelle Manion of Mass Audubon outline scenarios and recommendations for smart, sustainable solar development in Massachusetts.

“We found that we can achieve the commonwealth’s clean energy targets with very little impact on natural and working lands,” said Thompson, whose team specializes in geospatial analysis and land-use impacts to forest ecosystems. Over the past year, Thompson led the land-use and carbon modeling that portrayed different future solar impacts across the state under different scenarios. The team worked closely with Evolved Energy Research, which provided energy and economic consulting.

Since 2010, more than 500 ground-mount solar projects have been developed across the state, covering 8,000 acres, of which about 60 percent were forested, according to the report. This illustrates a terrible irony: Deploying solar often means clearing tree cover and losing the climate-change mitigation it provides by pulling carbon dioxide from the air, storing the carbon, and releasing oxygen into the atmosphere.

As the authors make clear, clean energy sources aren’t enough to meet climate goals. Removing carbon from the atmosphere is just as important, and Massachusetts’ famously abundant forests are a primary means to that end. Beyond the beautiful green canopies they provide, forests are a critical, natural carbon sink, and the clean energy transition can’t happen without them.

“We need to think not only about how many acres we’re using for solar development, but also which acres are being developed,” Thompson said. “Our core forests are incredibly valuable for wildlife habitat, biodiversity, and carbon storage, and we must do everything we can to protect them from further fragmentation.”

By shifting from large, ground-mount solar to more projects on rooftops, parking lots, and already-developed lands, Massachusetts can head off further, unnecessary damage to forests and farmlands while also meeting net-zero emission goals, the report states.

State policymakers expressed support for the new report’s findings. “‘Growing Solar, Protecting Nature’ provides a clear-eyed analysis of the impacts of the commonwealth’s solar policy to date and provides a roadmap for better aligning our goals of rapidly transitioning away from fossil fuels, protecting our forests that help to draw down carbon, and protecting biodiversity,” said Massachusetts climate chief Melissa Hoffer. “The joint crises of climate and biodiversity loss require fresh thinking, and this report offers just that.”

Key policy recommendations include:

  • Eliminating Solar Massachusetts Renewable Target (SMART) incentives for projects sited on core habitat and critical natural landscapes while increasing incentives for solar on rooftops and developed lands.
  • Investing in approaches that will reduce costs of rooftop and canopy solar projects.
  • Prioritizing solar with the lowest impacts to nature.
  • Supporting governmental, institutional, commercial, and industrial landowners in building solar near existing transmission infrastructure to reduce costs of energy distribution.
  • Launching a statewide planning effort to integrate clean energy and transmission infrastructure into the process of land development.
  • Funding permanent protection of Massachusetts’ highest-value natural and working lands.

“One of the goals of this work is to broaden how we think about the costs and benefits of the clean energy transition and what we need to fight climate change,” said Manion, Mass Audubon’s vice president for policy and advocacy. “Our results are clear: When we place real value on nature’s contribution to the fight against climate change and protection of biodiversity, the path forward with the lowest costs is the one that solves for both clean energy and nature. And it’s right in front of us.”



Quantum computers promise to reach speeds and efficiencies impossible for even the fastest supercomputers of today. Yet the technology hasn’t seen much scale-up and commercialization largely due to its inability to self-correct. Quantum computers, unlike classical ones, cannot correct errors by copying encoded data over and over. Scientists had to find another way.

Now, a new paper in Nature illustrates a Harvard quantum computing platform’s potential to solve the longstanding problem known as quantum error correction.

Leading the Harvard team is quantum optics expert Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative. The work reported in Nature was a collaboration among Harvard, MIT, and Boston-based QuEra Computing. Also involved was the group of Markus Greiner, the George Vasmer Leverett Professor of Physics.

An effort spanning the last several years, the Harvard platform is built on an array of very cold, laser-trapped rubidium atoms. Each atom acts as a bit — or a “qubit” as it’s called in the quantum world — which can perform extremely fast calculations.


Harvard physicists Mikhail Lukin (foreground) and Markus Greiner work with a quantum simulator.

File photo by Jon Chase/Harvard Staff Photographer

The team’s chief innovation is configuring their “neutral atom array” to be able to dynamically change its layout by moving and connecting atoms — this is called “entangling” in physics parlance — mid-computation. Operations that entangle pairs of atoms, called two-qubit logic gates, are units of computing power.

Running a complicated algorithm on a quantum computer requires many gates. However, these gate operations are notoriously error-prone, and a buildup of errors renders the algorithm useless.

In the new paper, the team reports near-flawless performance of its two-qubit entangling gates with extremely low error rates. For the first time, they demonstrated the ability to entangle atoms with error rates below 0.5 percent. In terms of operation quality, this puts their technology’s performance on par with other leading types of quantum computing platforms, like superconducting qubits and trapped-ion qubits.

However, Harvard’s approach has major advantages over these competitors due to its large system sizes, efficient qubit control, and ability to dynamically reconfigure the layout of atoms.

“We’ve established that this platform has low enough physical errors that you can actually envision large-scale, error-corrected devices based on neutral atoms,” said first author Simon Evered, a Harvard Griffin Graduate School of Arts and Sciences student in Lukin’s group. “Our error rates are low enough now that if we were to group atoms together into logical qubits — where information is stored non-locally among the constituent atoms — these quantum error-corrected logical qubits could have even lower errors than the individual atoms.”
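A back-of-the-envelope calculation shows why driving per-gate errors down matters so much. If each two-qubit gate fails independently with probability p, a circuit of N gates runs cleanly with probability of roughly (1 − p)^N — a number that collapses quickly unless p is very small. The Python sketch below is illustrative arithmetic under that simplified independent-error assumption, not the team’s error analysis:

```python
# Back-of-the-envelope: why per-gate error rates matter. With error
# probability p per two-qubit gate, the chance an N-gate circuit runs
# with no gate errors is roughly (1 - p) ** N. (Illustrative only; real
# error models and error-corrected scaling are far more involved.)
for p in (0.05, 0.005):          # 5 percent vs. the reported sub-0.5 percent regime
    for n_gates in (100, 1000):
        print(f"p={p:.3f}, {n_gates:5d} gates: "
              f"success ~ {(1 - p) ** n_gates:.3g}")
```

Running it shows that at a 5 percent error rate, a 100-gate circuit almost never finishes cleanly, while below 0.5 percent the same circuit succeeds most of the time — and grouping atoms into error-corrected logical qubits, as Evered describes, is what would push the effective rate lower still.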

The Harvard team’s advances are reported in the same issue of Nature as other innovations led by former Harvard graduate student Jeff Thompson, now at Princeton University, and former Harvard postdoctoral fellow Manuel Endres, now at California Institute of Technology. Taken together, these advances lay the groundwork for quantum error-corrected algorithms and large-scale quantum computing. All of this means quantum computing on neutral atom arrays is showing the full breadth of its promise.

“These contributions open the door for very special opportunities in scalable quantum computing and a truly exciting time for this entire field ahead,” Lukin said.

The research was supported by the U.S. Department of Energy’s Quantum Systems Accelerator Center; the Center for Ultracold Atoms; the National Science Foundation; the Army Research Office Multidisciplinary University Research Initiative; and the DARPA Optimization with Noisy Intermediate-Scale Quantum Devices program.



The COVID-19 pandemic seemed like a never-ending parade of SARS-CoV-2 variants, each equipped with new ways to evade the immune system, leaving the world bracing for what would come next.

But what if there were a way to make predictions about new viral variants before they actually emerge?

A new artificial intelligence tool named EVEscape, developed by researchers at Harvard Medical School and the University of Oxford, can do just that.

The tool has two elements: A model of evolutionary sequences that predicts changes that can occur to a virus, and detailed biological and structural information about the virus. Together, they allow EVEscape to make predictions about the variants most likely to occur as the virus evolves.

In a study published Wednesday in Nature, the researchers show that had it been deployed at the start of the COVID-19 pandemic, EVEscape would have predicted the most frequent mutations and identified the most concerning variants for SARS-CoV-2. The tool also made accurate predictions about other viruses, including HIV and influenza.

The researchers are now using EVEscape to look ahead at SARS-CoV-2 and predict future variants of concern; every two weeks, they release a ranking of new variants. Eventually, this information could help scientists develop more effective vaccines and therapies. The team is also broadening the work to include more viruses.

“We want to know if we can anticipate the variation in viruses and forecast new variants — because if we can, that’s going to be extremely important for designing vaccines and therapies,” said senior author Debora Marks, associate professor of systems biology in the Blavatnik Institute at HMS.

From EVE to EVEscape

The researchers first developed EVE, short for evolutionary model of variant effect, in a different context: gene mutations that cause human diseases. The core of EVE is a generative model that learns to predict the functionality of proteins based on large-scale evolutionary data across species.

In a previous study, EVE allowed researchers to discern disease-causing from benign mutations in genes implicated in various conditions, including cancers and heart rhythm disorders.

“You can use these generative models to learn amazing things from evolutionary information — the data have hidden secrets that you can reveal,” Marks said.

As the COVID-19 pandemic hit and progressed, the world was caught off guard by SARS-CoV-2’s impressive ability to evolve. The virus kept morphing, changing its structure in ways subtle and substantial to slip past vaccines and therapies designed to defeat it.

“We underestimate the ability of things to mutate when they’re under pressure and have a large population in which to do so,” Marks said. “Viruses are flexible — it’s almost like they’ve evolved to evolve.”

Watching the pandemic unfold, Marks and her team saw an opportunity to help: They rebuilt EVE into a new tool called EVEscape for the purpose of predicting viral variants.

They took the generative model from EVE — which can predict mutations in viral proteins that won’t interfere with the virus’s function — and added biological and structural details about the virus, including information about regions most easily targeted by the immune system.

“We’re taking biological information about how the immune system works and layering it on our learnings from the broader evolutionary history of the virus,” explained co-lead author Nicole Thadani, a former research fellow in the Marks lab.

Such an approach, Marks emphasized, means that EVEscape has a flexible framework that can be easily adapted to any virus.
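As a concrete, if simplified, illustration of that layering: a candidate mutation scores highly when the sequence model says it preserves viral function and when structural data say the mutated site is exposed to antibodies. The following Python sketch is hypothetical — the mutation labels, scores, and weighting are all invented, and it is not the released EVEscape code:

```python
import numpy as np

# Hypothetical sketch of the scoring idea described above (not the released
# EVEscape code): a variant is worrying if the generative sequence model says
# the mutation keeps the virus functional AND the mutated site is exposed to
# antibodies. All values below are invented for illustration.
rng = np.random.default_rng(1)
candidates = [f"S:{aa}{pos}" for aa, pos in zip("ADEG", (417, 484, 501, 614))]

fitness = rng.normal(size=len(candidates))    # stand-in: sequence-model log-likelihood
accessibility = rng.random(len(candidates))   # stand-in: antibody exposure, 0..1
dissimilarity = rng.random(len(candidates))   # stand-in: magnitude of chemical change

# Combine the evidence; the real tool aggregates terms like these, but the
# exact weighting here is made up.
score = fitness + np.log(accessibility + 1e-9) + np.log(dissimilarity + 1e-9)
for s, c in sorted(zip(score, candidates), reverse=True):
    print(f"{c}: escape score {s:.2f}")
```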

Turning back the clock

In the new study, the team turned the clock back to January 2020, just before the COVID-19 pandemic started. Then they asked EVEscape to predict what would happen with SARS-CoV-2.

“It’s as if you have a time machine. You go back to day one, and you say, I only have that data, what am I going to say is happening?” Marks said.

EVEscape predicted which SARS-CoV-2 mutations would occur during the pandemic with accuracy similar to that of experimental approaches that test the virus’s ability to bind to antibodies made by the immune system. EVEscape outperformed experimental approaches in predicting which of those mutations would be most prevalent. More importantly, EVEscape could make its predictions more quickly and efficiently than lab-based testing, since it didn’t need to wait for relevant antibodies to arise in the population and become available for testing.

Additionally, EVEscape predicted which antibody-based therapies would lose their efficacy as the pandemic progressed and the virus developed mutations to escape these treatments.

The tool was also able to sift through the tens of thousands of new SARS-CoV-2 variants produced each week and identify the ones most likely to become problematic.

“By rapidly determining the threat level of new variants, we can help inform earlier public health decisions,” said co-lead author Sarah Gurev, a graduate student in the Marks lab from the Electrical Engineering and Computer Science program at MIT.

In a final step, the team demonstrated that EVEscape could be generalized to other common viruses, including HIV and influenza.

Designing mutation-proof vaccines and therapies

The team is now applying EVEscape to SARS-CoV-2 in real time, using all of the information available to make predictions about how it might evolve next.

The researchers publish a biweekly ranking of new SARS-CoV-2 variants on their website and share this information with entities such as the World Health Organization. The complete code for EVEscape is also freely available online.

They are also testing EVEscape on understudied viruses such as Lassa and Nipah, two pathogens of pandemic potential for which relatively little information exists.

Such less-studied viruses can have a huge impact on human health across the globe, the researchers noted.

Another important application of EVEscape would be to evaluate vaccines and therapies against current and future viral variants. The ability to do so can help scientists design treatments that are able to withstand the escape mechanisms a virus acquires.

“Historically, vaccine and therapeutic design has been retrospective, slow, and tied to the exact sequences known about a given virus,” Thadani said.

Noor Youssef, a research fellow in the Marks lab, added, “We want to figure out how we can actually design vaccines and therapies that are future-proof.”

Additional authors: Pascal Notin, Nathan Rollins, Daniel Ritter, Chris Sander, and Yarin Gal.

Disclosures: Marks is an adviser for Dyno Therapeutics, Octant, Jura Bio, Tectonic Therapeutic, and Genentech, and is a co-founder of Seismic Therapeutic. Sander is an adviser for CytoReason Ltd.

Funding for the research was provided by the National Institutes of Health (GM141007-01A1), the Coalition for Epidemic Preparedness Innovations, the Chan Zuckerberg Initiative, GSK, the UK Engineering and Physical Sciences Research Council, and the Alan Turing Institute.