April 2022

Samantha C.W. O’Sullivan didn’t know she wanted to be a physicist when she first arrived on campus. She liked the subject, but the Adams House senior wasn’t sure what she could do with a physics degree and couldn’t quite picture herself as an academic.

Then she got to know Harvard physicist Jenny Hoffman, Clowes Professor of Science, who, as her first-year adviser, opened her eyes to the many possibilities in physics. Now O’Sullivan will graduate in May with a degree in physics and a plan to continue her studies at Oxford in the fall as a Rhodes Scholar. She’s grateful that those early conversations set her on a path as a researcher in condensed-matter physics, and that it was another woman who helped get her there.

“That was so powerful,” said O’Sullivan. “She was my first idea of what a physicist is. It really instilled in me an idea of ‘Oh, this can be me as well.’”

O’Sullivan, Maya Burhanpurkar, and Elizabeth Guo, also from the Department of Physics, were awarded U.S. Rhodes Scholarships earlier this academic year. All three women of color said they’d been inspired and guided by faculty like Hoffman, Assistant Professor of Physics Julia Mundy, Associate Professor Cora Dvorkin, and Melissa Franklin, the Mallinckrodt Professor of Physics, who in 1992 became the first woman to receive tenure in the department.


The three note that these professors not only work in a field traditionally dominated by men (at Harvard only a third of the concentrators identify as female) but have become leaders in it. And they demonstrate why representation matters.

“Having those people to not only look up to but when they also agreed to mentor you and guide you through the process of doing research makes it so much easier,” said Burhanpurkar, who graduated in November. “It makes the department, the field so much more welcoming and shows you what’s possible.”

O’Sullivan put it this way: “You can really only be what you know exists.”

The Washington, D.C., native, for example, worked in Hoffman’s lab researching the atomic structure of high-temperature superconductors and took courses in condensed-matter physics from professors like Mundy, who recently won a 2022 Sloan Research Fellowship, one of the most prestigious awards available to young researchers. Learning about the many deep questions and mysteries in these fields sparked an interest in this specific area of physics, one she wants to explore further.

O’Sullivan, who did a joint concentration in physics and African American studies, said she’ll be joining an experimental condensed-matter physics lab at Oxford, continuing the type of work she started with Hoffman and Mundy.

“I’m following in their footsteps professionally, in a way,” O’Sullivan said.

Burhanpurkar, a former Quincy House resident from Ontario, Canada, said Dvorkin’s dark-matter research group was the first she worked in that was run by a woman. She met Dvorkin while taking her graduate-level cosmology course in fall 2019 and later joined her lab. Burhanpurkar was no stranger to physics research, having worked in other labs at Harvard and in Canada, but this was a welcome change because women remain scarce in theoretical physics.

The trio also spoke about the ways various professors taught them to expand beyond physics and pursue other endeavors, whether they were academic, professional, or personal (like Hoffman, who moonlights as a marathon runner). Burhanpurkar singled out computer science professor Cynthia Dwork, along with Dvorkin.

Dvorkin “encouraged me to look into applying new research ideas to problems in cosmology that hadn’t previously been applied to physics and pursue them, and she was encouraging of my entrepreneurship,” said Burhanpurkar, who in 2020 went on to co-found Adventus Robotics, an autonomous robotics startup specializing in self-driving wheelchairs for hospitals and consumers. “She really just went above and beyond the requirements of supervising an undergraduate on a research project.”

Guo, a Cabot House resident from Plano, Texas, said the impact of the female faculty she worked with will have ripple effects for years. It encouraged her to pay it forward.

“I’ve been trying to do what these women have done for me,” said Guo, who worked in the Hoffman lab beginning her sophomore year, researching quantum materials and their transitions.

Guo helped restart the undergraduate chapter of the Women in Physics organization in the summer of 2019 and, during her junior and senior years, served as its chair. The organization helps support female students and postdoctoral researchers in the department through mentoring, workshops, and community-building events — often partnering with Hoffman, Mundy, Dvorkin, and Franklin.

Guo has helped put together informal chats where undergraduates can meet with female physicists over pastries, study breaks so students can work together in small groups, and, during remote learning, a virtual lab fair to help students connect with research groups.

At Oxford, Guo will seek a way to merge her love of science with her interest in law. At Harvard, she said, the group has become a welcoming safe space for the department’s undergraduate women. She recalled a recent conversation she had with a younger member of the department.

“She was telling me how she was really grateful for this group just because she had people she could talk to — a group of women whom she could reach out to and see that they had made their way through this department and that she could do the same,” Guo said.

O’Sullivan and Burhanpurkar have also worked to help make the College feel more welcoming to underrepresented students. O’Sullivan started and led the Generational African American Students Association, which aims to foster community among Harvard College students who identify as Generational African Americans and to connect them with the larger Black community and with Harvard, as well as to raise awareness and spark change on issues surrounding the legacy of slavery.

Burhanpurkar served as a board member for the undergraduate Women in Physics society and as Harvard’s undergraduate representative to the American Physical Society Inclusion, Diversity, and Equity Alliance. She tutored women in physics for two years through the Bureau of Study Counsel and, as co-president of the Society of Physics Students, restarted the mentorship program for younger physics concentrators.

The women hope their efforts — and those of faculty who routinely take part in outreach to increase diversity in the field — continue to pay off, given how important it is to grow the field.

“I want to make sure that, having survived some painful experiences as junior faculty, I can do some good and prevent other people from having some of those experiences, too,” said Hoffman.

Mundy, who as a Harvard undergraduate worked in Hoffman’s lab, and Franklin, the academic adviser for all three students (and even for Hoffman when she was a College student), said much headway has been made in improving things for women in physics. This Rhodes trio is part of the proof.

“I’m just lucky that I was there,” said Franklin. “The thing about these three is that they’re just so smart and so good at physics. I kind of wish I was as good as they were. Now, I’m thinking maybe I’ll just follow these women and do whatever they’re going to do,” she said, smiling.

Dvorkin agreed.

“My advice [to young women setting out in physics] is to follow their passion, to not be afraid of doing what they like,” Dvorkin said. “The field is changing; it’s improving; and they’re part of this improvement, so keep going and keep doing it for yourself and for future generations.”



John Holdren took his wife and kids, both still toddlers in strollers, to the first Earth Day celebration in California in 1970. At the time, Holdren recalled, most people were worried about air and water pollution and urban smog.

“Today, of course, climate change looms very large in the constellation of environmental challenges,” said Holdren, the Teresa and John Heinz Research Professor of Environmental Policy at the Harvard Kennedy School, during a panel on Thursday, the eve of Earth Day.

Hosted by the Harvard University Center for the Environment and the Office of the Vice Provost for Climate and Sustainability, the group of experts discussed progress scientists and policy makers have made — and, more critically, failed to make — since that first Earth Day. They also debated the challenges that could undermine today’s efforts to fight climate change, including Russia’s invasion of Ukraine and a divisive political climate that includes some who either deny the existence of the problem or believe the severity is overblown.

“I don’t see grief as the opposite of hope,” said Terry Tempest Williams, writer-in-residence at the Harvard Divinity School and environmental author and advocate, referring to the sense of loss shared by some over the damage already done. “But if we acknowledge the world is dying then we can move past the depression and do something.”

In 2016 Williams and her husband tried to do something big in Utah: They paid to lease about 1,120 acres of public land reserved for oil and gas drilling, planning to keep it dormant. The U.S. Bureau of Land Management subsequently canceled the lease. Williams’ state is experiencing some of the worst consequences of climate change, including bigger and more frequent wildfires, like the 2020 East Fork Wildfire, which burned close to 90,000 acres (about 140 square miles). Floods and mudslides rage through the fires’ burn areas. Birds have fled. People aren’t concerned about policy, Williams said; they worry whether they’ll have enough water (levels of both the Great Salt Lake and the Colorado River are plummeting due to diversion), whether their houses will burn, and what the whirling dust will do to their lungs.

Even the Arctic tundra is burning, said Holdren, who was also a science adviser to President Barack Obama. Wildfires, extreme storms, droughts, and vector-borne diseases like malaria are all spreading and getting worse. “We haven’t avoided the worst impact,” he said, thinking back to when he attended the adoption of the Paris Agreement in 2015, at which 196 parties agreed to hold global warming to well below 2 degrees Celsius while pursuing efforts to limit it to 1.5 degrees. Another 2015 attendee asked him whether he was happy with the agreement. “I would have been happy 20 years ago,” he responded.

Today, climate change is already affecting people where they live and work. Even those it does not directly affect can watch its impacts on screens. In the 1990s, these visuals didn’t yet exist to motivate people to act — and that inaction hurts today’s most vulnerable communities, like Indigenous Alaskan tribes, Holdren said. “People who have done the least to create the problem are suffering the worst impacts.”

“Our generation has really screwed up,” said Jim Stock, vice provost for climate and sustainability, who also served in Obama’s White House. While some progress has been made, especially in reducing smog and growing the wind and solar energy industries, “we should have done far more in the 1990s,” he said. “We allowed a political system to hold the science hostage.”

Today, Stock said, individuals can still make changes, like insulating their attics or eating less meat, but sparking systemic changes, like teaching climate change in public schools and putting pressure on institutions and politicians to live up to their promises, can be more impactful. As vice provost, Stock’s responsibilities include working with faculty, students, staff, and academic leadership from across the University to guide and further develop Harvard’s strategies for advancing climate research and its global impact.

Broader changes on a national or international scale are still hard to make, especially when other pressing challenges, like Russia’s invasion of Ukraine, demand immediate global attention. The recent energy crisis, a result of Europe’s dependence on Russian natural gas, points to a pattern, Stock said, where national security concerns often follow fossil fuels. The dependence makes us vulnerable, he continued, but while some see this as an opportunity to shift to more stable renewable energy sources, like wind and solar, others say it’s a reason to start more drilling.

“I think we’re in a terrible cycle. A lack of leadership, a lack of will,” Williams said.

But Dan Schrag, the director of the Harvard University Center for the Environment who moderated the discussion, sees room for hope — if not in his own generation, then in the next, which is far more engaged in climate change activism. “It’s never too late to keep fighting. If we can’t keep it to 1.5, let’s try for 2. If not 2, then 3,” he said, referring to the Paris Agreement goal.

To conclude, Schrag asked Williams to read an excerpt from her book “Refuge: An Unnatural History of Family and Place.”

“The eyes of the future are looking back at us, and they are praying for us to see beyond our own time,” Williams read. “Wild mercy is in our hands.”



The U.N.’s Intergovernmental Panel on Climate Change suggested in a recent report that with every degree of warming, global agricultural production will be reduced by 10 to 25 percent, threatening the food supply. A team of scientists, including Harvard Forest researcher Tim Rademacher, offers a possible solution in a new article in Communications Earth & Environment. They developed a map showing where the world’s major food crops could be relocated to maximize production and minimize environmental impact. The changes would yield big decreases in the carbon (71 percent), biodiversity (87 percent), and irrigation-water (100 percent) footprints of crop production. The Gazette spoke with Rademacher, who studies tree growth and vegetation at Harvard Forest, about the feasibility and potential impact of the proposal. The interview was edited for clarity and length.

Q&A

Tim Rademacher

GAZETTE: Why did you and your colleagues take on this project?

RADEMACHER: Food production is facing a complex dilemma, where the world population is increasing, so we need more food to feed the world. Agriculture already takes up more than half of the land that isn’t covered in ice. At the same time, climate change is already reducing agricultural production. There’s a real threat for the food production system when more and more food is needed, but agriculture is already such a strain on global ecosystems. It creates habitat loss, which is problematic for biodiversity. It emits loads of greenhouse gas emissions. The sector as a whole uses loads of fresh water. For us the big question was: How can we move to a system that minimizes these impacts but still produces enough food to feed the world?


GAZETTE: How does the map address this?

RADEMACHER: The map shows what would be the optimal way to feed the world at the moment, if we had a clean slate. We looked at data for the most important crops. We included 25 crops for which we could get consistent datasets, like wheat, barley, and soybean. The crops account for about 77 percent of global food production, so it’s a substantial part of what humanity eats. Then we asked: How much does each crop yield in a specific place, and what is the environmental impact there? Once we had calculated these things for all possible combinations of crops, locations, and impacts, we optimized it. We looked at how we can minimize the impact while keeping the yields sufficient to feed the world. Such an optimal configuration would capture large amounts of carbon, increase biodiversity, and cut agricultural use of fresh water to zero. For instance, in one optimized scenario, the impact of crop production on the world’s biodiversity would be reduced by 87 percent, drastically reducing the extinction risk for many species.
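The authors’ full model is far richer, but the basic shape of the optimization described above (allocate crops to locations so that total environmental impact is minimized while production targets are still met) can be sketched as a small linear program. Everything in the sketch below is invented for illustration; it is not the paper’s actual method or data.

```python
# Toy sketch of the kind of optimization described above; NOT the
# authors' actual model. Choose how much area x[c, l] to plant with
# crop c at location l so that total environmental impact is minimized
# while each crop still meets a production target. All numbers invented.
import numpy as np
from scipy.optimize import linprog

crops, locations = 2, 3
yield_ = np.array([[4.0, 2.0, 1.0],   # tons of crop c per unit area at location l
                   [1.0, 3.0, 2.0]])
impact = np.array([[5.0, 1.0, 2.0],   # environmental cost per unit area
                   [2.0, 1.0, 4.0]])
target = np.array([10.0, 9.0])        # tons of each crop the world needs
area_cap = np.full(locations, 6.0)    # farmable area available per location

n = crops * locations                 # decision vector: x flattened row-major
c = impact.ravel()                    # objective: total environmental impact

A_ub, b_ub = [], []
for k in range(crops):                # production: -sum_l yield[k,l]*x[k,l] <= -target[k]
    row = np.zeros(n)
    row[k * locations:(k + 1) * locations] = -yield_[k]
    A_ub.append(row)
    b_ub.append(-target[k])
for l in range(locations):            # land: total area planted at l <= cap
    row = np.zeros(n)
    row[l::locations] = 1.0
    A_ub.append(row)
    b_ub.append(area_cap[l])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print(res.x.reshape(crops, locations))   # optimal planting area per crop/location
```

The real study optimizes over thousands of grid cells, 25 crops, and multiple impact measures (carbon, biodiversity, and water) at once, but the structure, minimizing impact subject to feeding the world, is the same.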

GAZETTE: What would be some major changes to the current agricultural map?

RADEMACHER: There are some really striking differences in terms of where production is optimal and how much land is needed. The reimagined map would have largely new farming areas for most major crops — like wheat, rice, and maize. Generally, the optimal locations for production according to our model are, for example, in the Corn Belt in the midwestern U.S., south of the Sahel in Africa, and a few other places like Ukraine and Argentina. One specific example is California. We traditionally think of California as very productive with high agricultural yields for crops, but it seems that the environmental impacts are not necessarily worth the yields for staple crops like wheat, barley, and maize compared to other areas like the western Corn Belt or the Argentinian Pampas.

Another big change is that huge areas of farmland in Europe and India would be restored to their natural habitat. This would make space for natural ecosystems to breathe from the onslaught of development and the combined climate and biodiversity crisis. The other striking feature is really that tropical forests are basically completely avoided by the model because of their value for nature.

GAZETTE: Is this plan feasible?

RADEMACHER: One of the important things to remember is that doing a complete relocation is kind of utopian — certainly in the short run. The idea is to use this map as a general guide, identify hot spots or target areas where we want to focus our efforts on food production, and incentivize production in areas that minimize environmental impacts. What are the areas that have the highest environmental impact, the highest impact on nature? Even if we relocate only 5 to 10 percent of the worst offenders in terms of impact on the natural environment, we can potentially — depending on the intensity of farming use — reduce the environmental impact by half, according to our model.

GAZETTE: Are there other ways that projects like this, which embrace big ideas, can contribute to the overall effort to mitigate climate change?

RADEMACHER: It’s important to have a vision of what the ideal world would look like. These big ideas help us to better home in on what is possible and then identify workable solutions that need to integrate economic and social factors. One of the things that was important for us was we didn’t want to make this exclusively about carbon. We wanted to find a way to also include biodiversity and fresh water use because agriculture is a stressor in all of these different areas. By taking this globally consistent approach, we are able to say not only is this area really important for carbon, but it is generally really important when you consider these multiple factors. These types of ideas are also actually quite important because they can give us a little bit of hope.



How much do you know about Earth? The basics are easy, but there’s plenty about the planet most people haven’t heard. For this Earth Day, the Gazette put together a half dozen lesser-known facts with the help of Andrew Knoll, Harvard’s Fisher Research Professor of Natural History and author of the recent popular science book “A Brief History of Earth: Four Billion Years in Eight Chapters.” Now, according to Knoll …

“Earth is a planet that records its own history”

Layered in the planet’s rocks are physical, chemical, and biological inscriptions that chronicle what the world was like at different points in Earth’s history, Knoll says, citing the Grand Canyon as a prime example. While it’s spectacular to see, it’s also a giant library. Each layer of rock has its own story. Rocks at the bottom tell tales from around 800 million years ago, when there was oxygen in the atmosphere but probably not enough to oxygenate the ocean. Above those rocks are ones from 500 million years ago that tell stories about the emergence of land organisms.

“Animals only appear 85 percent of the way through Earth’s history”

All animals — from humans to dinosaurs to trilobites — were late to the party. The reason, he says, is that the history of life on Earth is largely microbial. For billions of years, bacteria and other single-celled organisms were the only life on Earth. In fact, the time interval from the first dinosaurs to today makes up only about 5 percent of the history of life on Earth. Not only do these microscopic lifeforms dominate the planet, but we should also be thanking them for transforming it. Take the cyanobacteria, which ingest carbon dioxide and release oxygen. Some billions of years ago, they started producing more oxygen than the planet’s natural processes consumed, so oxygen began to build up in the atmosphere, paving the way for a planet habitable for animals.

“All the possibilities of life, air, oceans, and continents were put into position as Earth was being created more than 4.5 billion years ago” 

Knoll points out that Mars and the other planets likely could never have been like Earth because they didn’t have the right pieces from the beginning. Earth, on the other hand, received all the ingredients it would need as our solar system formed to transform into a big “blue paradise.”

“For nearly the first half of our planet’s history, air and oceans were essentially oxygen-free”

Modern life is bathed in an atmosphere rich in oxygen. That leads to the thinking that life can’t exist without oxygen, but that line of thinking is wrong, says the paleontologist. This goes back to microbes: Plenty of them on the early Earth lived for billions of years with very little oxygen or none at all.

“Modern Earth is not necessarily representative of our planet as it has existed through time”

Earth is a dynamic system, and it’s changing all the time, according to Knoll. The change, however, is measured geologically, over millions or billions of years. Five hundred million years ago, during the Cambrian Period, for example, the continents were arranged very differently, with North America and northern Europe side by side and all the southern continents gathered into one supercontinent. We live at a particular moment in Earth’s history that’s a culmination of everything that came before. But here’s the kicker: What will the planet look like 500 million years from today? Would we recognize it?

“In Earth history, high rates of environmental change commonly coincide with elevated rates of extinction” 

This one clearly doesn’t bode well. Today, our planet is changing at a rate seldom observed in the geologic record, he notes. Since the 1950s, the amount of carbon dioxide in the atmosphere has increased nearly 30 percent. This has warmed the planet, made seawater more acidic (affecting many species), and diminished oxygen in deep-sea environments. Consider this: 252 million years ago, massive volcanism drove similarly strong and rapid environmental change that eliminated most animal species on Earth. Knoll says whether or not the 21st century will also end in mass extinction is up to us.



Don’t be fooled by the name. While 3D printers do print tangible objects (and quite well), how they do the job doesn’t actually happen in 3D, but rather in regular old 2D.

Working to change that is a group of former and current researchers from the Rowland Institute at Harvard.

First, here’s how conventional 3D printing works: The printer lays down flat layers of resin, one atop another, from bottom to top; each layer hardens into plastic when exposed to laser light. Eventually, the object, such as a skull, takes shape. But if a piece of the print overhangs, like a bridge or the wing of a plane, it requires some type of flat support structure to print, or the overhanging resin will collapse.

The researchers present a method to help the printers live up to their names and deliver a “true” 3D form of printing. In a new paper in Nature, they describe a technique of volumetric 3D printing that goes beyond the bottom-up, layered approach. The process eliminates the need for support structures because the resin it creates is self-supporting.

“What we were wondering is, could we actually print entire volumes without needing to do all these complicated steps?” said Daniel N. Congreve, an assistant professor at Stanford and former fellow at the Rowland Institute, where the bulk of the research took place. “Our goal was to use simply a laser moving around to truly pattern in three dimensions and not be limited by this sort of layer-by-layer nature of things.”

The key component of their novel design is turning red light into blue light by adding what’s known as an upconversion process to the resin, the light-reactive liquid used in 3D printers that hardens into plastic.

In conventional 3D printing, resin hardens in a flat, straight line along the path of the light. Here, the researchers use nanocapsules to add chemicals so that the resin reacts only to a specific kind of light: the blue light created by the upconversion process at the focal point of the laser. That beam is scanned in three dimensions, so the printer can pattern in 3D without layering material onto a support. The resulting resin also has a greater viscosity than in the traditional method, so it can stand support-free once it’s printed.

“We designed the resin, we designed the system so that the red light does nothing,” Congreve said. “But that little dot of blue light triggers a chemical reaction that makes the resin harden and turn into plastic. Basically, what that means is you have this laser passing all the way through the system and only at that little blue [dot] do you get the polymerization, [only there] do you get the printing happening. We just scan that blue dot around in three dimensions and anywhere that blue dot hits it polymerizes and you get your 3D printing.”

The researchers used their printer to produce a 3D Harvard logo, Stanford logo, and a small boat, a standard yet difficult test for 3D printers because of the boat’s small size and fine details like overhanging portholes and open cabin spaces.

The researchers, who included Christopher Stokes from the Rowland Institute, plan to continue developing the system for speed and to refine it to print even finer details. Volumetric 3D printing is seen as a potential game changer because, at full potential, it will eliminate the need for complex support structures and dramatically speed up the process. Think of the “replicator” from “Star Trek” that materializes objects all at once.

But right now, the researchers know they have quite a ways to go.

“We’re really just starting to scratch the surface of what this new technique could do,” Congreve said.



Forgetting can be a blessing and a curse. Some who’ve experienced a traumatic event cannot seem to forget, while others seem only to forget, and all too quickly.

Dilemmas like these have led neuroscientists to question how forgetting actually works in the brain and whether it can be sped up or slowed down. They are still a long way from understanding the process well enough to provide answers. But a group of Harvard-led researchers has moved a small step closer.

In a new study, the scientists, using C. elegans worms, a model organism for brain research, found that forgetting doesn’t reverse or erase the changes in the brain that result from learning, as some theories suggest.

Instead, forgetting generates a novel brain state that’s different from either the one before the learning happened or the one that exists while the learned behavior is still remembered. In other words, what is forgotten doesn’t completely go away and can be reactivated with a kind of jump start.

“After forgetting, we can often be reminded of what we learned before, and our brain is no longer in the naive state,” said Yun Zhang, professor of organismic and evolutionary biology and member of Harvard’s Center for Brain Science. “If we had a party and then several months later, we actually forgot: ‘Oh, when did I have that party? Who went to the party?’ And then your friend may say, ‘Oh, remember this and that. Remember, we actually sang a song for you.’ All of a sudden, you’ll remember, right?”

The research, published in Science Advances, sheds new light on how forgetting takes place in the brain on a systems level and on molecules the researchers found that appear to be able to speed or slow it.

The work could one day be used to understand mental health issues in which forgetting goes wrong, happening either too slowly or too quickly. It could, for instance, hold keys to addressing disorders like post-traumatic stress, in which aversive memories aggressively persist.

“The mechanisms that this study provides would give us entry points to think about what may have gone wrong with those neurological diseases,” said Zhang. “It helps us to make hypotheses on the molecules involved and the processes engaged, as well as the activity of the neurons that are important for forgetting, and to propose ways to understand the pathology of related neurological diseases.”

Forgetting is part of normal brain function due to the limited capacity of the brain. Much research has gone into how memories form, but less has gone into the nature of forgetting or how it happens in the brain. Some studies suggest that when a memory is forgotten, it’s simply erased, and the learning is lost. Another possibility is that the memory and the learning just become harder to access during the forgetting process but remain in some form.

The work from members of Zhang’s lab — led by postdoctoral scholars He Liu and Taihong Wu — and collaborators leans toward the latter theory.

The researchers taught the worms to identify by smell and avoid an infectious bacterial strain that makes them sick. But an hour later, the worms forgot. The researchers then analyzed the brain activity of these worms and the genes expressed in their nervous systems.

Comparing them to worms that had never learned the behavior or had just finished the training, the researchers saw that the neural activity and gene expression of worms that forgot the behavior neither returned to the naive state from before nor did they match the neural activity of worms that had just been trained. They were different.

The scientists also looked at whether the worms that had forgotten the training could be reminded of it, and the answer was it appears they could. Usually, it takes about three to four hours to train the worms, but those that were being retrained completed the process in about three minutes.

“There’s still memory traces in their brain that can be woken up, that can be reactivated,” Zhang said.

Zhang and her colleagues plan to use this study as a starting point for continuing to look at the mechanisms of forgetting, and how it can eventually be applied to mental health issues.

“This is just the beginning for us to understand forgetting, a brain process essential for daily activities,” Zhang said.



Game theory is the theoretical framework embraced by many mathematicians, psychologists, and economists as a tool for analyzing rational and strategic social interactions and decision-making.

But according to two Harvard scholars, game theory can also help explain ostensibly irrational human behavior, which operates well below the surface of our daily awareness, guiding our actions. In their new book “Hidden Games: The Surprising Power of Game Theory to Explain Irrational Human Behavior,” they suggest that that subconscious process can help us understand everything from our aesthetic tastes to our altruism.

Co-authors Moshe Hoffman and Erez Yoeli spoke about their latest work last week during a virtual Harvard Science Book Talk.

Game theory, they suggested, can help explain why highly politically partisan cable news shows that only present one-sided discourse or information maintain steady viewership. It’s because those tuning in aren’t interested in the expected aim of uncovering the truth, said Yoeli, a lecturer in Harvard’s Economics Department, but rather in “signaling something about a group membership, or [persuading] others that their political side is the good side.”

Hoffman, who also lectures in Harvard’s Department of Economics, said that much of human behavior is “shaped by social pressures, by either the effect you will have on other people by signaling these things, or the effect other people will have on you by positively reinforcing those behaviors.”

In the cable news example, the way the game gets hidden, they said, is when one begins to believe the spin. “You end up becoming a motivated reasoner … the social thing of trying to persuade gets internalized and ends up affecting your own beliefs in a way that it ends up being hidden,” said Hoffman, who is also a research scientist at the Max Planck Institute for Evolutionary Biology and a research fellow at MIT’s Sloan School of Management.

Hidden games can also help explain the real motivation behind some altruistic activities, said the authors. Take recycling. While most believe that they recycle because it helps the environment, on a subconscious, hidden level, they may be doing good for the planet because it’s good for their reputations.

In one field experiment, Yoeli demonstrated that when a selfless act was more public, more people took part. Instead of having a public utility send out direct letters asking people to take part in an energy efficiency/blackout program in their building, he suggested they post a sign-up sheet in the building’s lobby. The results showed that when people got public credit for their good deeds, enrollments went up. Why? Because the act is observable, which helps build your brand, suggested Hoffman, noting that for such highly social creatures as humans, personal brand is critical. Those with stronger reputations build more trust with others, he said, which can lead to stronger relationships and greater cooperation.

“What we are saying is that humans evolved a psychology to play these games,” said Yoeli, who is also a research scientist at the Sloan School. “They have tastes and beliefs, ideologies, intuitions, that help them do that. Sometimes those games are about reputation, sometimes they are about signaling things, sometimes they are about group identity and persuasion, sometimes they are about other things.”



“I think, therefore I am” has been a kind of human slogan ever since the 17th-century French philosopher René Descartes wrote his famous phrase. But scientists like Stephen Fleming might add another layer: “I think about thinking, therefore I am.”

“We often take this for granted,” Fleming, who studies metacognition, said last week during a virtual Harvard Science Book Talk presented by the University’s Division of Science, Cabot Science Library, and Harvard Book Store. Going for a walk, playing piano, or solving a math problem is a more visible form of thinking, he continued. But metacognition — self-awareness or thinking about thinking — is the invisible judge evaluating decisions such as what to eat for lunch, whom we want as a partner, and whether our convictions are right or wrong.

Speaking with Elizabeth Phelps, the Pershing Square Professor of Human Neuroscience, Fleming discussed his new book, “Know Thyself: The Science of Self-Awareness.” The conversation spanned the origins of metacognition, who excels at it (hint: kids do not), and how this field touches everything from ancient Greece to MRIs, dolphins to self-driving cars, anxiety to the origins of human consciousness.

Plato, Socrates, and other Greek philosophers believed self-awareness, or the ability to know one’s own mind, was the key to wisdom, said Fleming, principal investigator at the Wellcome Centre for Human Neuroimaging, University College London, where he leads the Metacognition Group. Centuries later, Descartes ran experiments to try to unravel this uniquely human capability. But it wasn’t until the last 50 years — and the invention of neuroimaging technology like MRIs — that scientists could finally start measuring what this murky, abstract process looks like in the human brain.

The human brain must grapple with a constant and chaotic barrage of sights, sounds, and smells. “It’s locked in this dark skull,” Fleming said. “It doesn’t have access to the outside world. It just has noisy sensory inputs.” Metacognition helps determine which inputs to prioritize and what, despite the onslaught, you still do not know. But even this higher-level thinking has gaps, to the benefit of ventriloquists and other illusionists. Because the human brain treats vision as more reliable than hearing, Fleming said, seeing a dummy talk is more convincing than hearing the performer’s mumbles.

Metacognition also helps humans predict and track actions, automatically tipping the body forward, for example, to compensate for an escalator’s initial jerk (if that escalator is broken, most people tip anyway and stumble, a meta-miscalculation).


While low-level metacognition has been measured in infants just 6 months old, “kids below the age of 3 or 4 are notoriously bad at knowing whether they know the answer or not,” Fleming said. His 3-year-old son, Finn, for example, is quick to claim he knows all about something but, when pressed, admits that he doesn’t actually know. “Before that age, kids are more concrete,” Fleming said. “They think everyone sees the world the way they do.” But which comes first — the recognition of other minds or our own — is still a chicken-and-egg conundrum in the study of metacognition.

Even in “fully formed adults,” metacognition can differ from person to person and even culture to culture (a preliminary study performed by Fleming and colleagues in China found that collectivist societies, like China, have better metacognition than individualist ones, like Great Britain).

Yet, studies have found few differences across genders, except when it comes to confidence in one’s ability to perform a task, which, Fleming said, “becomes a self-fulfilling prophecy.” Confidence, whether merited or not, can boost performance. But high metacognition does not necessarily correspond with intelligence — you can be highly intelligent but unaware of how you’re performing on a specific task. In fact, studies have shown a correlation between anxiety or depression and high metacognition, perhaps because of an acute sensitivity to errors.

“Metacognition promotes good decision-making,” Fleming said. But for highly skilled athletes or musicians, this constant self-monitoring can get in the way.

“So, can you turn it off?” Phelps asked.

“That’s really hard to do voluntarily,” Fleming said. On the other hand, stress, injury, or disease can shut it down involuntarily — a potential complication for the legal system, which is founded on the idea of conscious intention.

Fleming also recently partnered with roboticists to study the overlap between metacognition and artificial intelligence. Machine learning might be a form of “thinking,” but robots tend to be overconfident when faced with a new situation, Fleming said.

“Obviously, you wouldn’t want your self-driving car to be overconfident,” he said.

Even if humans can no longer outthink today’s computers, they can still best them at thinking about thinking.



The most distant galaxy on record has been spotted by an international team of astronomers, including researchers at the Center for Astrophysics | Harvard & Smithsonian.

Named HD1, the galaxy candidate is some 13.5 billion light-years away and is described Thursday in the Astrophysical Journal. In an accompanying paper published in the Monthly Notices of the Royal Astronomical Society Letters, scientists have begun to speculate exactly what the galaxy is.

The team proposes two ideas: HD1 may be forming stars at an astounding rate and is possibly even home to Population III stars, the universe’s very first stars — which, until now, have never been observed. Alternatively, HD1 may contain a supermassive black hole about 100 million times the mass of our sun.

“Answering questions about the nature of a source so far away can be challenging,” says Fabio Pacucci, lead author of the MNRAS study, co-author of the discovery paper in ApJ, and an astronomer at the Center for Astrophysics. “It’s like guessing the nationality of a ship from the flag it flies, while being far away ashore, with the vessel in the middle of a gale and dense fog. One can maybe see some colors and shapes of the flag, but not in their entirety. It’s ultimately a long game of analysis and exclusion of implausible scenarios.”

HD1 is extremely bright in ultraviolet light. To explain this, “some energetic processes are occurring there or, better yet, did occur some billions of years ago,” Pacucci says.

Timeline displays the earliest galaxy candidates and the history of the universe. Credit: Harikane et al., NASA, EST and P. Oesch/Yale

At first, the researchers assumed HD1 was a standard starburst galaxy, a galaxy that is creating stars at a high rate. But after calculating how many stars HD1 was producing, they obtained “an incredible rate — HD1 would be forming more than 100 stars every single year. This is at least 10 times higher than what we expect for these galaxies,” Pacucci says.

That’s when the team began suspecting that HD1 might not be forming normal, everyday stars.

“The very first population of stars that formed in the universe were more massive, more luminous and hotter than modern stars,” Pacucci says. “If we assume the stars produced in HD1 are these first, or Population III, stars, then its properties could be explained more easily. In fact, Population III stars are capable of producing more UV light than normal stars, which could clarify the extreme ultraviolet luminosity of HD1.”

A supermassive black hole, however, could also explain the extreme luminosity of HD1. As the black hole gobbles down enormous amounts of gas, the region around it may emit high-energy photons.

If that’s the case, it would be by far the earliest supermassive black hole known to humankind, observed much closer in time to the Big Bang compared to the current record-holder.

“HD1 would represent a giant baby in the delivery room of the early universe,” says Avi Loeb, an astronomer at the Center for Astrophysics and co-author of the MNRAS study. “It breaks the highest quasar redshift on record by almost a factor of two, a remarkable feat.”

HD1 was discovered after more than 1,200 hours of observing time with the Subaru Telescope, VISTA Telescope, UK Infrared Telescope, and Spitzer Space Telescope.

“It was very hard work to find HD1 out of more than 700,000 objects,” says Yuichi Harikane, an astronomer at the University of Tokyo who discovered the galaxy. “HD1’s red color matched the expected characteristics of a galaxy 13.5 billion light-years away surprisingly well, giving me a little bit of goosebumps when I found it.”

The team then conducted follow-up observations using the Atacama Large Millimeter/submillimeter Array (ALMA) to confirm the distance, which is 100 million light-years farther than that of GN-z11, the current record-holder for the most distant galaxy.

Using the James Webb Space Telescope, the research team will once again observe HD1 to verify its distance from Earth. If current calculations prove correct, HD1 will be the most distant — and oldest — galaxy ever recorded.

The same observations will allow the team to dig deeper into HD1’s identity and confirm whether one of their theories is correct.

“Forming a few hundred million years after the Big Bang, a black hole in HD1 must have grown out of a massive seed at an unprecedented rate,” Loeb says. “Once again, nature appears to be more imaginative than we are.”
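As a rough check on the numbers in this story, converting a galaxy’s redshift into a lookback time is a standard cosmology calculation. Here is a minimal sketch using the astropy library; the redshift value z = 13.3 is an assumption of this example (roughly HD1’s reported photometric redshift), not a figure taken from the text above.

```python
# Minimal sketch (not from the paper): converting a redshift into a
# lookback time using astropy's built-in Planck 2018 cosmology.
# z = 13.3 is an assumed value chosen to match the ~13.5-billion-year
# lookback time quoted above.
from astropy.cosmology import Planck18

z = 13.3
print(Planck18.lookback_time(z))   # ~13.5 Gyr: the light left HD1 that long ago
print(Planck18.age(z).to('Myr'))   # ~320 Myr: the universe's age when the light left
```

The printed age at that redshift, a few hundred million years after the Big Bang, matches Loeb’s description above.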



As fans of March Madness cheer upsets, no-look passes, and clutch shots, Harvard biologist Hopi Hoekstra, notebook in hand, has indulged a slightly less-mainstream basketball obsession: documenting whether team mascots are really what they say they are.

Here’s a bracket-buster: Many of them aren’t.

“There are a lot that are pretty close and then there are some that are ridiculously off,” said Hoekstra, a professor of organismic and evolutionary biology and of molecular and cellular biology and curator of mammalogy in the Museum of Comparative Zoology.

Hoekstra has the stat sheets to prove it. She’s kept notes for 23 years on animal mascots that aren’t the animals they’re supposed to be. This year, for Sports Illustrated, she looked at the 68 teams that made the NCAA men’s basketball tourney.

Wildcats (25 in total) are among the biggest offenders. For instance, Kentucky’s wildcat is actually more of a mountain lion, except it doesn’t seem to have a tail. The features of Arizona’s Wilbur wildcat — reddish color, facial features, and lack of a tail — suggest he’s more of a bobcat. Meanwhile, Villanova, set to play Saturday in the Final Four, has the most accurate-looking wildcat of the tournament, Hoekstra says.

A lot of the biological incorrectness arises because mascot names are often nicknames applied to many different species (like “wildcat”), and those nicknames get carried forward even when they’re wrong, she said.

The South Dakota jackrabbit would be accurate if not for its floppy ears, which are more characteristic of domestic rabbits. The peacock of Saint Peter’s looks fine, but the coloration of the tail is not right (which Hoekstra admits is a bit nitpicky).

One of the most egregious misplays on Hoekstra’s list is the TCU Horned Frogs, who really should be called the horned lizards. Texas horned lizards have frog-shaped bodies and are nicknamed horned toads, but biologically they are lizards. “That’s a big discrepancy — amphibians versus reptiles,” Hoekstra said. “That’s millions of years of evolution between them.”

Hoekstra started as a mascot monitor at a Minnesota Golden Gophers game in 1999.

“We sat there for like eight hours and I wasn’t a particular college basketball fan so I entertained myself watching the mascots,” Hoekstra said. “And so when out comes this thing that is clearly not a gopher and instead a squirrel, I was incensed.”

Her complaint to the school’s athletic director went unanswered.

Nowadays Hoekstra doesn’t bother with official channels, but adds the offender to her list and revels in the madness.



What do quantum computers have to do with smog-filled London streets, flying submarines, waistcoats, petticoats, Sherlock Holmesian mysteries, and brass goggles?

A whole lot, according to Nicole Yunger Halpern. Last week, the theoretical physicist joined Jacob Barandes, co-director of graduate studies for physics, to discuss her new book, “Quantum Steampunk: The Physics of Yesterday’s Tomorrow.” In it, Yunger Halpern dissects a new branch of science — quantum thermodynamics, or quantum steampunk as she calls it — by fusing steampunk fiction with nonfiction and Victorian-era thermodynamics (the heat and energy that gets steam engines pumping) with quantum physics. Yunger Halpern presents a whimsical lens through which readers can watch a “scientific revolution that’s happening in real time,” Barandes said, exploring mysteries even Holmes couldn’t hope to solve, such as why time flows in only one direction.

“This fusion of old and new creates a wonderful sense of nostalgia and adventure, romance and exploration,” Yunger Halpern said during a virtual Harvard Science Book Talk presented by the University’s Division of Science, Cabot Science Library, and Harvard Book Store. In steampunk, she continued, “fans dress up in costumes full of top hats and goggles and gears and gather at conventions. What they dream, I have the immense privilege of having the opportunity to live.”

Yunger Halpern, a fellow of the Joint Center for Quantum Information and Computer Science and an adjunct assistant professor at the University of Maryland, uses steampunk, which marries Victorian style and futuristic technology, to introduce readers to the complex and fantastical world of quantum thermodynamics. The new field merges quantum physics, information science, and energy science to study new ways to power cars, charge batteries, encrypt information, and cool quantum computers.


“How can we extend the Victorian theory of thermodynamics from large everyday-type systems, such as steam engines, to small quantum and information-processing systems?” Yunger Halpern asked, and then answered: “We reach back to the past and head to the future.”

To guide their conversation, Barandes focused on Yunger Halpern’s ability to explain dense quantum concepts with whimsy. While “Quantum Steampunk” is mostly nonfiction, Yunger Halpern introduces each chapter with a steampunk-style fictional tale, featuring characters with names like Audrey and Baxter. These aren’t just Victorian-era scientific renegades flying around in dirigibles and tinkering with time machines; they’re the steampunk versions of Alice and Bob — aliases scientists commonly give to quantum particles to make their behaviors easier to describe.

To introduce her field’s many abstract definitions, Yunger Halpern created a menagerie of metaphors. For example, she compares weak quantum measurements — used to examine a quantum system without disturbing it — to a hummingbird “that alights very softly on your shoulder,” Yunger Halpern said.

For the integral concept of entropy, Yunger Halpern relied on yet another bird — Edgar Allan Poe’s raven, to be exact. Entropy is, very simply, a measure of uncertainty. In the very small realm of quantum physics, scientists must disentangle more entropies, or uncertainties, to exert control over quantum particles. In larger systems with more particles, like steam engines, fewer and fewer entropies matter.
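For readers who want the textbook definitions behind that metaphor (standard formulas, not passages from the book), the Shannon entropy of a classical probability distribution and its quantum counterpart, the von Neumann entropy of a density matrix, are:

```latex
% Standard definitions, included here only to make "entropy as a
% measure of uncertainty" concrete; they are not from the book.
H(p) = -\sum_i p_i \log p_i
\qquad
S(\rho) = -\operatorname{Tr}\!\left(\rho \log \rho\right)
```

One standard reading of the “collapse” Yunger Halpern describes: small quantum systems are characterized by whole families of generalized (Rényi) entropies, which all converge to a single thermodynamic entropy in the limit of many particles, such as the steam in an engine.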

“It reminded me of a portion of ‘The Raven,’” Yunger Halpern said, before leaping into a recitation of several stanzas of Poe’s famous poem. “All his dreams, nightmares, fears, and horrors collapsed onto this one raven,” she said. “That is perhaps an out-there way of saying that all of these quantum entropies collapse onto just one in conventional thermodynamics.”

Before asking his next question, Barandes leapt in to pick up where Yunger Halpern left off, reciting another stanza of “The Raven” with almost quantum speed. (“My wife is rolling her eyes,” he said when, at the discussion’s conclusion, Yunger Halpern proposed they partner up on a longer recitation for Poetry Month.)

Barandes also asked Yunger Halpern to speculate about the future of quantum computers and when they might go global. Such machines can do more for less, Yunger Halpern said, meaning they can compute far more complex problems, faster, and with fewer resources than classical computers.

Earlier in her talk, Yunger Halpern showed a photo of a modern-day quantum computer which, like a good steampunk machine, looked like a meticulous web of delicate metals. But that contraption, she explained, was just the computer’s refrigerator; the quantum computer was a tiny, vulnerable chip positioned in the center, like the crown jewels in the Tower of London.

Quantum computers are still a few decades away, Yunger Halpern said, but they could one day tackle problems like decoding traffic flows or cracking near-impenetrable encryption. Still, she said, “not all problems are well-suited for quantum computers. For instance, I don’t recommend doing your taxes on one.” And while she doesn’t foresee one of these analytical behemoths in every home office, she acknowledged that today’s physicists have made discoveries that the founders of quantum theory thought impossible.

“I really hope,” Yunger Halpern said, “that quantum computers enhance our lives in ways we can’t imagine today.” Or, to borrow from Poe’s Victorian verse she’d recited minutes before: “I stood there … dreaming dreams no mortal ever dared to dream before.”


