February 2023

Wondering is a series of random questions answered by experts. For this entry, we asked the astrophysicist Avi Loeb, founding director of Harvard’s Black Hole Initiative, to help us picture the scariest void in the universe.

 

A black hole is a region in space where gravity is so strong that not even light can escape. It is an extreme structure of space and time. One way black holes form is when a star exhausts its nuclear fuel and collapses because there is no longer an energy supply to support it against gravity. Once matter crosses the horizon of the black hole, it cannot emit any light. In the only two images captured of black holes, a ring of light surrounds the dark center because the infalling matter moves extremely fast, close to the speed of light, and releases a huge amount of energy and radiation. We call them black holes because once matter falls in and there is no more radiation, the object becomes completely dark and invisible to us.
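
For readers who want a number to attach to the “horizon,” the size of that point of no return for a simple, non-rotating black hole is given by the standard Schwarzschild-radius formula (added here for reference; it is not part of Loeb’s remarks):

$$ r_s = \frac{2GM}{c^2} \approx 3\,\mathrm{km} \times \frac{M}{M_\odot} $$

So a black hole with the mass of the sun would have a horizon roughly 3 kilometers in radius, and one of a billion solar masses would have a horizon about 3 billion kilometers in radius, roughly 20 times the Earth-sun distance.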


Is a black hole really a hole? We don’t know the answer. What we know is that near the center of the black hole there is a point called the singularity, where the density of the matter becomes infinite and gravity is very strong. Einstein’s theory of general relativity is unable to predict what happens there. In principle, you can imagine an astronaut going into a black hole. As he moved closer to the singularity, his body would be ripped apart by gravity. I once gave a presentation about black holes to fourth-graders at my daughter’s school. A boy asked me what would happen to his body if he got inside a black hole. As I started to explain, the teacher stopped me, saying that the kids would have nightmares.

One of the most common misconceptions about black holes is that they are a portal to some other universe. We don’t know whether there is another universe. We don’t have any evidence for that. All we see is our own universe, and although we know the universe is expanding, we don’t know if there is another place that we can go to. If I had to guess, I would say that if you fall into a black hole, you end up inside a black hole. Since we don’t have a quantum theory of gravity, we can’t really calculate what exactly happens there.

The study of black holes is important for two reasons. One is environmental, which is what an astrophysicist cares about: Black holes have a huge impact on their environment because they’re very efficient at making energy out of matter, and the energy they release can affect the evolution of galaxies significantly. The second aspect is more fundamental. We don’t know what happens at the center of a black hole because Einstein’s theory is incomplete. Any theory that we develop to unify quantum mechanics, which describes the behavior of matter on a small scale, and gravity will unveil the secrets of the universe. If we understand what happens near the center of a black hole, we will also be able to figure out what happened before the Big Bang.
— As told to Liz Mineo, Harvard Staff Writer

 



Complete understanding of many of the most fundamental forces at play in our world has proven slippery. A new experiment by a group of researchers, including Daniel Jafferis of Harvard’s Department of Physics and peers from Caltech, represents a small step in advancing our view of the relationship between gravity, which shapes the universe, and quantum mechanics, the theoretical framework governing the motion and interaction of subatomic particles.

For more than 100 years, the common description of gravity has stemmed from Albert Einstein’s theory of general relativity — that gravity relates to the curvature of space-time. In the last 25 years scientists have discovered there is an intimate connection between gravity and quantum mechanics. Among these connections are wormholes, also known as bridges or tunnels of space, which Einstein described in 1935 as passages through space-time that could connect two black holes.

Jafferis’ team has for the first time conducted an experiment based in current quantum computing to understand wormhole dynamics. “It is a quantum simulation of an extraordinarily tiny wormhole,” said Jafferis. “Before this, it was not clear with the devices we have now if one could do it at all.” The research was published in Nature.

Einstein’s general relativity theory described wormholes as two black holes whose interiors are joined: something could jump in from each side and meet in the middle, but neither could get out again, the proverbial trap in a black hole.

“It’s a beautiful idea from the 1930s that wormhole interiors are joined, but it has not been known if this concept is an operationally meaningful statement,” Jafferis said. “But now we know that wormhole configuration does indeed have a physical interpretation; it corresponds to the two separate black holes in a highly entangled space.”

In recent years, scientists have built physical devices like quantum computers to create simulations in which they can manipulate the entanglement of quantum states in a controlled way. Jafferis’ team wanted to see whether they could create a simplified model that would emulate the gravitational aspects of a wormhole. Could they make a quantum system where the pattern of entanglement is structurally of the right sort so that it looks like sending something through a wormhole?

In lab experiments, the researchers introduced a connection between the two sides, making the wormhole traversable. Signals could be sent in one side and come through the other, maybe not quickly, but without getting stuck. In the quantum language this is called “quantum teleportation,” a way of sending quantum information using shared entanglement. “The information is not sent through the direct signal, but in a more subtle way that uses entanglement,” Jafferis added.

The team started with a qubit, the simplest unit of quantum information, in one area of their device. They prepared other qubits in a fixed entangled state in another part of the computer, for a total of nine qubits. The two sets of qubits were then mixed using the gate operations of the quantum computer.

Next, they applied operations that let the combined system evolve according to certain dynamics. The final step was examining the qubit once it reached the other side of the computer. “We asked if it was the same as the one we sent in or if it looked different,” said Jafferis. “It was the simplest possible quantum circuit we could create to see if we could simulate wormhole dynamics.”
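
To make “sending quantum information using shared entanglement” concrete, here is a minimal sketch of the textbook teleportation protocol, simulated directly on a three-qubit state vector with NumPy. It is an illustration only, not the nine-qubit circuit the team ran on quantum hardware; the gate choices and qubit labels below are generic assumptions.

```python
import numpy as np

# Illustrative sketch: textbook quantum teleportation on a 3-qubit state vector.
# Qubit 0 holds the "message" state, qubits 1 and 2 are prepared as a shared
# Bell pair; two measurements plus a correction move the message to qubit 2.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                     # arbitrary message state on qubit 0

zero = np.array([1, 0], dtype=complex)
state = np.kron(psi, np.kron(zero, zero))      # |psi> (x) |0> (x) |0>

state = kron(I2, H, I2) @ state                # make a Bell pair on qubits 1 and 2
state = kron(I2, CNOT) @ state
state = kron(CNOT, I2) @ state                 # entangle the message with the pair
state = kron(H, I2, I2) @ state

probs = (np.abs(state.reshape(2, 2, 2)) ** 2).sum(axis=2).ravel()
m = rng.choice(4, p=probs)                     # measure qubits 0 and 1
m0, m1 = divmod(m, 2)
proj = kron(np.outer(I2[m0], I2[m0]), np.outer(I2[m1], I2[m1]), I2)
state = proj @ state
state /= np.linalg.norm(state)

fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
state = kron(I2, I2, fix) @ state              # receiver's correction on qubit 2

out = state.reshape(2, 2, 2)[m0, m1, :]        # the state now sitting on qubit 2
print("fidelity with the original message:", abs(np.vdot(psi, out)) ** 2)  # ~1.0
```

No physical signal carries the message qubit from qubit 0 to qubit 2; the information arrives only through the shared entanglement plus the two classical measurement results, which is the “more subtle way” Jafferis describes.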

The team’s ultimate goal is to learn all the details about the gravitational description of quantum systems. “We know how that works through theoretical mathematics in limited cases, but we don’t know all the answers,” said Jafferis. “Using this very small quantum system, we see it as a first step toward making bigger ones where we can discover more.”

This research was funded by the U.S. Department of Energy Office of Science.



Five teams of Harvard researchers will work to de-risk promising ideas, with the aim of eventually launching startups, thanks to funding announced today from the University’s Grid Accelerator.

Projects emerging from this year’s competitive selection process represent a cross-section of interdisciplinary science and engineering innovation. They include technologies and approaches with the potential to yield transformative improvements in health and medicine, climate, and manufacturing.

The projects receiving support include proof-of-concept work for:

• An AI-enabled design and fabrication platform;
• An instrument for the detection of drug resistance of pathogenic bacteria;
• A greenhouse gas emission-free solid refrigerant;
• Remote rehabilitation therapies for individuals recovering from stroke;
• Novel enzymes for bioremediation, biomaterial synthesis, and other products.

In addition to funding, the researchers will have access to physical space, educational programming, and connections to alumni, investors, and the regional startup ecosystem to drive projects towards startup formation.

Harvard launched The Grid in September 2022 with the goal of smoothing the path of innovations from University labs into commercially viable products and services that address global challenges. The Grid’s programs include training, outreach, and resources to enable entrepreneurially minded Harvard researchers and students to translate their research into startups. The Grid is a collaboration between the Harvard John A. Paulson School of Engineering and Applied Sciences and the Harvard Office of Technology Development.

The funding from the Grid Accelerator builds on a track record of success. Since 2013, projects in the physical sciences and engineering advanced by Office of Technology Development accelerator support have culminated in 15 new startups that have collectively raised $172 million, as well as technology licenses to established companies and sponsored-research agreements. Harvard did not disclose the dollar value of the Grid Accelerator awards.



AI-assisted design

Jennifer Lewis

A project led by materials scientist Jennifer Lewis, Hansjörg Wyss Professor of Biologically Inspired Engineering, envisions a radically new form of Computer-Aided Design. The platform would combine machine learning, physical simulation, robotics, computer graphics, and digital fabrication methods: users would define designs via high-level specifications, and designs would be verified to be fabricable and functional in the real world at the design stage. Such a platform has the potential to revolutionize the way in which people approach design and engineering, allowing them to offload low-level design to the computer and focus more of their attention instead on the high-level design decisions for their problem.


Fast drug-resistance test

Joost Vlassak

A team led by Joost Vlassak, Abbott and James Lawrence Professor of Materials Engineering, intends to address the growing problem of antibiotic resistance by building a novel system for fast Antimicrobial Susceptibility Testing, a technique used to confirm susceptibility of pathogenic bacteria to drugs, detect drug resistance, and guide the selection of patient therapy for difficult-to-treat infections. The Harvard team’s patented approach produces test results in minutes, compared to hours or days with traditional clinical methods, and, unlike current tests, can perform dozens of measurements simultaneously. In collaboration with colleagues at Harvard’s teaching hospitals, the team will use Grid support to demonstrate the method on a broad range of real-world bacteria samples and anti-bacterial drugs.


Emission-free solid refrigerant

Jarad Mason

Virtually every refrigerator and air conditioner in use today relies on volatile fluorocarbon refrigerants, extremely potent greenhouse gases. Their release into the atmosphere is responsible for 3 percent of all global warming, a number that is rapidly increasing. A project led by Jarad Mason, Assistant Professor of Chemistry and Chemical Biology, promises a greener solution by replacing conventional volatile refrigerants with nonvolatile solid refrigerants which have zero direct emissions. The chemistry his team has developed for solid refrigerants would have significant advantages in system efficiency, cost, and safety. With funding from the Harvard Grid Accelerator, the researchers will validate the feasibility, efficiency, and robustness of the solid refrigerants in a prototype device.


Wearable to aid stroke rehab

Conor Walsh

Roboticist Conor Walsh, Paul A. Maeder Professor of Engineering and Applied Sciences, and colleagues from the Harvard Move Lab will use Grid support to develop a lightweight neuroprosthesis to help individuals recovering from stroke learn to walk again. The project employs electrical stimulation of propulsion-generating calf muscle groups with an adaptive controller for individualizing timing and magnitude of assistance. The soft wearable includes sensors that enable monitoring of walking biomechanics, an integrated electrode array to enable optimization of stimulation, and cloud-connected capabilities for remote monitoring.


Gene discovery startup

Peter Girguis

A team advised by Peter Girguis, Professor of Organismic and Evolutionary Biology, has already demonstrated the ability of large language models (LLMs) to decode the language of genomes. With Grid Accelerator support, they will scale up the training of LLMs to include the full spectrum of genomic sequences from the human gut, sewage treatment plants, and other systems. They anticipate the launch of a gene discovery startup that will produce engineerable gene modules and commercially relevant products such as novel enzymes for bioremediation and biomaterial synthesis.

 



Steven Pinker thinks ChatGPT is truly impressive — and will be even more so once it “stops making stuff up” and becomes less error-prone. Higher education, indeed, much of the world, was set abuzz in November when OpenAI unveiled its ChatGPT chatbot capable of instantly answering questions (in fact, composing writing in various genres) across a range of fields in a conversational and ostensibly authoritative fashion. Utilizing a type of AI called a large language model (LLM), ChatGPT is able to continuously learn and improve its responses. But just how good can it get? Pinker, the Johnstone Family Professor of Psychology, has investigated, among other things, links between the mind, language, and thought in books like the award-winning bestseller “The Language Instinct” and has a few thoughts of his own on whether we should be concerned about ChatGPT’s potential to displace humans as writers and thinkers. This interview has been edited for clarity and length.

Q&A

Steven Pinker

GAZETTE: ChatGPT has gotten a great deal of attention, and a lot of it has been negative. What do you think are the important questions that it brings up?

PINKER: It certainly shows how our intuitions fail when we try to imagine what statistical patterns lurk in half a trillion words of text and can be captured in 100 billion parameters. Like most people, I would not have guessed that a system that did that would be capable of, say, writing the Gettysburg Address in the style of Donald Trump. There are patterns of patterns of patterns of patterns in the data that we humans can’t fathom. It’s impressive how ChatGPT can generate plausible prose, relevant and well-structured, without any understanding of the world — without overt goals, explicitly represented facts, or the other things we might have thought were necessary to generate intelligent-sounding prose.

And this appearance of competence makes its blunders all the more striking. It utters confident confabulations, such as that the U.S. has had four female presidents, including Luci Baines Johnson, 1973-77. And it makes elementary errors of common sense. For 25 years I’ve begun my introductory psychology course by showing how our best artificial intelligence still can’t duplicate ordinary common sense. This year I was terrified that that part of the lecture would be obsolete because the examples I gave would be aced by GPT. But I needn’t have worried. When I asked ChatGPT, “If Mabel was alive at 9 a.m. and 5 p.m., was she alive at noon?” it responded, “It was not specified whether Mabel was alive at noon. She’s known to be alive at 9 and 5, but there’s no information provided about her being alive at noon.” So, it doesn’t grasp basic facts of the world — like people live for continuous stretches of time and once you’re dead you stay dead — because it has never come across a stretch of text that made that explicit. (To its credit, it did know that goldfish don’t wear underpants.)

We’re dealing with an alien intelligence that’s capable of astonishing feats, but not in the manner of the human mind. We don’t need to be exposed to half a trillion words of text (which, at three words a second, eight hours a day, would take 15,000 years) in order to speak or to solve problems. Nonetheless, it is impressive what you can get out of very, very, very high-order statistical patterns in mammoth data sets.
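
(A quick arithmetic check of that estimate, sketched in Python using the figures Pinker cites: half a trillion words read at three words per second, eight hours a day. The inputs are the interview's assumptions, not independent data.)

```python
# Back-of-the-envelope check of the "15,000 years" figure quoted above.
# The word count and reading rate are the assumptions stated in the interview.
words = 0.5e12                   # half a trillion words of training text
words_per_day = 3 * 8 * 3600     # 3 words/second for 8 hours a day
years = words / words_per_day / 365
print(f"{years:,.0f} years")     # prints about 15,855 years, i.e. on the order of 15,000
```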

GAZETTE: Open AI has said its goal is to develop artificial general intelligence. Is this advisable or even possible?

PINKER: I think it’s incoherent, like a “general machine” is incoherent. We can visualize all kinds of superpowers, like Superman’s flying and invulnerability and X-ray vision, but that doesn’t mean they’re physically realizable. Likewise, we can fantasize about a superintelligence that deduces how to make us immortal or bring about world peace or take over the universe. But real intelligence consists of a set of algorithms for solving particular kinds of problems in particular kinds of worlds. What we have now, and probably always will have, are devices that exceed humans in some challenges and not in others.

GAZETTE: Are you concerned about its use in your classroom?

PINKER: No more than about downloading term papers from websites. The College has asked us to remind students that the honor pledge rules out submitting work they didn’t write. I’m not naïve; I know that some Harvard students might be barefaced liars, but I don’t think there are many. Also, at least so far, a lot of ChatGPT output is easy to unmask because it mashes up quotations and references that don’t exist.

GAZETTE: There are a range of things that people are worried about with ChatGPT, including disinformation and jobs being at stake. Is there a particular thing that worries you?

PINKER: Fear of new technologies is always driven by scenarios of the worst that can happen, without anticipating the countermeasures that would arise in the real world. For large language models, this will include the skepticism that people will cultivate for automatically generated content (journalists have already stopped using the gimmick of having GPT write their columns about GPT because readers are onto it), the development of professional and moral guardrails (like the Harvard honor pledge), and possibly technologies that watermark or detect LLM output.


There are other sources of pushback. One is that we all have deep intuitions about causal connections to people. A collector might pay $100,000 for John F. Kennedy’s golf clubs even though they’re indistinguishable from any other golf clubs from that era. The demand for authenticity is even stronger for intellectual products like stories and editorials: The awareness that there’s a real human you can connect it to changes its status and its acceptability.

Another pushback will come from the forehead-slapping blunders, like the claims that crushed glass is gaining popularity as a dietary supplement or that nine women can make a baby in one month. As the systems are improved by human feedback (often from click farms in poor countries), there will be fewer of these clangers, but given the infinite possibilities, they’ll still be there. And, crucially, there won’t be a paper trail that allows us to fact-check an assertion. With an ordinary writer, you could ask the person and track down the references, but in an LLM, a “fact” is smeared across billions of tiny adjustments to quantitative variables, and it’s impossible to trace and verify a source.

Nonetheless, there are doubtless many kinds of boilerplate that could be produced by an LLM as easily as by a human, and that might be a good thing. Perhaps we shouldn’t be paying the billable hours of an expensive lawyer to craft a will or divorce agreement that could be automatically generated.

GAZETTE: We hear a lot about potential downsides. Is there a potential upside?

PINKER: One example would be its use as a semantic search engine, as opposed to our current search engines, which are fed strings of characters. Currently, if you have an idea rather than a string of text, there’s no good way to search for it. Now, a real semantic search engine would, unlike an LLM, have a conceptual model of the world. It would have symbols for people and places and objects and events, and representations of goals and causal relations, something closer to the way the human mind works. But for just a tool, like a search engine, where you just want useful information retrieval, I can see that an LLM could be tremendously useful — as long as it stops making stuff up.

GAZETTE: If we look down the road and these things get better — potentially exponentially better — are there impacts for humans on what it means to be learned, to be knowledgeable, even to be expert?

PINKER: I doubt it will improve exponentially, but it will improve. And, as with the use of computers to supplement human intelligence in the past — all the way back to calculation and record-keeping in the ’60s, search in the ’90s, and every other step — we’ll be augmenting our own limitations. Just as we had to acknowledge our own limited memory and calculation capabilities, we’ll acknowledge that retrieving and digesting large amounts of information is something that we can do well but artificial minds can do better.

Since LLMs operate so differently from us, they might help us understand the nature of human intelligence. They might deepen our appreciation of what human understanding does consist of when we contrast it with systems that superficially seem to duplicate it, exceed it in some ways, and fall short in others.

GAZETTE: So humans won’t be supplanted by artificial general intelligence? We’ll still be on top, essentially? Or is that the wrong framing?

PINKER: It’s the wrong framing. There isn’t a one-dimensional scale of intelligence that embraces all conceivable minds. Sure, we use IQ to measure differences among humans, but that can’t be extrapolated upward to an everything-deducer, if only because its knowledge about empirical reality is limited by what it can observe. There is no omniscient and omnipotent wonder algorithm: There are as many intelligences as there are goals and worlds.



The recipients of the first grants awarded by Harvard’s Salata Institute for Climate and Sustainability will tackle a range of climate change challenges, seeking to reduce future warming and assist those whose lives already have been affected by the crisis.

The grants to five research clusters, announced Monday, will provide more than $8.1 million over three years to projects that bring together 30 faculty members from disciplines spanning Harvard Law School, Harvard Business School, the Harvard T.H. Chan School of Public Health, Harvard Medical School, Harvard Kennedy School, the Graduate School of Design, the John A. Paulson School of Engineering and Applied Sciences, and the Faculty of Arts and Sciences.

The projects will also engage with other institutions and organizations, including frontline groups in West Africa, India, and Bangladesh. Collaborators hail from the University of Lagos, the University of Ghana, South Africa’s MUHOLI Art Institute, BRAC University in Bangladesh, the University of California at Berkeley, and the All India Disaster Mitigation Institute in Gujarat, India.

Two projects focus on reducing warming, one by cutting emissions of the potent greenhouse gas methane and the other by scrutinizing corporate net-zero emissions pledges. The remaining initiatives focus on addressing the consequences of climate change: sea level rise in West Africa; shifting rainfall patterns and drought in India and Bangladesh; and impacts on U.S. communities dependent on coal, oil, and gas extraction.

The Salata Institute launched in June and is supported by a $200 million gift from Melanie and Jean Salata. During the institute’s opening symposium in October, Jean Salata said he’s confident that the world will meet the climate change challenge, though the work will be difficult and require contributions from all aspects of society.

Jim Stock, Harvard’s vice provost for climate and sustainability and the director of the institute, said the five research clusters represent Salata’s mission to tackle climate change head-on and to focus on projects that will have meaningful impact.

“It is really exciting to see these teams come together across Harvard Schools to work on important, applied climate problems,” said Stock, who is also the Harold Hitchings Burbank Professor of Political Economy and a professor of public policy. “Ultimately, the mission of the Salata Institute is to make meaningful progress on urgent climate challenges — reducing tons of emissions and saving lives, if you will. Nearly every big climate problem spans School boundaries and this program provides Harvard scholars a chance to cross those boundaries as they work to have a major practical impact.”

Ensuring “net zero” is more than words

In recent years, global commitments to “net-zero” emissions have proliferated, with more than 8,300 companies making pledges. Still, gaps in the data and insufficient research make it hard to verify whether the pledges lead to emissions reductions. The Net Zero Climate Research Cluster aims to improve our understanding of these plans and suggest ways in which they can be made more effective and transparent.

Headed by Jody Freeman, the Archibald Cox Professor of Law and founding director of the Harvard Environmental and Energy Law Program, the cluster brings together faculty from the Law School, the Kennedy School, and the Business School. Though corporate pledges have generated headlines, researchers still do not have enough information to predict whether they will produce meaningful reductions and help spur energy system change, Freeman said.

“One of our questions is: What’s driving these commitments?” she said. “Is the answer reputational risk? Is it pressure from the investment community? Or is there a real desire among these companies to make this shift? What kinds of companies are making them and which are not? And, once they commit to net-zero targets, do companies change their behavior and, if so, how? Do these pledges change how they produce goods and services? Alter internal decision-making?”

Freeman said it’s important to not just understand the incentives driving the pledges, but also potential obstacles to their implementation. What challenges do corporate leaders face when they make a net-zero pledge? Can the commitments create legal liability if companies fall short? And, importantly, can we shape the incentives to better ensure the integrity, transparency, and effectiveness of net-zero pledges, and encourage more companies to adopt them?

Plugging methane leaks as a bridge to a low-emissions world

Many people focus on carbon dioxide, which stays in the atmosphere for well more than a century, as the largest climate change threat. That is true over the long term, but another greenhouse gas, methane, is much more potent than CO2 in the near term, though it naturally cycles out of the atmosphere within about 20 years. In the past decade, worsening floods, droughts, and heat waves have prompted some to focus on methane as a way to cut near-term warming and give humans and ecosystems time to adapt to longer-term changes.

The cluster on methane emissions, headed by the Kennedy School’s Robert Stavins, is a wide-ranging initiative involving 17 co-investigators and collaborators. The project, which in addition to the Kennedy School taps expertise from the Law School, the Business School, the Chan School, SEAS, and FAS, seeks to use data from an array of increasingly sophisticated sensors to engage with policymakers and other stakeholders to reduce global methane emissions.

“By giving attention to methane, over the short- to medium-term, we can have the effect of essentially buying time in order to develop long-term strategies to address carbon dioxide emissions,” Stavins said. “This is a soup-to-nuts project, because we’re going from scientific detection and estimation of methane emissions all the way to public policy and communicating to the public.”

Adaptation in West Africa

The Gulf of Guinea has seen some of the world’s fastest rates of sea-level rise. The cluster focused on sea-level rise, urban flooding, and coastal erosion in the region will examine implications for six communities, one rural and one urban in each of three Gulf nations: Nigeria, Ghana, and Côte d’Ivoire.

Emmanuel Akyeampong, the project’s principal investigator and Oppenheimer Faculty Director of Harvard’s Center for African Studies, said the project, which has co-investigators and collaborators at Gulf of Guinea universities, will explore climate change’s impact in a place where people already watch land wash away, deal with flooding and infrastructure damage, and taste saltwater’s intrusion into freshwater supplies.

The project will first analyze past sea-level rise and build projections of anticipated changes. Researchers will construct a coastal vulnerability index, identify communities at risk, and gauge anticipated disruption to farming, fishing, and other livelihood-centered activities. They’ll develop adaptation strategies with three goals for host communities: hardening the coastline to protect high-value buildings; protecting livelihoods and easing climate-related disruptions; and voluntary resettlement.

“These communities have been dealing with this for several decades,” said Akyeampong, the Ellen Gurney Professor of History and of African and African American Studies. “How do we help them respond better? We’re hoping — through robust climate science in partnership with local scholars — to give communities options that say, ‘If you want to see the next 20 to 30 years, this is what it looks like: You can make some adaptations and you’ll be good.’ Or, maybe, ‘Ten to 12 years from now, it might be good to start thinking about relocating.’”

Preparing for climate migration in South Asia

The changing climate in South Asia will result in more extreme weather events in the near future and in drought and sea level rise on longer time horizons. These phenomena will threaten agriculture and habitat and affect the food security and livelihoods of hundreds of millions in the region.

Working with partners in India and Bangladesh, including the world’s largest non-governmental organization, Bangladesh-based BRAC, and the world’s largest union of informal sector workers, the India-based Self-Employed Women’s Association, as well as government agencies, policy research institutes, and social entrepreneurs, the cluster on adaptation and climate-driven migration in South Asia will identify, test, and deploy adaptation strategies using financial, policy, educational, and technological interventions.

Led by Caroline Buckee, a professor of epidemiology at the Harvard Chan School, the cluster will take advantage of expertise from the Business School, FAS, the Kennedy School, SEAS, and the Medical School, as well as UC Berkeley, the James P. Grant School of Public Health at BRAC University, and the All India Disaster Mitigation Institute. Satchit Balsari, a Harvard assistant professor of emergency medicine and of global health and population, is a co-investigator.

“What struck me is how much the impact of these erratic weather events is not necessarily foremost in either public consciousness or the consciousness of policymakers,” Balsari said. “And as you think about all that needs to be done, you begin to recognize the vastness of the scope of policy changes that one will need to influence in order to mitigate the impact of these extreme weather events on vulnerable populations. You have to first generate the evidence to show that these interventions work, and then figure out what the theory of change is to empower the communities to advocate for policy change to institutionalize these changes.”

During a recent visit to India, Balsari witnessed both severe heat waves and seasonal rains that damaged growing crops. He was struck by how such events might not register in the official consciousness but can be devastating to individual growers. And current strategies like crop insurance can be inadequate, as policies tend to cover drought but not excessive or out-of-season rain, which could become more frequent as rainfall patterns shift. Listening will be an important part of the project, Buckee said, as farmers and others on the ground may have solutions that could be models elsewhere.

“In Bangladesh, where there have been cyclones and floods for decades, simple advances in storm shelters and evacuation patterns have significantly decreased mortality,” she said. “We should be leveraging these indigenous innovations across the global South.”

Extreme weather “will likely increase in frequency and severity for the foreseeable future,” she added. “We are interested in the Global South and in low-income settings because those are going to be the most vulnerable to the impacts of extreme weather events.”

Navigating shifts in the national energy system

To meet U.S. climate goals, fossil fuel emissions must fall dramatically in coming decades. That’s likely to mean a tough transition for communities and families economically tied to the fossil fuel industry. It will also alter city and town tax bases, which pay for services to support struggling families.

The Climate Research Cluster on Strengthening Communities for Changing Energy Systems will draw on expertise from the Law School, FAS, and the Graduate School of Design to probe the costs and benefits of energy transitions and propose solutions on which local businesses and governments can act.

Led by Stephen Ansolabehere, the Frank G. Thompson Professor of Government, the effort will develop a portfolio of projects in the coal regions of Appalachia, Wyoming, and Montana, and on the oil- and gas-rich Gulf Coast. The project will research culture, economy, society, and energy infrastructure and include external advisory groups of prominent stakeholders and leaders. The work will be summarized and communicated in a series of academic publications and white papers. Findings will be shared with local economic planning agencies and regulators and will be used to frame conversations at a series of convenings, where model regulations, laws, and policies for managing the transition will be drafted.

“The project is focused on place-based challenges and the goal is to bring people together, work through issues — what makes job training work and what makes training not work — and find those that can be a model,” Ansolabehere said. “People want to be listened to.”



Intercepting a fraction of the sun’s energy before it reaches Earth may be enough to reverse the planet’s rising temperatures, say scientists who are looking at dust as a potential shield.

For decades, scientists have considered using screens or other objects to block just enough of the sun’s radiation, between 1 and 2 percent, to mitigate the effects of global warming. In this new study, led by the Center for Astrophysics | Harvard & Smithsonian and the University of Utah, scientists are exploring the potential use of dust to shade the Earth.

The paper, published Wednesday in the journal PLOS Climate, notes that launching dust from Earth to a way station at the “Lagrange Point” between Earth and the sun would be the most effective approach, but it would also be cost-prohibitive and labor-intensive. As an alternative, the team proposes using moondust, arguing that lunar dust launched from the moon could be a low-cost and effective shield.

“It is amazing to contemplate how moondust — which took over 4 billion years to generate — might help slow the rise in the Earth’s temperature, a problem that took us less than 300 years to produce,” says study co-author Scott Kenyon of the Center for Astrophysics.

The team of astronomers applied a technique used to study planet formation around distant stars — their usual research focus — to the lunar dust concept. Planet formation is a messy process that kicks up astronomical dust, which forms rings around host stars. These rings intercept light from the central star and re-radiate it in a way that can be detected.

“That was the seed of the idea; if we took a small amount of material and put it on a special orbit between the Earth and the sun and broke it up, we could block out a lot of sunlight with a little amount of mass,” says Ben Bromley, professor of physics and astronomy at the University of Utah and lead author for the study.

Simulated stream of dust launched between Earth and the sun, shown as it crosses the disk of the sun as viewed from Earth. Credit: Ben Bromley/University of Utah

Casting a shadow

According to the team, a sunshield’s overall effectiveness would depend on its ability to sustain an orbit that casts a shadow on Earth. Sameer Khan, Utah undergraduate student and study co-author, led the initial exploration into which orbits could hold dust in position long enough to provide adequate shading.

“Because we know the positions and masses of the major celestial bodies in our solar system, we can simply use the laws of gravity to track the position of a simulated sunshield over time for several different orbits,” says Khan.

Two scenarios were promising. In the first scenario, the authors positioned a space station platform at the L1 Lagrange Point, the point between Earth and the sun where the two bodies’ gravitational pulls balance. Objects at Lagrange Points tend to stay along a path between the two celestial bodies.

In computer simulations that included the positions of Earth, the sun, the moon, and the other solar system planets, the researchers shot particles from the platform into orbit around L1 and tracked where the particles scattered. The authors found that when launched precisely, the dust would follow a path between Earth and the sun, effectively creating shade, at least for a while. But the dust was easily blown off course by the solar wind, radiation, and gravity within the solar system. The team concludes that any L1 space station platform would need to create an endless supply of new dust batches to blast into orbit every few days as each initial spray dissipates.

“It was rather difficult to get the shield to stay at L1 long enough to cast a meaningful shadow. This shouldn’t come as a surprise, though, since L1 is an unstable equilibrium point,” Khan says. “Even the slightest deviation in the sunshield’s orbit can cause it to rapidly drift out of place, so our simulations had to be extremely precise.”
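
The instability Khan describes is easy to reproduce in a toy calculation. The sketch below is not the team’s simulation (which included the moon and the planets); it is a minimal planar Sun-Earth restricted three-body model in Python, with an assumed mass ratio, showing how a parcel of dust parked slightly off L1 drifts away within months.

```python
import numpy as np

# Toy model: planar circular restricted three-body problem (Sun + Earth + dust
# grain) in the rotating frame.  Units: Sun-Earth distance = 1, orbital period
# = 2*pi.  This illustrates L1's instability; it is not the study's setup.
mu = 3.0e-6                                   # assumed Earth/(Sun+Earth) mass ratio

def ax_on_axis(x):
    # x-acceleration of a particle at rest on the Sun-Earth line (y = 0)
    return x - (1 - mu) / (x + mu) ** 2 + mu / (1 - mu - x) ** 2

lo, hi = 0.98, 0.999                          # bracket and bisect for the L1 point
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if ax_on_axis(mid) > 0:
        hi = mid
    else:
        lo = mid
x_l1 = 0.5 * (lo + hi)

def accel(x, y, vx, vy):
    r1 = np.hypot(x + mu, y)                  # distance to the Sun
    r2 = np.hypot(x - 1 + mu, y)              # distance to Earth
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    return ax, ay

# Start about 1,500 km (1e-5 of the Earth-sun distance) sunward of L1, at rest
# in the rotating frame, and integrate until the parcel has drifted noticeably.
x, y, vx, vy = x_l1 - 1e-5, 0.0, 0.0, 0.0
dt, t = 1e-4, 0.0
while t < 12.0 and abs(x - x_l1) < 0.01:
    ax, ay = accel(x, y, vx, vy)
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    t += dt

print(f"drifted {abs(x - x_l1):.3f} AU from L1 after {t / (2 * np.pi):.2f} years")
```

Because small displacements grow roughly exponentially at L1, the initial kilometer-scale offset becomes a drift of more than a million kilometers within a fraction of a year, which is why the shield would need constant replenishment.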

In the second scenario, the authors shot lunar dust from a platform on the surface of the moon toward the sun. They found that the inherent properties of lunar dust were just right to work effectively as a sunshield. The simulations tracked how lunar dust scattered along various courses until the team found trajectories aimed toward L1 that served as an effective sunshield.

The results were welcome news, the team says, because much less energy is needed to launch dust from the moon than Earth. This is important because the amount of dust required for a solar shield is large, comparable to the output of a big mining operation here on Earth.

Kenyon says, “It is astounding that the sun, Earth, and moon are in just the right configuration to enable this kind of climate mitigation strategy.”

Just a moonshot?

The authors stress that their new study only explores the potential impact of this strategy rather than evaluating whether these scenarios are logistically feasible.

“We aren’t experts in climate change, or the rocket science needed to move mass from one place to the other. We’re just exploring different kinds of dust on a variety of orbits to see how effective this approach might be. We do not want to miss a game changer for such a critical problem,” says Bromley.

One of the biggest logistical challenges, replenishing dust streams every few days, also has an advantage. The sun’s radiation naturally disperses the dust particles throughout the solar system, meaning the sunshield is temporary and the particles do not fall onto Earth. The authors say their approach would not create a permanently cold, uninhabitable planet, as in the science fiction story “Snowpiercer.”



Most parents have played the “Which hand has the treat?” game with their preschoolers. Kids get that the choice is binary — it’s either in one hand or the other. But what if instead of two options there were three? And suppose there were two treats, with two options grouped together and the third spaced a short distance away? How would the child decide which to pick?

It turns out preschoolers lack the ability to weigh multiple possibilities, and now Harvard psychologists say they have data to support the claim.

A paper recently published in the Proceedings of the National Academy of Sciences finds 3-year-olds struggle to keep track of competing options. “Instead what they do is find one state that is merely possible and treat it as fact,” explained lead author Brian Leahy.

Leahy, a Graduate School of Arts and Sciences student in the Department of Psychology, predicted this behavior three years ago in a separate paper authored with his adviser, psychology Professor Susan E. Carey. The researchers saw “an apparent conflict” in developmental psychology regarding how children take possibilities into account. Infants have been shown in some studies to run mental simulations and even generate hypotheses; other research finds preschoolers struggling when possibility becomes plural.

The issue is resolved, Leahy and Carey wrote in 2020, by distinguishing between “modal” and “minimal representations of possibility.” That is, they hypothesize that preschoolers don’t deploy what philosophers call “modal logic,” the ability to distinguish certainty from uncertainty, the necessary from the unnecessary. Instead, kids find one possibility and mentally mark it as truth.

Next, Leahy and Carey set out to test the idea, with their hypothesis predicting how three sets of 3-year-olds would perform in a series of experiments. Joining the project at this stage were Michael Huemer, a postdoctoral researcher in Carey’s lab, as well as research assistant Matt Steele and Stephanie Alderete, then a student at the College and now a graduate student at U.C. Berkeley.

Researchers Michael Huemer (from left), Brian Leahy, and Susan E. Carey conducted a series of experiments that asked preschoolers to deduce which of three cups contained hidden prizes. Stephanie Mitchell/Harvard Staff Photographer

In the first part of their study, researchers replicated existing data on a “Pick 1 of 3” exercise. Imagine three cups on a tiny stage: one off on its own at stage left, two grouped together at stage right. Suddenly, the curtain lowers over the single cup, and a prize is hidden there. Next, the curtain lowers over the pair, and another prize goes there. Finally, when all three cups are visible again, two dozen 3-year-olds are told to pick one cup to get a prize.

“If you really want a prize, the smart thing to do is to choose the singleton,” said Leahy, who also holds a doctorate in philosophy. But in order to understand that wisdom, children need to understand that the solo cup represents one possibility, while the pair represents two. According to the “minimal representations of possibility” hypothesis, kids instead form a belief about which member of the pair holds a prize. So they see two equally good options: the singleton, and their chosen member of the pair. They will choose randomly between these two options.

(Spoiler: The 3-year-olds ended up going for the single cup about half of the time.)

According to Leahy, the “Pick 1 of 3” exercise points to two hypotheses: Either the children rely on the “minimal” strategy or else they aren’t thinking about possibilities at all: They randomly choose one of the prizes to search for, or they randomly choose one set of cups to explore.

A second study ruled out the second hypothesis. It used the same setup, with one prize hidden in the single cup and another in the pair. This time two dozen 3-year-olds were asked to throw away one cup and keep the contents of the remaining two. The researchers predicted more success in the “Throw Away” task. After all, their hypothesis holds that 3-year-olds formed beliefs about which cup in the pair contained a prize. That meant a parallel belief about which cup was empty: always a member of the pair.

And indeed 95 percent discarded from the pair.

The final part of the study asked 3-year-olds to throw away a cup before picking one of the remaining two, again with the goal of getting a prize. Even though 3-year-olds almost always threw away from the pair, they were split 50/50 on picking the singleton or a member of the pair.
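
Read as predictions, the hypothesis fixes all three numbers at once. The short Monte Carlo sketch below (our paraphrase of the "commit to one guess" strategy, not the authors' analysis code) reproduces the reported pattern: roughly half of picks go to the singleton, nearly all discards come from the pair, and the post-discard choice splits 50/50.

```python
import random

# Sketch of what the "minimal representations of possibility" account predicts
# for the three cup tasks described above.  The commit-to-one-guess rule is an
# illustrative assumption, not code from the published study.
def simulated_child():
    # The child picks one cup of the pair and treats "that cup has the prize"
    # (and "the other pair cup is empty") as fact; the singleton is known full.
    believed_full = random.choice(["pair_left", "pair_right"])
    believed_empty = "pair_right" if believed_full == "pair_left" else "pair_left"

    pick_1_of_3 = random.choice(["singleton", believed_full])       # Task 1
    throw_away = believed_empty                                      # Task 2
    pick_after_throw = random.choice(["singleton", believed_full])   # Task 3
    return pick_1_of_3, throw_away, pick_after_throw

n = 10_000
runs = [simulated_child() for _ in range(n)]
print("Task 1, chose singleton:    ", sum(r[0] == "singleton" for r in runs) / n)   # ~0.5
print("Task 2, discarded from pair:", sum(r[1].startswith("pair") for r in runs) / n)  # ~1.0
print("Task 3, chose singleton:    ", sum(r[2] == "singleton" for r in runs) / n)   # ~0.5
```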

The results make a strong case for the “minimal representations of possibility” hypothesis, leaving Leahy eager to dig further into how the ability to think about what might and might not happen even develops. “This was a pretty complex pattern of data,” he noted. “And we predicted it exactly!”



Neurologist Andrew Budson and neuroscientist Elizabeth Kensinger not only explain how memory works, but also share science-based tips on how to keep it sharp as we age in their new book, “Why We Forget and How to Remember Better: The Science Behind Memory.” The book came out Wednesday. The Gazette interviewed Budson, M.D. ’93, and Kensinger ’98 about the neuroscience of memory and tips for improving our recall. This interview has been condensed and edited for length and clarity.

Q&A

Andrew Budson and Elizabeth Kensinger

GAZETTE: What are the most common misconceptions about memory?

KENSINGER: One of the most common errors is in the metaphors that we use to talk about memory that imply that there is a memory that sits somewhere in the brain, like a file that we can retrieve without effort. Memory is an active and effortful process. Every time that we’re bringing a past event to mind, we have to use effort to rebuild that memory. A second, related misconception is that there is such a thing as photographic memory, which is this ability to effortlessly remember everything that you just saw. It might be that it feels to us that we remember random things that we weren’t trying to remember, but there are reasons why we remember them; we were enjoying a song that we were listening to, or we were thinking about how bizarre something was, and those feelings or thoughts allowed that content to get into memory.

The third thing is that many people think that forgetting is bad and that an optimal memory system is one where forgetting doesn’t occur. Forgetting is important because if every time that we were trying to make a prediction about the future or understand what is going on right now, we had to sift through everything that’s ever happened to us, it would be inefficient. There’s tremendous utility in pruning because it allows us to use the pieces of our past that are most likely to be relevant for understanding what’s going on right now or what might happen tomorrow or next year.

GAZETTE: Why do we forget things?

KENSINGER: At the most basic level, we want to think about memory as having three different phases that must happen for us to have access to past content. The first is to get the information into memory, a process that is referred to as encoding. Then, you must keep that information around, and this is called storage or consolidation. It’s akin to pressing the save button on the document that you’ve just created on your computer, but unlike that analogy with a computer, you must continually re-store that content in the brain. And then finally, you must be able to bring that information to mind in the moment that you need it. Memory failures can reflect errors at any of those different stages. One of the most common times when errors arise is in that initial encoding phase, where often what happens is that we’re just not devoting enough effort or paying enough attention.

GAZETTE: How can we make sure we remember things we need to remember?

KENSINGER: Throughout the book we use the mnemonic device of FOUR, which stands for four critical things that we must do to get information encoded into memory. First, you must Focus attention; second, you must Organize the information, then you must Understand the information, and lastly, you need to Relate it to something else that your brain already knows. It’s much easier said than done. Often, when someone says, “I went to a party, and I met all these people, and I don’t remember any of their names,” the breakdown was at that first stage, not paying enough attention. At the moment of retrieval, we can also have failures. Any student has experienced this, where they know the content, but during an exam, they’re not able to come up with it. Or you’re looking at someone’s face, you know this person’s name, but right there in that moment, you’re not able to retrieve it. In those moments, you want to avoid the urge to generate possible answers and instead use general retrieval cues such as thinking about the last time you saw that person, the context, and the possible connections.

GAZETTE: How can sleep, or lack thereof, affect our memory?

KENSINGER: When we’re talking about storing the information so that we have longer-term access to it, getting enough sleep is one of the most important things that we can do. Sleep helps information to move from being briefly accessible to being stored in long-term ways, and it guides the transition from something we remember from a specific event, like remembering when the teacher said the boiling point of water was 212 degrees Fahrenheit, to a fact we just know.

BUDSON: Sleep is important to consolidate memories so that they can be retrieved later, but sleep is also at least theorized to help us flush away the amyloid beta protein at night. This protein is thought to trigger Alzheimer’s disease dementia. It is still an active area of research, but there is some good evidence that when we sleep, our brain cells and the synapses shrink a little bit, and it allows us to flush away this protein that accumulates during the day. No one knows exactly what the normal function of the amyloid protein is, but some people believe, and I’m one of them, that it is an important part of the brain’s immune system that defends it against foreign invaders like bacteria, viruses, parasites, and fungi. Now, in addition to sleep, we know that to keep our brains healthy — and for good health in general — we need to eat right, engage in regular aerobic exercise, keep a healthy body weight, and be socially active.

GAZETTE: Sudoku or crossword puzzles? Which one helps keep our brain healthy and memory strong?

BUDSON: The short answer to that is that when you do computerized brain games or Sudoku, you get better at brain games and Sudoku, but that doesn’t generally translate into overall brain function. Having said that, we know that benefits have been shown when one engages in novel, cognitively stimulating activities. One study that came out recently compared crossword puzzles to computerized brain-training games and found that the people who did crossword puzzles did better. Crosswords may be something that’s beneficial because they are always a bit different, and they require you to think about words and your knowledge in different and novel ways. But having healthy social interactions has been shown to be important. Our brains did not evolve to do crossword puzzles or computer games; they evolved in large part for social interactions. This is one of the reasons why staying socially active is so important. In summary, you must eat right, exercise, keep yourself cognitively stimulated, stay socially active, and sleep.

GAZETTE: How does memory change with aging?

KENSINGER: With aging that is not Alzheimer’s disease or other pathological aging, memory annoyances are very common. We forget things like proper nouns; we can’t think of someone’s name; we can’t think of the title of the book that we read last week. We also are more prone to forgetting some of the specifics, because with aging, there’s a transition toward the brain prioritizing the gist of what happened. The brain embraces the similarities across events rather than trying to hold on to each individualized event. That can lead to a lot of memory frustrations, and it also can make us prone to some types of memory distortions or false memories where we think something happened, but it was something slightly different. It’s also important to point out that there are some upsides to this transition. It is thought that this tendency for the brain to recognize the similarities across events may be one of the contributors to what we think of as wisdom that comes with old age.

GAZETTE: Of the many tips to improve memory you share in your book, which has been the most helpful to you?

BUDSON: The first thing I would say is that there’s nothing wrong with outsourcing your memory or using memory aids. Anybody who wants to remember a shopping list or an appointment that’s coming up, write it down, put it in your phone or a planner; use reminders and calendars.

I offload my memory as much as possible. I have all my passwords written down in a secure digital place. I use calendars, planners, and lists. In terms of trying to remember things better, day to day, I work at trying to be present and pay attention to what I’m doing and trying to multitask less. When I park my car in a parking garage, I’m trying to be conscious of exactly what I’m doing. If I’m going for a run in an unfamiliar location, I’m really going to pay attention. I also work very hard to turn things into habits and routines as much as possible so that it’ll just become automatic, a habit.

KENSINGER: I really like the FOUR mnemonic we came up with. That has helped me to think about all those different steps I need to take each time something is important for me to remember. I think the small mnemonics that we create in the moment are also helpful. For example, for passwords, I create a sentence so an alphanumeric code makes sense to me, and I can remember it over long periods of time. Those sorts of mnemonics that we’re generating ourselves are powerful because they demand that you do those four things, and especially that you invest effort in creating the memory. We’ve all had that frustrating moment where we don’t know where we left our phones. For me, that is often an encoding failure because I set my phone down somewhere when I wasn’t paying any attention. Now when I’m putting my phone down, I say it out loud: I’m putting my phone on the counter. It’s a very simple strategy, but because it’s simple, I remember to do it. You must focus your attention on those early actions to save yourself from those annoyances of forgetting later.

And lastly, one of the most important pieces of advice for students who are studying for a test is this: Do not cram. When you’re staying up to study before the exam, you don’t sleep. As we already talked about, sleep is important for consolidating memories. And, if you realize you don’t understand something as you’re pulling an all-nighter, there’s often not enough time to build that understanding. Also, you want to be studying information in lots of different ways and in lots of different contexts. You don’t just want to be studying in your dorm room or always at the same seat at the library; you want to study in lots of different places at lots of different times of day because all of that is going to help you remember the information. It is because of that variability, that need for sleep, and the time that it can take to reach understanding that it is important that students start their preparation early and keep it going ideally throughout the semester rather than cramming right before a big test.



Sean Carroll bought a green screen and a camera during the first pandemic lockdowns of early 2020. Each week, the theoretical physicist and acclaimed science writer shot two hour-long videos explaining fundamental ideas in modern physics, including conservation, quantum mechanics, gravity, and black holes.

Carroll is turning that series, “The Biggest Ideas in the Universe,” into a namesake book trilogy. This Friday he will give an in-person talk at the Harvard Book Store about volume one, “The Biggest Ideas in the Universe: Space, Time, and Motion.” Both the video and book series are conceived as mash-ups of popular-science romp and college-level textbook, aimed at an oft-neglected reader: one not interested enough to pursue a degree, but interested enough to crave a detailed grasp of how it all really works.

“Secretly, the real audience for this book is my 16-year-old self. I would have loved to have this,” said Carroll, who received his Ph.D. in astronomy from Harvard in 1993. Today, he is the Homewood Professor of Natural Philosophy at Johns Hopkins University. The Gazette spoke with Carroll about his project and why he dreams of a world where people can debate their favorite dark-matter candidates — and other physics mysteries — at a local pub.

Q&A

Sean Carroll

GAZETTE: You said you started your video series to help keep people connected during the surreal early days of the pandemic. Did it accomplish what you hoped?

CARROLL: Did they keep people connected and make the pandemic a pleasure? No, I was definitely a little bit less ambitious than that. Maybe it did more good for me than for the rest of the world. But the reception was pretty good. The most popular video has something like 800,000 views. And it’s an hour and a half of me writing down tensor equations for general relativity. It’s not the usual viral hit on YouTube.

GAZETTE: So then, why turn the videos into a book trilogy? Why aren’t the videos enough?

CARROLL: Like I said, 800,000 people watched the most popular video. Let me tell you, it’s not 800,000 people who are buying the book. But that doesn’t mean that the people who do buy the book, even though it’s a smaller number, have watched the videos. Some people just like reading books. And honestly, you can learn more from the books because it’s a slower process.

GAZETTE: It also seems like you’ve invented a whole new genre, something between a popular science book and a textbook. Was that intentional?

CARROLL: That was absolutely intentional. I’ve written a few popular books. But for the most part, my goal is not just to be pedagogical. My books make an argument that you can disagree with, whether it’s about the nature of reality or the interpretation of quantum mechanics. Whereas these books try very hard to stick to things that everyone agrees with. There’s a vast audience in between the ones who read popular books and the ones who read textbooks. That’s a gap I’m happy to try to fill.

GAZETTE: Throughout the book, you present a lot of abstract concepts, like infinity, that are challenging for most people to conceptualize, even if they’re important to questions about our natural world. Why are these concepts still valuable to explore?

CARROLL: I’m not just trying to explain the bare bones of the physics. I brought in little historical stories, philosophical background, mathematical niceties, and so forth. In the back of my mind, I’m always thinking like, “What is it that got me really interested?” With infinity, we use it in the continuum and our best descriptions of the natural world without really being sure whether it’s necessary. But I honestly don’t know whether we’re on the right track. So, even though I tried to stick with things that everyone agrees on, I’m happy to point out places where we don’t agree.

GAZETTE: Are there any physics concepts that are commonly misunderstood?

CARROLL: Oh, my goodness, there are so many. Within the scope of this book, I talked about space, time, and spacetime, and about classical mechanics up through general relativity. There are obviously a lot of misimpressions about general relativity, curved spacetime, the expanding universe, and things like that. But I don’t dwell on that. The tiny bit I’ve learned from academic research into misinformation and learning is that if you try to teach something by saying, “Here’s the wrong thing that you might think is true. And now I’m going to fix it,” people end up thinking the wrong thing because you put the idea in their heads. I try to spend most of my time thinking about the correct conceptions.

GAZETTE: For this first book, you travel all the way from Aristotle and Newton to modern research on black holes. What’s next?

CARROLL: I’m working on book two as we speak, and that’s all about quantum mechanics and quantum field theory. The third book will be complexity and emergence. Some of that is just a rubric that covers thermodynamics and cosmology, but the ideas of complex systems, criticality, and networks are hugely important to identify the ways in which the microscopic rules of the universe build up together to make this crazy complex macroscopic world we live in.

GAZETTE: You’ve said you’d love to hear people discussing physics at the local pub or in line at the grocery store. Apart from making you and other theoretical physicists happy, what might that accomplish?

CARROLL: I’m not an expert on science education. But from what little I do understand, we focus way too much on a set of facts. Science is not just a set of final answers; it’s a way of thinking about the world and learning new things. So, if people understood how and why we got to specific ideas, they would, I think, have a better idea of how to keep uncertainty in mind, judge new evidence, think about the world in a quantitative way, and be open to changing their minds. All those things are crucially important to science and certainly serve people more broadly.

GAZETTE: Which physics topic would you be most excited to hear people chatting about at the pub?

CARROLL: The cheeky response would be Hamiltonian mechanics. In the book, we learn Newtonian mechanics, then a whole other way of doing it, called Hamiltonian mechanics, and then another way, Lagrangian mechanics. Three different ways to do the same underlying physics. Is one of them more right? We don’t know. But it’s important to conceptualize the same ideas from different angles because one might suggest a different way forward. We’re not done yet. We’re still thinking about what happens next. So, in this book, I’m trying to provide both the ideas that are going to be used for the next 500 years and what to look out for as we move forward.
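For readers curious about what Carroll is gesturing at, here is a minimal editorial sketch (not drawn from the book or the interview) of the same one-dimensional particle in a potential V(x) written in all three formulations:

\[
\text{Newton:}\quad m\ddot{x} = -\frac{dV}{dx}
\]
\[
\text{Lagrange:}\quad \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{x}}\right) - \frac{\partial L}{\partial x} = 0, \qquad L = \tfrac{1}{2}m\dot{x}^2 - V(x)
\]
\[
\text{Hamilton:}\quad \dot{x} = \frac{\partial H}{\partial p}, \quad \dot{p} = -\frac{\partial H}{\partial x}, \qquad H = \frac{p^2}{2m} + V(x)
\]

All three yield the identical trajectory x(t); what differs is the bookkeeping, which is exactly Carroll’s point about viewing the same underlying physics from different angles.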



While each of us gets dressed at least 29,000 times over the course of a lifetime, empirical science has paid little attention to why we select the everyday clothes that help mold our image.

In one of the first studies of its type, researchers at Massachusetts General Hospital and London College of Fashion, University of the Arts, have explored the aesthetics of fashion to uncover preferences like style (shape and cut), color (hue, brightness, and saturation), and personality differences that motivate people to buy and wear clothing for work, leisure, and more formal occasions. The study was published in Empirical Studies of the Arts.

“Preference has been studied for ages across a wide range of art and aesthetic domains, from paintings to music, but almost never in the context of fashion, which is unquestionably a social experience and mode of self-expression,” says co-author Nancy Etcoff, director of the Program in Aesthetics and Well Being at MGH and Harvard Medical School.

“We asked in our research whether fashion aesthetics could be studied empirically, and not only discovered it can, but that it opens the door to better understanding factors that guide people in their everyday clothing preferences. This knowledge is valuable to designers and marketers of clothing as well as to consumers themselves by putting them in better touch with their aesthetic tastes and sensibilities.”

From the research team’s online survey of 307 females and 191 males in the United Kingdom, a novel preference structure emerged across four types of clothing styles: essential, comfortable, feminine, and trendy.

Further analysis found that preferences across each of these styles were associated with the color preferences and the self-reported traits (e.g., personality) of study participants.

More specifically, prominent color preferences for those who liked and owned feminine clothing (e.g., dresses and skirts) consisted of lilac, violet, pink, turquoise, and dark red, and for people who wore essential clothing (e.g., shirts and jackets) dark blue, blue, and brown were the favored colors.

In addition, the study showed that people with a propensity for feminine clothing displayed high levels of fashion leadership, an appreciation for the importance of dressing well and, on the behavioral scale, tended to show more compassion.

On the other hand, individuals who liked and owned essential clothing tended to be sociable, exhibited higher energy, and were emotionally stable.

Wearers of comfortable clothes (e.g., hoodies, sweatpants, tracksuits) were also identified with fashion leadership and interest, while those whose taste ran to trendy articles (e.g., dungarees, polo shirts, boiler suits) tended to be young and had an appreciation for the visual arts.

“The more one knows about and appreciates the aesthetics of fashion, the more attention they will pay to what they’re wearing and what excites them when viewing clothing,” says lead author Young-Jin Hur, with London College of Fashion/UAL.

“Because aesthetic experiences seem linked with well-being, our findings may provide an important commentary on how this could impact the wearer’s self-confidence.”

Adds senior author Emmanuel Sirimal Silva with LCF/UAL, “Our findings underscore that clothing preferences are closely linked to fashion experience and to the individual’s identity.”

Etcoff, who authored the book “Survival of the Prettiest: The Science of Beauty,” believes her team’s latest research opens the door to an exciting new body of discovery around fashion behaviors and preferences, such as clothing textures and patterns as well as the impulses that drive constant change in the multi-billion-dollar clothing field.

“Each of us spends a lot of money on clothing,” she observes, “and what we’re attempting to do through our work is more fully inform the decisions we make to look better and to complement our personalities.”

Data collection for this study was funded by Fashion Business Research at LCF/UAL’s Fashion Business School.


