January 2023

Few would argue right now that physics doesn’t matter, barely a month after scientists at Lawrence Livermore National Laboratory achieved fusion ignition, a breakthrough step toward unlocking a new source of abundant, clean energy. Australian physicist Suzie Sheehy wants to go further, however, making the experimental side of the science accessible and reconnecting us with forgotten pioneers who helped change the ways we understand the world.

She discussed her debut book, “The Matter of Everything: How Curiosity, Physics, and Improbable Experiments Changed the World,” with Greg Kestin, Ph.D. ’14, associate director of science education and a lecturer on physics, in an online event last Wednesday presented by the Division of Science and Harvard Library with Harvard Book Store. Sheehy offered a rapid-fire overview of the history of the science, along with an introduction to some unsung heroes of the field and some peeks into where it may be heading next.

Sheehy, who oversees research groups at the universities of Oxford and Melbourne and currently focuses on medical applications, laid out five basic points. First, she said, “How we know is just as important as what we know.”

For that reason, “I celebrate experiments,” said Sheehy, whose book is organized around 12 key experiments from the last 120 years. Acknowledging that theoretical physics, practiced by such luminaries as Albert Einstein, may be better known, she described her experimental colleagues as having “a more nuanced job,” requiring “good questions, persistence, and a whole lot of luck.” As an example, she recalled the 1897 experiment into cathode rays that resulted in the discovery of electrons and “gave birth to the entire electronics industry.” Without that, she noted, “rock ’n’ roll would never have happened.”

Her second point — “The results in curiosity-driven research grow in usefulness over time” — was reflected in the 1896 discovery of the X-ray. Not only did it allow doctors to look beneath a patient’s skin, it also gave photographers a new artistic tool and has become crucial to airport security. “New discoveries make new imaginings possible,” she said.

“Science may be objective, but scientists are not,” was her next point. Even great physicists have blind spots, she noted, quoting physicist Albert Michelson, who said in 1894, “It seems probable that most of the grand underlying principles have been firmly established.” This was before the discovery of X-rays, radioactivity, and the electron — and before quantum mechanics would completely upend the field. Sheehy quipped, “It’s hard to predict the future.”

Following up on scientists’ very human failings, Sheehy delivered her fourth point in the form of a question: “Who gets to be a physicist?”

“Curiosity is a human trait,” she said. “It’s not racist or sexist, but we’ve been restricting this field.” To counter the “strong white man” narrative too often championed in her field, Sheehy briefly introduced some of the female physicists featured in her book. These include Harriet Brooks, who helped decipher how radioactive elements change, as well as Marietta Blau, whose work led to a new kind of particle detector, and Bibha Chowdhuri, an Indian particle physicist who researched cosmic rays.

Ultimately, “collaboration is the human force of nature,” said Sheehy, making her final point. Citing “the power of collaboration,” she pointed out the great strides being made at CERN, the European Organization for Nuclear Research. The organization, which has 23 member states, was designed to foster such collaboration — and invented the World Wide Web in order to do so. Currently, the Swiss-based main lab not only brings international teams together, it houses the Large Hadron Collider, allowing for the kind of experiments very few, if any, of the member countries would be able to afford on their own.

Following up Sheehy’s presentation with a discussion that included questions from audience members, Kestin asked about the future of the field. Sheehy reflected back on Michelson’s remark of more than a century ago, and how easy it is to assume that we’re reaching the end of human knowledge. “It feels like we’re done with physics, and yet we know there’s more,” she said. In particular, she pointed out that ordinary matter, even counting the most recent discoveries about subatomic particles such as muons, accounts for only roughly 4 percent of the universe’s total matter and energy. Much of the rest, including the substance known as dark matter, remains a mystery.

“It’s exciting to think that over 90 percent of matter is not understood,” said Kestin.

 



Results from a new study by Harvard researchers just published in Cell offer insights into the relationship between inflammation and the cognitive impairment we experience as we age, and suggest the possibility that it may be a result of a kind of cellular chain reaction.

“Understanding aging is one of the most important goals in biomedicine,” said Xiaowei Zhuang, David B. Arnold Jr. Professor of Science in the Department of Chemistry and Chemical Biology, professor of physics, Howard Hughes Medical Institute (HHMI) investigator, and one of the paper’s authors. “It is also a very challenging problem. One reason is because the brain is very complex. It contains an exceptionally high diversity of cells, with many different types of neurons and non-neuronal cells forming intricate interaction networks.”

To study such an intricate system, researchers used an imaging method known as MERFISH, developed by the Zhuang lab, which has extensive expertise in inventing novel imaging methods and applying them to study biological systems. MERFISH (for Multiplexed Error Robust Fluorescence In Situ Hybridization) can not only simultaneously measure thousands of RNA species, corresponding to thousands of genes, in individual cells, but also reveal the spatial relationships among them.
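
The “error robust” part of that name refers to the way MERFISH tags each RNA species with a binary barcode read out over successive rounds of imaging, with the codebook designed so that a single misread bit can still be matched to the right gene. The sketch below is a minimal illustration of that decoding idea only, using hypothetical genes and barcodes; the real codebooks and analysis pipeline behind these experiments are far more elaborate.

```python
# Minimal illustration of error-robust barcode decoding, the idea behind the
# "Error Robust" in MERFISH: each RNA species is assigned a binary barcode
# read out over successive imaging rounds, and the barcodes are kept far
# enough apart that a single misread bit still decodes to the correct gene.
# The genes and 8-bit barcodes here are hypothetical stand-ins, not the
# codebook or software actually used in these experiments.

CODEBOOK = {
    "GeneA": 0b11000011,
    "GeneB": 0b00111100,
    "GeneC": 0b10100101,
}

def hamming(a, b):
    """Number of bit positions in which two barcodes differ."""
    return bin(a ^ b).count("1")

def decode(measured):
    """Return the gene whose barcode is within one bit flip of the measured
    barcode, or None if there is no unambiguous match."""
    matches = [gene for gene, code in CODEBOOK.items() if hamming(code, measured) <= 1]
    return matches[0] if len(matches) == 1 else None

print(decode(0b11000111))  # GeneA: one bit was misread, but decoding recovers it
print(decode(0b01011010))  # None: too many errors to assign a gene
```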

MERFISH allowed researchers to generate “atlases of gene expression,” looking at “neighborhood relationships” between cells, explained Catherine Dulac, Samuel W. Morris University Professor in the Department of Molecular and Cellular Biology, HHMI investigator, and another of the paper’s authors. With MERFISH, said Dulac, “one can look not only at changes in gene expression across different ages, but also changes in gene expression, in particular cell types, related to their spatial relationship.”


Applying this big-picture approach to mouse brains, William E. Allen, a Harvard junior fellow, and two other researchers, Timothy R. Blosser and Zuri A. Sullivan from the Zhuang and Dulac labs, identified how neuronal — or nerve — cells and non-neuronal cells changed during aging. In particular, the study revealed that aging and inflammation affected how genes were expressed by cells in both similar and distinct ways and in a spatially dependent manner.

“The idea that part of the brain aging process is related to inflammation has been hypothesized already,” said Dulac. “The experimental strategies we used — MERFISH, in combination with single-cell RNA sequencing — enabled us to look at the aging process very specifically and with high granularity.”

“We found that non-neuronal cells, such as glial and immune cells, appear to undergo more pronounced changes in gene expression and cell states than neurons. And these changes don’t happen uniformly in the brain,” said Zhuang, noting that subcortical white matter showed more pronounced alterations than the gray matter, notably in the non-neuronal oligodendrocytes, astrocytes, and microglia. “Since these various cells ensure efficient electrical impulses across the brain by producing a myelin sheath around axons, as well as provide metabolic support for neurons, modulate synaptic functions, and provide immune surveillance,” Zhuang said, “these changes could have a direct impact on the function of neural circuits.”

“If oligodendrocytes are not healthy and start to shed myelin” this can start a chain reaction affecting neurons as well as non-neuronal cells that will “basically disturb many functions in the entire brain,” said Dulac.

That was striking, said Dulac, because “ultimately, aging is associated with impairment in cognition, which is directly linked to neuronal function. But if many of the changes occur in non-neuronal cells, then we may have identified a multistep process in which inflammation primarily affects non-neuronal cells, which in turn leads to impairments in neuronal function.”

Breaking down this process offers the prospect of influencing, if not stopping, it. “If there were ways through lifestyle, for example through diet, exercise, or other processes, to actually reduce the inflammatory process associated with aging, then brain aging and associated impairments could also be reduced,” said Dulac.

For now, these findings provide a roadmap for ongoing research, studies made possible by techniques such as MERFISH. Being able to “comprehensively sample all different kinds of cells and a very large number of genes in intact tissue provides a rich resource of information that can allow us to generate new hypotheses for future studies,” said Dulac.

The collaboration between the two researchers began more than a decade ago. “A number of years ago Xiaowei contacted me because her lab had just developed a super-high-resolution microscopy technology, STORM, and she was very interested in looking at brain function using this,” recalled Dulac. “This led us to initiate a wonderful collaboration where we could, for the first time, observe brain synapses at super-high resolution.” Their first joint paper, published 12 years ago, led to a string of ongoing projects incorporating new techniques and the researchers’ complementary skills.

Zhuang also talks about her collaboration with Dulac with great enthusiasm. “In our collaborations, one plus one is much bigger than two,” said Zhuang. “While many of our collaborations are initiated by conversations between Catherine and me, in this study of mouse brain aging, the collaboration was initiated by Will Allen, a top-notch Junior Fellow who is interested in research in both of our labs.”



Projections of fossil fuels’ impact on the climate, created internally by ExxonMobil starting in the late 1970s, were very accurate, even surpassing those of some academic and government scientists, according to an analysis published Thursday in Science by a team of Harvard-led researchers. Despite those forecasts, team leaders say, the multinational energy giant continued to sow doubt about the gathering crisis.

In “Assessing ExxonMobil’s Global Warming Projections,” researchers from Harvard and the Potsdam Institute for Climate Impact Research show for the first time the accuracy of previously unreported forecasts created by company scientists from 1977 through 2003. The Harvard team discovered that Exxon researchers created a series of remarkably reliable models and analyses projecting global warming from carbon dioxide emissions over the coming decades. Specifically, Exxon projected that fossil fuel emissions would lead to 0.20 degrees Celsius of global warming per decade, with a margin of error of 0.04 degrees — a trend that has been proven largely accurate.

“This paper is the first ever systematic assessment of a fossil fuel company’s climate projections, the first time we’ve been able to put a number on what they knew,” said Geoffrey Supran, lead author and former research fellow in the History of Science at Harvard. “What we found is that between 1977 and 2003, excellent scientists within Exxon modeled and predicted global warming with, frankly, shocking skill and accuracy only for the company to then spend the next couple of decades denying that very climate science.”


“We thought this was a unique opportunity to understand what Exxon knew about this issue and what level of scientific understanding they had at the time,” added co-author Naomi Oreskes, Henry Charles Lea Professor of the History of Science, whose work looks at the causes and effects of climate change denial. “We found that not only were their forecasts extremely skillful, but they were also often more skillful than forecasts made by independent academic and government scientists at the exact same time.”

Allegations that oil company executives sought to mislead the public about the industry’s role in climate change have drawn increasing scrutiny in recent years, including lawsuits by several states and cities and a recent high-profile U.S. House committee investigation.

Harvard’s scientists used established Intergovernmental Panel on Climate Change (IPCC) statistical techniques to test the performance of Exxon’s models. They found that, depending on the metric used, 63 to 83 percent of the global warming projections reported by Exxon scientists were consistent with actual temperatures over time. Moreover, the corporation’s own projections had an average “skill score” of 72 percent, plus or minus 6 percent, with the highest scoring 99 percent. A skill score measures how closely a forecast matches what actually happens, relative to a reference baseline. For comparison, NASA scientist James Hansen’s global warming predictions presented to the U.S. Congress in 1988 had scores from 38 to 66 percent.
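
To give a sense of what such a score captures, one standard formulation compares a forecast’s mean squared error with that of a naive reference, such as an assumption of no warming at all. The sketch below applies that textbook definition to made-up numbers; the exact metric and baseline used in the Science paper may differ in detail.

```python
# Minimal illustration of a forecast "skill score" in its standard
# mean-squared-error form: 100 percent is a perfect forecast, 0 percent is no
# better than the reference baseline. The temperatures below are made up for
# illustration; they are not the values from the Science paper.

def skill_score(forecast, observed, reference):
    """Percentage skill of a forecast relative to a reference forecast."""
    def mse(xs, ys):
        return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return 100 * (1 - mse(forecast, observed) / mse(reference, observed))

observed  = [0.18, 0.35, 0.55, 0.78]   # hypothetical observed warming (deg C) at four checkpoints
forecast  = [0.20, 0.40, 0.60, 0.80]   # hypothetical company projection
reference = [0.00, 0.00, 0.00, 0.00]   # naive "no warming" baseline

print(f"{skill_score(forecast, observed, reference):.0f}%")   # roughly 99%
```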

Summary of all global warming projections reported by ExxonMobil scientists in internal documents between 1977 and 2003 (gray lines), superimposed on historically observed temperature change (red). Solid gray lines indicate global warming projections modeled by ExxonMobil scientists themselves; dashed gray lines indicate projections internally reproduced by ExxonMobil scientists from third-party sources. Shades of gray scale with model start dates, from earliest (1977: lightest) to latest (2003: darkest).

The researchers report that Exxon scientists correctly dismissed the possibility of a coming ice age, accurately predicted that human-caused global warming would first be detectable in the year 2000, plus or minus five years, and reasonably estimated how much CO2 would lead to dangerous warming.

The current debate about when Exxon knew about the impact of carbon emissions on climate change began in 2015, following news reports of internal company documents describing the multinational’s early knowledge of climate science. Exxon disputed the reports, even providing a link to internal studies and memos from its own scientists and suggesting that interested parties read them and make up their own minds.

“That’s exactly what we did,” said Supran, who is now at the University of Miami. Together, he and Oreskes spent a year researching those documents and in 2017 published a series of three papers analyzing Exxon’s 40-year history of climate communications. They were able to show a systematic discrepancy between what Exxon was saying internally and in academic circles and what it was telling the public. “That led us to conclude that they had quantifiably misled the public, by essentially contributing quietly to climate science and yet loudly promoting doubt about that science,” said Supran.


In 2021, the team published a new study in One Earth using algorithmic techniques to identify ways in which ExxonMobil used increasingly subtle but systematic language to shape the way the public talks and thinks about climate change — often in misleading ways.

These findings were hardly a surprise to Oreskes, given her long history of studying climate communications from fossil fuel companies, work that drew national attention with her 2010 bestseller, “Merchants of Doubt.” In it, she and co-author Erik Conway, a Caltech researcher, argued that Exxon was aware of the threat carbon emissions posed to the climate yet waged a disinformation campaign about the problem. Despite the book’s popularity and the peer-reviewed papers with Supran, however, some continued to wonder whether she could prove the effect these campaigns had, if they indeed made a difference.

“I think this new study is the smoking gun, the proof, because it shows the degree of understanding … this really deep, really sophisticated, really skillful understanding that was obscured by what came next,” Oreskes said. “It proves a point I’ve argued for years that ExxonMobil scientists knew about this problem to a shockingly fine degree as far back as the 1980s, but company spokesmen denied, challenged, and obscured this science, starting in the late 1980s/early 1990s.”

Added Supran: “Our analysis here I think seals the deal on that matter. We now have totally unimpeachable evidence that Exxon accurately predicted global warming years before it turned around and publicly attacked climate science and scientists.”

The authors of this research were supported by a Rockefeller Family Fund grant and Harvard University Faculty Development funds.



What can take the place of real-life, tactile experience when that would mean allowing whole classes of students to paw through an illustrated manuscript nearly 250 years old?

The International Image Interoperability Framework, or IIIF.

Just ask art history Professor Jinah Kim, who last semester took students in her “Painting of India” class to Houghton Library to view a Persian text from 1778. The nearly 500-page book is filled with vibrant full-color images and handwritten adventure stories.

Students couldn’t examine the entire manuscript during class: There was too little time, and it would have been hard on the document. Instead, they were able to engage more deeply with it online, flipping through its pages and zooming in so close on illustrations they could make out brush strokes. Kim told them to “play” with the manuscript, comparing its pages or placing it side by side with similar works. They could add their own annotations on top of the illustrations and share their analyses with Kim in a shared workspace.

An example of the technology’s zooming and panning capabilities, shown in a detail of an Emily Dickinson poem dated 1859. Credit: Houghton Library

Engaging with images has come a long way since Kim, the George P. Bickford Professor of Indian and South Asian Art, was a student herself. At that time, more than 20 years ago, students could only view images of the art they were studying printed in books or projected as slides in a classroom.

Today, Kim said, IIIF gives students as-good-as, or better-than, in-person examination of the works they’re studying. IIIF is an open-source technological framework that has become the universal standard for online, high-resolution viewing of cultural heritage images like paintings or manuscripts.
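
In practice, that standard comes down to a small set of agreed-upon web requests. The sketch below shows the general shape of a IIIF Image API URL, which any compliant image server can answer; the server address and image identifier are hypothetical placeholders rather than real Harvard endpoints.

```python
# Minimal illustration of a IIIF Image API request, the kind of web call a
# IIIF viewer issues as a user zooms and pans. The server and identifier are
# hypothetical placeholders, not real Harvard endpoints.

BASE = "https://iiif.example.org/iiif"   # hypothetical IIIF image server
IDENTIFIER = "manuscript-page-42"        # hypothetical image identifier

def iiif_url(region="full", size="max", rotation=0, quality="default", fmt="jpg"):
    """Assemble {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}."""
    return f"{BASE}/{IDENTIFIER}/{region}/{size}/{rotation}/{quality}.{fmt}"

print(iiif_url(size="800,"))             # the whole page, scaled to 800 pixels wide
print(iiif_url(region="0,0,1000,1000"))  # a full-resolution detail cropped from the upper left
```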

Developed in 2011, IIIF was originally intended for viewing digitized medieval manuscripts, Harvard Associate University Librarian and Managing Director of Library Technology Services Stuart Snydman explained. But it was soon clear that its applications went far beyond this one use.

Since its inception, the database of cultural heritage images available for free online with IIIF capability has continued to grow. In 2022, the IIIF community estimated that, across all participating cultural heritage institutions, more than 1 billion items had been made available.

“With IIIF, we’re investing in the cultural heritage image community,” Snydman said. “Our goal is global, universal, as open as possible. It’s not just about Harvard’s images; it’s about enabling students and faculty to interact in the very same way with images at Oxford, the Library of Congress, or the Vatican that they do with images held at Harvard. The code word for this is interoperability.”

Of the 1 billion IIIF-compatible items, about 6 million are held in Harvard’s library collections. Everything from 500-year-old maps to modern photographs is viewable in high resolution by anyone with an internet connection. Emily Dickinson’s pencil strokes can be magnified and examined, and Persian manuscripts like the one studied by Kim’s class can be compared with illustrations from the same region and period held at the Library of Congress.

Rashmi Singhal of HUIT Academic Technology has been working on IIIF at Harvard for 10 years. She was hired in 2013 as a software engineer to build the 2.0 version of Mirador, the IIIF image-viewing application Harvard uses, and to make Harvard’s existing digital collections IIIF-compatible.

Her unit and Harvard Library Technology Services worked closely with Harvard faculty and with staff at the Harvard Art Museums to build out Harvard’s IIIF compatibility. Library chief of staff Franziska Frey said the coalition around IIIF was collaborative across the University.

“From the beginning, we were adamant about it being a true collaboration,” said Frey.

With continued cooperation, Harvard has built out its IIIF capabilities over the last decade and created several applications for scholars to make the most of IIIF.

Singhal said she sees some researchers using the Mirador image viewer’s “deep zoom” feature to transcribe difficult-to-read handwritten manuscripts, while others use its “workspace” feature to pull together pages of the same manuscript held at different institutions to examine them at once.

“You’re not going to be able to see with the same level of detail in person as you can with deep zoom,” she said, “and if a manuscript’s pages are held separately, you can’t ‘page through’ it physically, but you can digitally. IIIF has really revolutionized scholarly examination of some of these materials.”
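
Behind that workspace feature sits the IIIF Presentation API, in which each object is described by a “manifest” listing its page images in a common JSON structure, so pages published by different institutions can be gathered into a single view. The sketch below assumes Presentation API 3.0 and uses an entirely made-up miniature manifest; it illustrates the structure, not any real Harvard record.

```python
# Minimal illustration of a IIIF Presentation API 3.0 "manifest," the shared
# JSON structure that lets a viewer gather pages published by different
# institutions into one workspace. This manifest is a made-up miniature, not a
# real record from Harvard or anywhere else.

EXAMPLE_MANIFEST = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "type": "Manifest",
    "label": {"en": ["Hypothetical manuscript, portion held at Institution A"]},
    "items": [  # one "canvas" per page image
        {"id": "https://iiif.example.edu/canvas/p1", "type": "Canvas", "label": {"en": ["folio 1r"]}},
        {"id": "https://iiif.example.edu/canvas/p2", "type": "Canvas", "label": {"en": ["folio 1v"]}},
    ],
}

def canvases(manifest):
    """Yield a (label, canvas id) pair for each page image a manifest describes."""
    for canvas in manifest.get("items", []):
        label = ", ".join(canvas.get("label", {}).get("en", ["(untitled)"]))
        yield label, canvas["id"]

# A workspace can mix canvases from any number of manifests, because every
# IIIF-capable repository publishes the same structure.
for label, canvas_id in canvases(EXAMPLE_MANIFEST):
    print(label, canvas_id)
```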

IIIF also enables interactive virtual exhibitions and collaborative online research projects, like Kim’s “Mapping Color in History.” A key part of the project, which is a searchable database of pigment analysis in Asian paintings, was possible because of her ability to annotate on IIIF images, Kim said.

“With digital access and IIIF, you can see an image, manipulate it, and even add your own things to it,” she said. “It really opens a door for interesting possibilities to engage with images.”

While IIIF-savvy scholars like Kim think creatively about future uses for IIIF, those on the technical side also appreciate its implications in terms of educational access.

“The fact that IIIF has been able to become a universal standard, and that it’s all open-source — that has exciting implications for democratized learning,” said Snydman. “Students and scholars of all ages have the opportunity to learn with images — not just in a physical classroom or library, not just during certain hours, and not just on Harvard’s campus. This is a great example of how technology can be used to minimize inequalities in education and give open access to knowledge.”

 


