For the first time, astronomers have spotted an exoplanet whose orbit is decaying around an evolved, or older, host star. The stricken world appears destined to spiral closer and closer to its maturing star until collision and ultimate obliteration.
The discovery offers new insights into the long-winded process of planetary orbital decay by providing the first look at a system at this late stage of evolution.
Death-by-star is a fate thought to await many worlds and could be the Earth’s ultimate adios billions of years from now as our sun grows older.
“We’ve previously detected evidence for exoplanets inspiraling toward their stars, but we have never before seen such a planet around an evolved star,” says Shreyas Vissapragada, a 51 Pegasi b Fellow at the Center for Astrophysics | Harvard & Smithsonian and lead author of a new study describing the results. “Theory predicts that evolved stars are very effective at sapping energy from their planets’ orbits, and now we can test those theories with observations.”
The findings were published Monday in The Astrophysical Journal Letters.
The ill-fated exoplanet is designated Kepler-1658b. As its name indicates, astronomers discovered the exoplanet with the Kepler space telescope, a pioneering planet-hunting mission that launched in 2009. Oddly enough, the world was the very first new exoplanet candidate Kepler ever observed. Yet it took nearly a decade to confirm the planet’s existence, at which time the object entered Kepler’s catalogue officially as the 1,658th entry.
Kepler-1658b is a so-called “hot Jupiter,” the nickname given to exoplanets on par with Jupiter’s mass and size but in scorchingly ultra-close orbits about their host stars. For Kepler-1658b, that distance is merely an eighth of the space between our sun and its tightest orbiting planet, Mercury. For hot Jupiters and other planets like Kepler-1658b that are already very close to their stars, orbital decay looks certain to culminate in destruction.
Measuring the orbital decay of exoplanets has challenged researchers because the process is very slow and gradual. In the case of Kepler-1658b, according to the new study, its orbital period is decreasing at the minuscule rate of about 131 milliseconds (thousandths of a second) per year, with a shorter orbit indicating the planet has moved closer to its star.
Detecting this decline required multiple years of careful observation. The watch started with Kepler and then was picked up by the Palomar Observatory’s Hale Telescope in Southern California and finally the Transiting Exoplanet Survey Satellite, or TESS, which launched in 2018. All three instruments captured transits, the term for when an exoplanet crosses the face of its star and causes a very slight dimming of the star’s brightness. Over the past 13 years, the interval between Kepler-1658b’s transits has slightly but steadily decreased.
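To see how such a tiny drift becomes measurable, note that if the period shrinks at a steady rate, each transit arrives slightly earlier than a constant-period ephemeris predicts, and the accumulated offset grows with the square of the number of orbits. Here is a minimal back-of-envelope sketch in Python, assuming a roughly 3.85-day period for Kepler-1658b and the 131 ms/yr decay quoted above (illustrative figures, not the study’s actual fit):

```python
# Back-of-envelope: cumulative transit-timing shift from a decaying orbit.
# Assumes P ~ 3.85 days and dP/dt ~ -131 ms/yr (the figures quoted above);
# an illustrative sketch, not the published study's actual fit.

SECONDS_PER_YEAR = 3.156e7

P = 3.85 * 86400.0                    # orbital period (s)
P_dot = -0.131 / SECONDS_PER_YEAR     # dP/dt, seconds per second

years = 13.0                          # Kepler-through-TESS baseline
N = years * SECONDS_PER_YEAR / P      # orbits elapsed (~1,200)
dP_per_orbit = P_dot * P              # change in period per orbit (s)

# With a steadily shrinking period, transit times follow a quadratic
# ephemeris: t_N = t_0 + N*P + 0.5 * dP_per_orbit * N**2.
# The deviation from a constant-period prediction is the quadratic term.
shift = 0.5 * dP_per_orbit * N**2
print(f"after {N:.0f} orbits, transits arrive ~{-shift / 60:.0f} min early")
```

Although the per-orbit change is only about a millisecond, the quadratic buildup amounts to a shift of roughly a quarter of an hour over 13 years, which is comfortably detectable in precision transit photometry.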
The root cause of the orbital decay experienced by Kepler-1658b is tides — the same phenomenon responsible for the daily rise and fall in Earth’s oceans. Tides are generated by gravitational interactions between two orbiting bodies, such as between our world and the moon or Kepler-1658b and its star. The bodies’ gravities distort each other’s shapes, and as the bodies respond to these changes, energy is released. Depending on the distances, sizes, and rotation rates of the bodies involved, these tidal interactions can push the bodies apart — the case for the Earth and the slowly outward-spiraling moon — or draw them together, as with Kepler-1658b and its star.
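For readers who want the quantitative version: in the standard constant-phase-lag treatment of tides, the decay rate is commonly parameterized by the star’s modified tidal quality factor $Q'_\star$. A textbook form of the relation (a sketch of the usual parameterization, not necessarily the exact model fit in the new study) is:

\[
\frac{dP}{dt} \;=\; -\,\frac{27\pi}{2\,Q'_{\star}}\,\frac{M_p}{M_\star}\left(\frac{R_\star}{a}\right)^{5}
\]

Here $M_p$ is the planet’s mass, $M_\star$ and $R_\star$ are the star’s mass and radius, and $a$ is the orbital separation. The steep $(R_\star/a)^5$ dependence is why an expanding subgiant star accelerates the process, and why a measured $dP/dt$ translates directly into an estimate of how efficiently the star dissipates tidal energy.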
There is still a lot researchers do not understand about these dynamics, particularly in star-planet scenarios. Accordingly, further study of the Kepler-1658 system should prove instructive.
The star has evolved to the point in its stellar life cycle where it has started to expand, just as our sun is expected to, and has entered into what astronomers call a subgiant phase. The internal structure of evolved stars should more readily lead to dissipation of tidal energy taken from hosted planets’ orbits compared to unevolved stars like our sun. This accelerates the orbital decay process, making it easier to study on human timescales.
The results further help in explaining an intrinsic oddity about Kepler-1658b, which appears brighter and hotter than expected. The tidal interactions shrinking the planet’s orbit may also be cranking out extra energy within the planet itself, the team says.
Vissapragada points to a similar situation with Jupiter’s moon Io, the most volcanic body in the solar system. The gravitational push-and-pull from Jupiter on Io melts the moon’s innards. This molten rock then erupts out onto the moon’s famously infernal, pizza-like surface of yellow sulfurous deposits and fresh red lava.
Stacking additional observations of Kepler-1658b should shed more light on celestial body interactions. And, with TESS slated to keep scrutinizing thousands of nearby stars, Vissapragada and colleagues expect the telescope to uncover numerous other instances of exoplanets circling down the drains of their host stars.
“Now that we have evidence of inspiraling of a planet around an evolved star, we can really start to refine our models of tidal physics,” Vissapragada says. “The Kepler-1658 system can serve as a celestial laboratory in this way for years to come, and with any luck, there will soon be many more of these labs.”
The Lawrence Livermore National Lab in California last week achieved fusion with a net energy gain, the U.S. Department of Energy reported on Thursday. That is, by focusing 192 giant lasers on a bit of frozen deuterium and tritium, the lab’s National Ignition Facility created a reaction that produced more energy than it used, a threshold called “ignition.” The long-sought result is a major breakthrough in nuclear fusion, with exciting, if still very far off, implications for renewable energy. We asked Adam E. Cohen, a professor of chemistry, chemical biology, and physics, to explain what happened and why it matters. The interview has been edited for clarity and length.
Q&A
Adam E. Cohen
GAZETTE: What is fusion?
COHEN: Fusion is the process of colliding light nuclei with each other to form heavier nuclei. This process can release huge amounts of energy as the nuclei combine.
GAZETTE: It sounds like what the scientists did was smash two hydrogen isotopes together to make helium, which has slightly less mass. But how does that create energy?
COHEN: Einstein taught us more than a century ago, in his famous formula E = mc², that you can convert mass into energy. So a little bit of the mass of the hydrogen isotopes that are getting fused together goes into energy, which comes out of this reaction.
GAZETTE: Why doesn’t it just stay as mass? Why aren’t there just extra bits of mass flying around?
COHEN: Mass comes in discrete chunks, and if you add up the mass of the helium nucleus and of the neutron that comes flying out in this process, there’s a little bit of a difference. Another way of thinking about it is that helium has two protons and two neutrons, and those protons and neutrons are bound to each other. They stick to each other very hard, very strongly. And when the hydrogen isotopes fuse to make that helium nucleus, the process of them sticking to each other releases a lot of energy. They attract each other, just the way the north and south poles of a magnet might attract each other. And as they smash into each other, they release a lot of energy.
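To make the bookkeeping concrete, here is the arithmetic for the deuterium-tritium reaction Cohen describes, using standard published atomic masses (a back-of-envelope sketch; the numbers are textbook values, not figures from the NIF shot):

```python
# Mass defect of D + T -> He-4 + n, using standard atomic masses (in u).
# E = m * c^2 is applied via the conversion 1 u = 931.494 MeV.
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)  # ~0.0189 u
energy_mev = defect * 931.494  # ~17.6 MeV released per fusion event
print(f"mass defect = {defect:.5f} u -> {energy_mev:.1f} MeV per reaction")
```

Roughly 0.4 percent of the initial mass comes out as energy, carried mostly by the fast neutron and the helium nucleus.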
GAZETTE: Is this how the sun works?
COHEN: It’s a slightly different reaction in the sun, but that’s also a fusion reaction. It’s the basis of the sun, and it’s the basis of thermonuclear weapons — hydrogen bombs. The hope is that we can make this reaction happen on a scale which is controllable enough to be useful for people.
GAZETTE: Aren’t there dangers involved in this kind of experiment?
COHEN: The experiment itself is not really dangerous. Unlike nuclear fission, fusion is hard to do. If you get enough radioactive material together in one place, it will spontaneously undergo fission and you can get runaway reactions. Fusion is very hard to get going: the atomic nuclei of the hydrogen isotopes are positively charged, and we know that like charges repel each other. And so it’s very hard to get those nuclei close enough together that the attractive interactions can take over, and that they can actually undergo this reaction. You really have to squeeze them very, very hard at high pressures and get them moving really fast at high temperatures in order for there to be any chance of fusion starting. If you take away the drive, if you take away the lasers, the whole thing stops.
On the other hand, the neutrons that come out of a fusion reaction can react with materials they hit to produce low-level residual radioactivity. It’s not a completely clean technology. There is some amount of radioactive waste produced.
GAZETTE: But this is still an overwhelmingly positive development, correct?
COHEN: It’s a great one. It’s an amazing piece of physics, and it shows that, as a community, our understanding of the physics of fusion is sufficiently advanced that we can predict and achieve these reactions under the controlled conditions of the National Ignition Facility.
The hopes that this will somehow be relevant for the human energy supply on Earth are still a really long way off. The primary purpose of the National Ignition Facility is not actually renewable energy; it’s around stockpile stewardship. It’s around how we maintain and keep track of our arsenal of hydrogen bombs. Given that we are no longer testing them, we have to make sure that they’re safe and that we understand how they work. So the primary purpose of the facility is really for simulating the conditions in those bombs and understanding the physics there.
GAZETTE: Are we anywhere near this research being applicable?
COHEN: The short answer is no. It’s not scalable as currently implemented. So just to give you a sense of the scale — the energy released in this shot was about three megajoules. Three megajoules of energy is about the energy you would get from eating a couple of jelly doughnuts, roughly 700 kilocalories. So, 700 kilocalories is a lot, but this is a multi-billion-dollar facility and it can fire one of these shots every eight hours. This is nowhere near what you would need in terms of throughput for actually producing useful quantities of energy.
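Cohen’s scale argument is easy to check with the figures quoted above (one roughly 3-megajoule shot every eight hours; this is an order-of-magnitude sketch only):

```python
# Order-of-magnitude check on the scale argument, using only figures
# quoted above: 3 MJ per shot, one shot every eight hours.
shot_energy_j = 3.0e6          # ~3 megajoules of fusion output per shot
shot_interval_s = 8 * 3600     # one shot every 8 hours

print(f"average power: {shot_energy_j / shot_interval_s:.0f} W")  # ~100 W
print(f"in food-energy terms: {shot_energy_j / 4184:.0f} kcal")   # ~700 kcal
```

Averaged over a day, the facility’s fusion output is comparable to a single incandescent light bulb.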
GAZETTE: This is the first time that scientists have produced a net energy gain — aka “ignition” — from a fusion reaction, right?
COHEN: People talk about how this reaction for the first time produced more energy than went into it. But this really depends on how you do the accounting. It’s a little bit like passing water from hand to hand — at every step along the way, you lose a little bit. In this case, there was more energy released from the reaction than in the photons in the light that went into compressing and heating this capsule.
But if you look at the electrical energy that was used to drive the lasers to produce that light, it was vastly more than the energy released in the reaction. And if you imagine trying to build an actual power plant, you would have to take the heat and the neutrons released from this reaction, use them to make steam, and use that steam to power turbines. You’d get losses along the way there, too. So you would need a several-hundred-fold increase in the output of the reaction for it to be actually net positive when you look at the whole system, as opposed to just locally around the reaction.
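The two kinds of accounting Cohen distinguishes can be put side by side. The round numbers below are the approximate, widely reported figures for the December shot (about 2 MJ of laser light onto the target, about 3 MJ of fusion output, and on the order of 300 MJ of electricity to fire the lasers); they are used here only to illustrate the distinction:

```python
# Two different "gains" for the ignition shot, using approximate,
# widely reported round numbers; illustrative only.
e_laser_light = 2.0    # MJ of laser light delivered to the target
e_fusion_out = 3.0     # MJ released by the fusion reactions
e_wall_plug = 300.0    # MJ of electricity drawn to fire the lasers

print(f"target gain:    {e_fusion_out / e_laser_light:.1f}  (>1 = 'ignition')")
print(f"wall-plug gain: {e_fusion_out / e_wall_plug:.2f} (far below breakeven)")
```

Passing the first threshold while remaining two orders of magnitude short of the second is exactly the gap Cohen describes.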
New research offers insights that someday may help scientists create the kind of stem cells capable of reversing all manner of human ills, the same powerful cells that generate all the different types of cells needed in our embryonic development.
Early on, our bodies are chock-full of these pluripotent stem cells, capable of generating any other type of cell. The problem is we lose them at birth. Though our fully formed bodies do have stem cells, they are found only in a few select locations and have very specific functions (skin stem cells can make only skin cells; hair stem cells produce only hair cells).
Harvard researcher Mansi Srivastava, of the Department of Organismic & Evolutionary Biology (OEB), has long been struck by the fact that adults of many other species keep pluripotent populations. Wouldn’t it be great, she wondered, if we could treat disease and injury by replacing any type of damaged or injured cell on demand with pluripotent stem cells that persisted into adulthood? In a new paper published in Cell, Srivastava describes the discovery of a specific pair of cells in an animal embryo that gives rise to these unique cells, and which genes are flipped on and off in their creation.
Adult pluripotent stem cells allow many animals, including hydras, flatworms, and the like, to replace virtually any tissue they lose to injury. It’s a phenomenon found widely across the tree of animal life, but researchers haven’t known how these cells are made.
“This raised the cool idea that having adult pluripotent stem cells could be a fundamental feature of animal biology that we as humans don’t have anymore,” said Srivastava, who has been studying one of those remarkably regenerative animals, the three-banded panther worm, a marine acoel worm whose scientific name is Hofstenia miamia. In a previous paper, Srivastava described a glow-in-the-dark technique capable of marking specific cells in the embryos and adults of this worm.
In this new research, Srivastava used the same technique to follow the worm embryo’s path through maturation. Her team studied the individual cells of an embryo through the 16-cell stage. At the beginning, all the cells glowed green. Using a laser, Julian Kimura, a Ph.D. student in the Graduate School of Arts and Sciences working in the lab, converted one green cell to red, and the embryo continued on its developmental path. “After the worm was fully developed, we could then see what the red cell wound up producing in the adult worm — a brain cell, skin cell, anything,” she said.
After repeating the experiment on different cells with many worm embryos, the team homed in on one pair of cells “that, when the worms hatched, produced this stem cell population,” Srivastava recalled. “We found what we believe are the embryonic origins of adult pluripotent stem cells.”
The team also discovered which genes are associated with the formation of these stem cells. “Now that we have isolated the stem cells, and the genes that control them, we can really start asking how the genes are working to keep the cells pluripotent,” said Srivastava.
The three-banded Hofstenia worms have already proven that an animal can keep cells in a pluripotent state. “Looking ahead, we want to understand how, in a natural context like an animal’s body, can any organism keep cells in a pluripotent state all the time and control them perfectly to undergo differentiation whenever needed,” said Srivastava, “something that our bodies just don’t know how to do.”
This work was supported by the Searle Scholars Program, the Smith Family Foundation, the National Institutes of Health, the Department of Organismic and Evolutionary Biology, the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard, and the Harvard Quantitative Biology Initiative.
Excerpted from “Minding the Climate: How Neuroscience Can Help Solve Our Environmental Crisis” by Ann-Christine Duhaime, the Nicholas T. Zervas Distinguished Professor of Neurosurgery at Harvard Medical School and former director of pediatric neurosurgery at the Massachusetts General Hospital.
From cave paintings to “Beowulf” to classical symphonies to the Mars Rover to decoding the human genome, the human brain has an extraordinary track record of creativity and problem-solving. We evolved with unparalleled ability to identify and successfully tackle challenges to our survival, an advantage that underlies our unique ability to inhabit and exponentially populate every corner of the earth.
But our extraordinary brains don’t solve all problems equally well. Our evolutionary history equipped us to perceive, prioritize, and find solutions for some kinds of problems more easily than for others, and there are some for which we are ill-suited novices. In all our pursuits, we are guided by an extraordinary internal mechanism that evaluates, second to second, our actions in relation to a shifting panoply of human rewards. This complex mechanism, honed by millions of years of history but flexible by nature, assigns value to our choices and guides us with electrochemical currencies that are exquisitely designed under the influence of evolutionary pressure to be fleeting. It is the understanding of that mechanism and how it intersects with our human decisions relevant to climate change that we will pursue in this exploration.
Science, industry, technology, politics, economics, extractivism — climate change is about all these things and many more. It is as wide as the world and as deep as history in its causes and in its scope. But ultimately, climate change is about human behavior. The human brain represents both the cause and the potential solution to this “grand challenge.” Scientific details of predictive models can be debated in this unprecedented arena, but the reality of climate change and the pre-eminent role of human activity are widely accepted around the world. Environmental decline gravely affects society’s most vulnerable populations already; it is well along in its inexorable dismantling of ecosystems and populations worldwide.
If our brains are so able and adaptable, why, then, is it that on average we struggle to acknowledge and respond effectively to an accelerating environmental dismantling, one that comes with critical time limits, and has been recognized as steadily worsening for over half a century? The cause of the problem is not obscure: High-income industrialized countries contribute more than anyone else in the world to global greenhouse gas accumulation and other aspects of environmental decline via ever-increasing consumption. Despite this, our individual and collective behaviors have been slow to change in response to the increasingly urgent consequences of the way we live and the decisions we make individually, institutionally, and politically. To understand the paradox of our inactivity in this outward-facing, global-scale problem of climate change we need to look inward, at how our brains work. Within these insights — about how the human brain perceives and approaches specific types of challenges, particularly those requiring a re-evaluation of choices guided by the human reward system — lies potential for change, and some cause for hope.
Technological fixes, including new energy sources and engineered mitigation approaches, clearly are essential to slow the acceleration of greenhouse gas accumulation; these are the focus of intense efforts in research laboratories around the world. Prioritizing and adopting these novel technologies will require overt and large-scale changes in behavior, including revolutions in infrastructure, institutions, and economies. But changing large-scale institutions will take time, and based on the current best climate change predictions, unlimited time is what we don’t have before we make essential changes in our behavior.
During the critical decades in the first half of the 21st century, before we can overhaul our physical and economic infrastructure so that technology can bail us out, keeping climate change within a range that has a chance of averting the worst catastrophic synergies requires a bridge to decreased consumption that can be adopted quickly, widely, cheaply, and easily. A significant body of research suggests that relatively straightforward measures that already exist to reduce waste, to substitute different behaviors for accomplishing tasks in the residential and workplace spheres, and to consume less in specific categories by those in high-income countries may be the most effective — and perhaps the only realistic — way to bridge this carbon output gap. It won’t solve the problem, say these scholars, but instituting such measures can result in sufficient climate stabilization to maintain a relatively resilient and recognizable world. Furthermore, these changes don’t need to drastically reduce our quality of life. But they do require change.
Climate change in particular plays to our weaknesses. Much has been written about our difficulties in perceiving climate change, due in part to inconsistencies in the information we receive and also stemming from the well-studied tendency to “discount” events that are perceived to occur far in the future or in geographically distant places. These heuristic errors, when we “shortcut” complex decisions involving uncertainty to make them easier to address, have played an important role in our quandary. There are additional key features of climate change for which our inherited neural equipment has limited perception. This occurs because, in short, from the brain’s point of view, the behaviors required for this first-of-its-kind problem just aren’t very rewarding.
A strong case can be made that we have faced other big challenges that required major overhauls of the social order and behavioral norms. In the history of the United States, attitude and behavior changes in response to industrialization, racial inequality, and women’s rights spread socially in fits and starts, stumbling gradually toward a critical mass supporting a new normal. Global pandemics have spurred dramatic changes in daily life, as well as remarkable pivots in targeted science and technology. Though these problems are difficult and still nowhere near approaching remission, they have features that we are equipped to recognize, and we generally can link responsive individual, moral, and political actions to potential solutions. For climate change action, discounting and other psychological shortcuts slow our progress. But barriers also arise from a discordance between how our brains were designed by evolution to weigh decisions based on survival pressures during a different time in history, compared to the behavior changes required for this unique crisis today. We can take action to avert the worst possible outcomes, but the solutions, especially in the time frame required, do not come naturally to us. Our brains may be more readily equipped to make us feel effective and positive by sending money to flood victims than we feel by engaging in the kinds of behavior changes that would help prevent the cause of their suffering. Still there may be ways to pick up the pace if we know why these changes are especially difficult and can implement strategies proven to make them easier.
But change is not purely rational. Why don’t people seatbelt their kids? Why don’t all motorcyclists wear helmets? Why don’t addicts just quit? Why can’t we do what we need to do to stop destroying our planet while we still have the chance?
Many fields of study have tried to answer questions about difficult behavior change, from public health, economics, and psychology to government and policy. For climate change, making different decisions with differently weighted priorities is required not just at the level of individuals in their private and work lives but also in their leadership and political roles and as influencers of contagious social movements. It requires change in prioritization for people making decisions as managers of companies, planners in industry, financiers and economists, media influencers, voters, officeholders, and policymakers. But regardless of the scale of influence, the basic unit of behavior change happens person by person.
So here we focus our attention on the neural mechanisms by which the recipient of new information or new circumstances changes the calculations by which decisions are made, to better understand what elements go into change, and which tend to have the greatest influence, at the individual or group scale. While many researchers have chronicled the reasons people have trouble perceiving the importance of climate change, a smaller number have studied what works best to actually change behavior to facilitate choices with a more direct impact on the climate problem itself. Even fewer have applied a neuroscience lens to understand whether the behavior changes needed may be facilitated by working with, rather than against, the brain’s functional design. What is it about how the brain is designed to work that makes this problem difficult for us, and how can we best use that information to help move us in a more effective direction?
Much of the research on behavior change relevant to the environment comes from the field of psychology, with investigators often working in concert with economists and researchers from other disciplines. While classic psychology experiments study behavior observed within a specific time and circumstance, related and overlapping approaches in neuroscience investigate how the nervous system works at the level of cells, molecules, and genes. Psychology describes behavior in specific situations, while neuroscience provides complementary insights into how consistent or malleable behavior may be, based on the plasticity and adaptability inherent in the brain’s very design. Whether shifts in behavior are likely to be an effective tool in the climate change battle can be answered only by knowing the environmental impact of specific behaviors, the flexibility of people to make different choices, and the likelihood that enough people might be influenced to change their behavior in a particular direction.
Neuroscience historically has not turned much of its attention to climate change, but the field is steeped in the study of behaviors that are relevant to this problem. Clinicians practicing in neuroscience-based specialties deal routinely with disorders that involve adaptive and abnormal goal-directed behaviors, the influence of experience and neural plasticity on brain function, and other manifestations of the intersection of brain and behavior. Building on painstaking work in basic neuroscience, they treat drives that are “out of balance” — excessive in addictions, dysfunctional after damage to motivation and reward networks — and require strategies for behavior change. As one striking example, patients with Parkinson’s disease whose medication doses or deep brain stimulators to control tremors are turned up too high may become compulsive gamblers or shoppers. These disorders shed light on the circuitry and modulation of healthy and “pathologic” brain networks that influence the drive to consume.
In other contexts, clinicians observe daily the amazing resiliency of the human complement of drives and motivations honed also by millions of years of nervous system evolution. Humans are rewarded by agency — the sense of accomplishment afforded by successfully completing a task. But perceiving agency from climate change action is a more difficult neural challenge.
Other brain-mediated rewards are similarly critical to the human story. Social rewards are among the most powerful ever identified. And children are especially rewarded when fulfilling their innate drive to explore, learn, and experience. Even after major surgery, what children want most is to go to the playroom and seek out toys; distraction by novelty (most recently, by iPads in the recovery room) has been shown to be more effective than narcotics for reducing pain. There are data demonstrating that exposure to nature is rewarding but that this reward differs in fundamental ways from that of acquisition and consumption. We will explore these neural traits in more detail to learn what factors facilitate different types of behavior change, including those that might have environmental consequences.
This book arose from a particular journey through the topic of environmentally relevant behavior from the perspective of a brain-focused clinician-scientist, specifically in the field of neurosurgery. Not surprisingly, people in this field tend to think everything is about the brain — but is that useful? From a brain-centric point of view, behavior-related problems like climate change reflect the design of the equipment we use to interact with and influence the world. Solutions may be enhanced by taking into account an explosion of new insights from neuroscience on how this equipment works at a fundamental level, and how it is or isn’t suited to the various responses at hand.
From the lens of neuroscience, decisions are arbitrated by the brain’s reward system, with inputs from a wide variety of internal and external influences. The brain’s decisions and priorities are not predetermined by genes or some unalterable program, and they differ from person to person. Our neural equipment is exquisitely designed to respond to changing conditions — but with certain predispositions and limitations. Understanding the evolutionary design and workings of the brain’s reward system in decision-making can help us understand the choices humans tend to make in the environmental realm — and most importantly, how malleable these choices may be. While we have some common tendencies to behave in certain ways, we also are engineered to be different from one another by design, as this works best for societal problem-solving and survival. In addition, our neural design includes the trait of being highly adaptable to specific types of new circumstances — though there are some strategies that can be called on to make us adapt more easily. Which behaviors contribute the most to environmental harm? What works and doesn’t work to change behavior, and why? How fixed or flexible is the decision-making apparatus of the human brain? How does the way we live in current times intersect with our inherited equipment to make things even worse? And finally, if the reward system is an important mediator for behavior affecting climate change, we should be able to create a test case for this hypothesis. Specifically, can we successfully influence decision-makers at an institutional level to make pro-environmental behavior more likely, by making it more rewarding?
We have gotten out of fixes in the past with ingenuity and technology. In pandemics, epidemiologists talk of “flattening the curve” — slowing things down enough so that worst-case scenarios don’t overwhelm our capacity to cope. We appear to be caught in a similar time crunch to change our behavior over decades, in the hope of giving science and technology, politics, and economics some breathing room to find more long-range solutions, considering the time it will take to get individuals and governments to cooperate and institute large-scale changes. But even on smaller scales, behavior change is hard, and we need all the insights we can bring to the problem.
Humans share a very old, stocky, and likely slimy ancestor, one that Stephanie E. Pierce, professor of organismic and evolutionary biology, describes as a “short, chunky, croco-salamander.”
More than 330 million years ago, when fish-like creatures started growing legs and taking their first steps on swampy land, they started on a new Darwinian path in the general direction of humanity. But even across that chronological gulf, the fossils of these early animals, called tetrapods, can teach us about our own human bodies, specifically why we grow relatively quickly while other animals, like today’s amphibians, plod along at a slow and steady pace. Salamanders, for example, live as fish-like larvae for years before going through metamorphosis to emerge as juveniles, a stage in which they’ll remain for another two to three years.
“Our strategy, not just as humans, but as mammals, is to grow really fast and then reach adult size and slow down,” said Megan Whitney, a former postdoc in Pierce’s lab who is now an assistant professor of vertebrate paleontology and paleohistology at Loyola University. “That strategy was thought to be specialized.” Now, according to new research, humans might not be as special as once thought and the resulting insights might help begin to explain how and why some animals developed a distinct evolutionary advantage on land.
The study was published in Communications Biology by Pierce and Whitney with help from Benjamin Otoo and Kenneth Angielczyk of Chicago’s Field Museum of Natural History. The museum is host to a “giant treasure trove,” as Pierce put it, of fossils of a particular early tetrapod known as Whatcheeria, an animal previously thought to be among those that grew slowly and steadily. But, by studying the bones of these ancient creatures, the team discovered that at least some of these croco-salamanders were growing much faster than expected, which may have helped them fill the role of an apex predator or even use their stocky legs to make that legendary crawl out of the sea and onto land.
“The water/land transition in tetrapods is one of the icons of evolution,” said Angielczyk. But fish didn’t just sprout legs and crawl on shore; that transition was far from smooth. Some of these early creatures might have evolved sturdier legs that let them both plow through swamps and swim. “There was this burst of experimentation that happened, and some of those experiments worked, and some didn’t,” Angielczyk said.
Judging from Whatcheeria bones, this evolutionary experimentation looks like a mosaic, Pierce said, with some parts of the skeleton seemingly better suited for land and other bits designed for water. That hodgepodge can make it challenging — but also incredibly valuable — to study these animals, which were perched midway through one of the most important evolutionary moments in our lineage. Unlike biologists studying modern animals, Pierce and her team cannot observe Whatcheeria in their natural habitat. To understand how these creatures lived and adapted to a new land-based environment, they search for clues in their bones. And one clue — growth rate — suggests much about how an animal lived and survived.
Today, mammals (including humans), birds, and reptiles all grow relatively quickly from a young age, careening toward adulthood as if their lives depend on it — and in the wild, they often do. This is a super-specialized trait, honed over millions of years of evolutionary tinkering, which is why Whatcheeria, one of the earliest tetrapods and thus closer to that moment when fish sprouted legs, was assumed to predate the trait entirely.
“We have this idea that these animals should be growing slow and sluggish,” said Whitney. “We think of that as being the ancestral state for animals. So, to find it so far back in our family tree is totally surprising.”
Whitney was the team’s expert histologist; she examined the bone specimens for evidence (or, as Pierce put it, she read them “like a storybook”). Together, the team looked at four different stages of development: juvenile, subadult, adult, and skeletally mature adult. That’s when they noticed something peculiar: Juveniles had fibrolamellar bone, a fast-depositing type present in animals with accelerated growth rates. In addition, these bones had no growth lines, which typically indicate periods of slowed growth. So it appears that young Whatcheeria grew right through seasonal changes — dark and cold to bright and hot — which often force animals to pause their growth.
“It’s a weirdo,” said Whitney.
For Whatcheeria, being weird was probably beneficial. Because it was likely the biggest predator in its ecosystem, growing quickly probably helped it thwart rivals, giving it a better shot at surviving to sexual maturity and reproducing. Pierce speculated this elevated growth rate could have helped the species survive environmental changes or even mass-extinction events.
As any human parent knows, to grow fast, animals must eat — a lot. Without a steady source of calories, the animal would struggle. And not all early tetrapods or their ancient cousins found themselves on the speedy route. “There was a diversity of animals living during the Devonian and early Carboniferous periods,” said Pierce, “and each had their own strategy for growing and living within their environment.”
That shouldn’t be surprising given the vast diversity of life on Earth today. But Pierce is still curious to learn more about this early diversity and why the fast-growing Whatcheeria took the road less traveled.
“Is Whatcheeria the exception?” Pierce asked. “Or was this a strategy that many more animals were using?” Because 330-million-year-old fossils are hard to come by, that question is difficult, if not impossible, to answer. Even the Field Museum’s Whatcheeria cornucopia is limited. Pierce and Whitney can gather evidence from synchrotron and microcomputed tomography scanning, which preserve the specimens. But some bones hide their secrets unless sliced and examined under a microscope.
“It’s an extremely valuable source of data for animals that have been dead for a very long time,” said Whitney. “A way to bring them back to life.” And, in so doing, to better understand our own.
Before the 1970s, about 25,000 premature babies died each year from a disorder called respiratory distress syndrome. Patrick Bouvier Kennedy, the infant child of President John F. Kennedy and Jacqueline Kennedy, was one of them. But in the decades since, that number plummeted to just 400 deaths per year.
“So how did this massive decline happen?” Raghuveer Parthasarathy asked last week during a virtual Harvard Science Book Talk presented by the University’s Division of Science, Cabot Science Library, and Harvard Book Store. “The answer,” he continued, “has to do with lungs and liquids.”
Or really, the answer came from merging the science of lungs and liquids through the relatively young field of biophysics, the child of biology and physics and the subject of Parthasarathy’s new book, “So Simple a Beginning: How Four Physical Principles Shape Our Living World.” In it, Parthasarathy explores how universal physical rules shape all life on Earth, answering questions like why elephants need such big bones, how birds fly and bacteria wriggle, and why respiratory distress syndrome has declined so rapidly.
Think of lungs as being sort of like balloons covered in watery mucus, said Parthasarathy, a professor of physics at the University of Oregon. They have the volume of a few tennis balls but the surface area of an entire tennis court, because of all their many airways and tiny expandable air sacs. These sacs, called alveoli, are where oxygen and carbon dioxide are exchanged when we breathe in and out.
All that tissue is steeped in water. And, because of one of the physical rules governing our universe, water molecules hate to be on surfaces — which is the problem. As water molecules all try to get as far away as possible from a pond’s surface, for example, they create enough tension for insects, like water striders, to glide around on top. But as these molecules pull away from the surface of a premature baby’s lungs, the resulting tension can collapse the lungs’ tiny air sacs.
And yet, soap, Parthasarathy said, loves surfaces. To demonstrate, he floated a paper clip on top of a glass of water — a feat made possible because of the water’s surface tension. Parthasarathy then dropped in some dish soap; the surface tension disappeared; and the paper clip sank. So, he said, all the premature infants needed was a little soap — or rather, the soap-like molecule called a surfactant — that all humans make after 26 weeks of fetal development. That solution, Parthasarathy said, was not complicated. Once doctors understood the physics of liquids, they could understand the biology of breathing.
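The underlying relation is the Law of Laplace: for a liquid-lined sphere of radius r and surface tension γ, the pressure tending to collapse it is ΔP = 2γ/r, so smaller sacs are squeezed harder. A minimal sketch with rough textbook values (illustrative numbers, not figures from the talk):

```python
# Law of Laplace for a liquid-lined sphere: delta_P = 2 * gamma / r.
# Rough textbook values; illustrative, not figures from the talk.
r_alveolus = 100e-6            # alveolar radius, ~100 micrometers (m)

surface_tensions = {
    "pure water":      0.072,  # N/m
    "with surfactant": 0.025,  # N/m, roughly a third of water's or less
}

for label, gamma in surface_tensions.items():
    collapse_pressure = 2 * gamma / r_alveolus   # pressure in pascals
    print(f"{label:>15}: ~{collapse_pressure:.0f} Pa pulling the sac shut")
```

Cutting the surface tension roughly threefold cuts the collapsing pressure by the same factor, which is what surfactant therapy accomplishes for lungs too young to make their own.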
“It’s an exquisite dance to find out which details matter and which inspirations explain the problem,” said Philip Nelson, a biophysicist at the University of Pennsylvania, who joined the virtual conversation to discuss Parthasarathy’s new book.
Both lamented the lack of awareness of the field of biophysics. (Parthasarathy said he recently gave a talk called “Biophysics Exists.”) The duo also discussed the four physical principles Parthasarathy chose to include in his book. The first, self-assembly, refers to the innate ability of things — like materials, molecules, or cells — to construct intricate patterns (surfactants are an example of this). Proteins fold themselves into origami shapes. Soap bubbles and the eyes of flies both form beautiful, complex structures all on their own. “Biology is the master of self-assembly,” Parthasarathy said.
The three other principles are: regulatory circuits — or things, like DNA, that take in information and make decisions; predictable randomness — Parthasarathy’s favorite and the reason we can predict someone’s height from analyzing hundreds of genes (but can’t say exactly which genes are responsible); and scaling. Scaling is why elephants need far bigger bones than sparrows. Gravity, a physical law, pushes down on an elephant’s bulk. Big bones help the animal withstand that pressure.
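The scaling argument behind the elephant example fits in one line of algebra (a standard back-of-envelope version, not a quotation from the book): scale an animal up uniformly by a factor L, and its weight W grows as volume, L³, while the load-bearing cross-section A of its bones grows only as area, L²; the stress in the bone therefore rises linearly with size:

\[
\sigma \;=\; \frac{W}{A} \;\propto\; \frac{L^{3}}{L^{2}} \;=\; L
\]

To keep skeletal stress roughly constant, bone thickness must grow faster than simple proportional scaling would give, which is why an elephant is not just a scaled-up sparrow.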
These four principles might explain why life, from microscopic bacteria to bulky elephants, takes such diverse forms. But humans can also manipulate these rules to build tools. For example, scientists can now perform gene sequencing with an unraveled string of DNA.
“Sounds like total science fiction,” said Nelson, “but now you can buy it.” And, he continued, “A lot of us are alive today because of a vaccine from a category that didn’t exist a few years ago. And a lot of those insights came from nature.”
Parthasarathy, who illustrated his book with his own watercolor paintings of “exquisite” biology, also fielded questions from the virtual audience on, for example, why he chose only four principles (more felt like “too much of a hodgepodge,” he said), whether evolution could be another unifying principle (it’s more of a pathway to navigate the rules, he said), and whether free will exists if physical laws govern all life. That question used to keep him up at night, Parthasarathy said. “I’m kind of OK with everything in our consciousness having to do with physical property. In other words, not having free will.”
Another audience member asked whether, if biophysics can explain so much of biology, life will still hold any mystery a decade from now. “Yes,” Parthasarathy said. We still don’t understand how organs grow to different sizes, how embryos form, or why both our arms end up the same length.
“We have a lot of questions like that,” he said. “We have a lot to keep us busy.”
Climate change’s toll on human health is becoming more widely appreciated as impacts mount, including recent disasters such as floods in Pakistan and severe drought in the Horn of Africa, according to Harvard physicians who attended recent climate talks in Egypt. They noted an increased presence of health care experts at this year’s event, calls for more studies to prompt policy changes, and a move by health systems to clean up their own emissions.
“We know we can’t have healthy people without a healthy planet,” said Kimberly Humphrey, an emergency room physician and FXB Climate Change and Human Health Fellow at the Harvard Francois-Xavier Bagnoud Center for Health and Human Rights. “The thing that I’m probably most excited about over the next 12 months is cross-sectoral collaborations — how health is being integrated across all different areas — and the solutions that we will be able to come up with working with others in this space who don’t come from the health sector. It’s just incredibly promising.”
Panelists shared their experiences at COP27, the 27th Conference of the Parties to the United Nations’ Framework Convention on Climate Change, which occurred earlier this month. Attendees hailed progress made toward creating a fund financed by the world’s industrialized nations to help the most vulnerable countries deal with climate change’s impacts. But the event has also drawn criticism for a lack of progress toward more ambitious national pledges to reduce greenhouse gas emissions.
The physicians said they were pleasantly surprised at the widespread acknowledgement that health is an important consideration when talking about climate change, but said that acknowledgement is not universal and hasn’t yet translated into a more prominent role for health experts. Research can help, Wiskel said, by generating specific data on impacts that can provide the foundation for needed policy changes.
Just walking around the COP site, in the desert community of Sharm El Sheikh, Egypt, highlighted some of the challenges of climate change, including extreme heat — a major issue in the years to come — and the disparities between rich and poor, as seen in the differences between the air-conditioned tented pavilions of different countries. Wiskel cited the contrast between the basic pavilion of developing Niger and, right next to it, the well-appointed site of the oil-rich United Arab Emirates, host of next year’s COP.
Also visible was a large and energetic youth contingent, which continues to push for greater action to address climate change, panelists said. At the opposite end of the spectrum, Humphrey pointed out, were the 660 lobbyists reportedly in attendance on behalf of fossil fuel interests, which were otherwise largely invisible.
A key development during the event, Bernstein said, was the announcement by the U.S. Department of Health and Human Services that more than 100 U.S. health care organizations have pledged to reduce emissions in the coming years. The organizations signed the White House/HHS Health Sector Climate Pledge to cut emissions 50 percent by 2030 and reach net zero by 2050. HHS also announced an agreement with England’s National Health Service to collaborate on procurement and supply chain reforms.
While voluntary, the steps can reduce the considerable contribution that health care makes to planet-warming emissions, panelists said. Health care and its supply chain account for 8.5 percent of all greenhouse gas emissions in the U.S., for example, Bernstein said. Globally, Humphrey said, if the health care industry were a nation, it would be the world’s fifth-largest emitter.
“We’ve all heard the term ‘supply chains’ more in the last couple of years than in the whole previous part of our lives, unless you’re in economics,” Dresser said. “And that’s a really important thing to be learning about, because as a health care system, this is a huge piece of our carbon emissions, and I think the solutions to this have to be large-scale.”
In addition to these steps, Dresser said, it’s critical for those in health care to not just continue to offer the opinions of experts but to maintain focus on those being impacted by warming and keep bringing their personal stories to light at high-level gatherings like COP27, to personalize what’s at stake.
“I think we also need to be making an effort to elevate the voices of people, communities, nations that have been impacted by climate change,” Dresser said. “Those individual stories are the single most powerful message we have about why it matters that we take steps to address ongoing carbon emissions and set up such structures and policies that will keep people safe.”
Climate negotiators from around the world recently wrapped up talks in Egypt that were by turns frustrating and hopeful: frustrating because they did little to accelerate the slow pace of action to reduce carbon emissions, and hopeful because of a reawakened dialogue between the world’s biggest emitters and movement to address climate-related damage to the world’s most vulnerable nations. The Gazette spoke with Robert Stavins, the Harvard Kennedy School’s A.J. Meyer Professor of Energy & Economic Development, director of the Harvard Project on Climate Agreements, and a regular attendee at the annual summits, to better understand successes and failures at the 27th Conference of the Parties to the United Nations Framework Convention on Climate Change.
Q&A
Robert Stavins
GAZETTE: What stands out from COP27, this year’s climate change conference?
STAVINS: Two things stand out. One has not been talked about much, and I think it is the most important development for long-term climate policy; the second is the loss and damage issue. The first is that ever since Donald Trump was elected president in November 2016, a major question has been: When would the United States and China return to what had been a highly effective co-leadership during the Obama years? The Paris agreement would not have been achieved had it not been for the cooperation between China and the United States. That cooperation broke off under Trump and then hadn’t returned under Biden because of disagreements over international trade, human rights, the South China Sea, Hong Kong, Taiwan, and other non-climate issues.
At COP27, we got the beginning of the answer, although in surprising fashion. The most important development during COP27 took place 6,000 miles away, in Bali, Indonesia, on Nov. 14. U.S. President Joe Biden and China President Xi Jinping met on the sidelines of the G-20 summit. They shook hands and engaged in a three-hour conversation in which they signaled their return to the cooperative stance that had been so crucial for international progress on climate change. That quickly trickled down to the heads of the respective negotiating teams at COP27, John Kerry for the United States and Xie Zhenhua of China. They’re friends but had not had discussions because of the problems between the two governments. Once the meeting in Bali took place, statements came from both John Kerry and Xie Zhenhua indicating that the two countries plan to resume cooperation, and that they had met and talked several times.
GAZETTE: What makes these two countries so potentially powerful when they work together? Is it just size and global influence?
STAVINS: It’s partially size and global influence, but more specifically, it’s that they’re the two largest emitters in terms of greenhouse gases. Also, the United States has traditionally been the leader of the Western world on many topics, and China — through what is called the “G-77 plus China,” which is 134 developing countries — plays a leadership role in the developing world. These are the two most important constituencies for climate negotiations, and they’re the respective leaders.
GAZETTE: Another theme from COP27 is the failure to increase pledges for emissions reductions by different nations, which was expected after Glasgow. Is that something that could happen because of this U.S.-China cooperation?
STAVINS: Essentially, on every element where there is desired progress, which is feasible and reasonable, China and the U.S. can play a crucial role. They were the first ones out of the gate with their NDCs [Nationally Determined Contributions to emission reductions under the Paris climate agreement] that became a wind at the back of the other delegations. It’s one thing for the European Union to get out in front because they always do. That doesn’t pull a lot of other countries along. It’s one thing for Costa Rica to get out in front among developing countries, but that doesn’t pull a lot of countries along. The United States and China bring a multitude of other countries along when they cooperate and play a leadership role.
GAZETTE: What can you tell me about the new fund for loss and damage?
STAVINS: The most dramatic and contentious decision within the halls of COP27 — by the negotiators from 195 countries — was the establishment of a fund for so-called loss and damage. This is an issue that’s been kicked down the road for a long time. It was first floated in 1991, when Vanuatu, a small island nation in the Pacific, suggested the creation of such a fund to pay for the consequences of rising sea levels, and for 30 years action has been delayed. At COP27, developing countries pushed to establish an explicit fund for this. China came out in favor of an explicit fund for loss and damage, though they also said they weren’t going to put any money into it. The European Union then said they supported it — that was very important — and then a few other developed countries came out in support of it. Before the COP started, John Kerry had said, “We do not support the creation of such a fund.” Then, in the second week of the COP, in a rather dramatic announcement, he said that the United States was reversing its position.
GAZETTE: How large will the fund be?
STAVINS: On the demand side, such a fund could eventually amount to trillions of dollars per year. The World Bank estimate of the damages caused by this year’s floods in Pakistan alone is $40 billion. So, if you picture climate change increasing in intensity over time in countries around the world, you can see why it could easily amount to hundreds of billions — indeed, trillions — per year. On the supply side, though, there are very few quantitative pledges thus far: tens of millions of dollars — trivial compared to what the demand is going to be. So, to me, the question becomes: Is the new loss and damage fund an empty shell?
China’s announced position at COP27 was that it supports the creation of the loss and damage fund, but that as a “developing country” it will not be responsible for any contributions to the fund. You’ll find a lot of quotes in which they’re self-described as a developing country, and what they’re referring to is the definitions of “Annex I” and “non-Annex I” countries in 1992, under the United Nations Framework Convention on Climate Change. Then, China’s per capita GDP was less than $400 per year. They don’t talk about the fact that per capita GDP in China has grown by 3,330 percent since then.
There’s actually some convergence on the loss and damage fund issue between China and the United States, though not in regard to China’s self-proclaimed exemption from financial contributor status — the U.S. has been outspoken that China should be a contributor. China is not a poor, developing country any longer; it is a major contributor to the atmospheric “stock” of greenhouse gases — damages are a function of the stock, not of current emissions — and the largest contributor to annual emissions.
The reason I say there’s convergence is that the United States has a story that puts the United States in a similar place: “We support the loss and damage fund, but due to the new Republican majority in the House of Representatives, it is impossible for us to make any commitment of new funding.” That’s not explicitly stated, but that’s essentially the U.S.’ position. I think that’s why Kerry could make this flip, because there’s no way that there’s going to be any money.
GAZETTE: With the COP over, how do you feel we’re doing? Predictions from the U.N.’s Intergovernmental Panel on Climate Change (IPCC) are getting more dire. People talk about timelines getting shorter. Keeping warming to 1.5 degrees Celsius seems less and less likely.
STAVINS: There have been many statements of disappointment regarding this COP, because the closing statement did not fully embrace the 1.5 degrees C target. But that would have been a nonbinding resolution versus the 2 degrees centigrade target, which is a commitment in the Paris agreement. I may be a glass-half-full guy, but I remember when the IPCC’s business-as-usual estimates of temperature change for this century were as high as 7 degrees centigrade. With the Paris NDCs, they were 3 degrees centigrade. And now, with the new commitments made by countries — including the U.S. — plus the Kigali amendments to the Montreal Protocol, we’re talking about 2.5 degrees C. I recognize that 2.5 degrees is a long way from 1.5, but it’s also a long way from where we were.
GAZETTE: Do you feel relatively certain that the trajectory will continue to improve as we go forward?
STAVINS: I wouldn’t say “relatively certain,” but I’m certainly hopeful. It’s conditional upon political developments in China, in the United States, in the European Union — the other major contributor — but we know where they’re going. China is the big question mark. We have to wait and see. The United States is also a question mark because we flip back and forth between Democratic and Republican administrations and Congresses, which drives the other countries of the world crazy. Russia is also a very large emitter, but has almost removed itself from the discussions, because it has other things to worry about.