Innovative study produces first experimental evidence linking math anxiety, math avoidance

A new study from UChicago psychologists discovered that math-anxious people will avoid more difficult math—even when harder problems promise more money. Credit: Shutterstock.com

NOVEMBER 21, 2019

by Jack Wang, University of Chicago

Math anxiety is far from uncommon, but too often, those who dread the subject simply avoid it. Research from the University of Chicago offers new evidence for the link between math anxiety and avoidance—as well as possible paths toward breaking that connection.

UChicago psychologists found that people who are math-anxious often steer away from more difficult math problems, even when solving them leads to much larger monetary rewards.

“You can be motivated to do well in something, but still make suboptimal decisions in subtle ways,” said UChicago doctoral student Jalisha Jenifer, who led the study with postdoctoral scholar Kyoung Whan Choe.

“Asking people to choose between easy, low-reward problems and hard, high-reward problems was an excellent way for us to see how individuals with math anxiety might make suboptimal decisions to avoid math in their everyday lives,” Choe said.

Published Nov. 20 in Science Advances, the innovative study produced the first experimental evidence that math anxiety can predict math avoidance—a connection that was widely suggested but not verified by prior research.

Studying nearly 500 adults aged 18-35 through a computer program called the Choose-and-Solve Task (CAST), UChicago psychologists gave participants a choice between math and word problems labeled “easy” and “hard.” The easy problems were always worth two cents, while the hard problems were worth up to six cents. They also informed participants that the computer-adaptive task would adjust to their individual abilities, enabling them to solve about 70% of the hard problems.

Although participants attempted “hard” word problems when promised higher monetary rewards, they rarely chose to do the same for math problems—that is, they behaved as if the costs of harder math outweighed its benefits, even when that wasn’t the case.
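To see why the choices were suboptimal, consider the expected payoffs implied by the numbers above. The short Python sketch below works through that arithmetic with illustrative assumptions (a near-perfect solve rate on easy problems, the 70% adaptive target on hard ones); the study’s actual task and payoff schedule are available on the Open Science Framework.

```python
# Illustrative expected-value comparison for a CAST-style choice.
# The numbers follow the article's description: easy trials pay 2 cents,
# hard trials pay up to 6 cents, and difficulty is adapted so that roughly
# 70% of hard problems are solved. These values are assumptions for
# illustration, not the study's actual payoff schedule.

EASY_REWARD = 2       # cents per solved easy problem
HARD_REWARD = 6       # cents, the maximum hard-trial payoff mentioned above
P_SOLVE_EASY = 0.95   # assumed near-ceiling accuracy on easy problems
P_SOLVE_HARD = 0.70   # the adaptive target accuracy described in the article

expected_easy = P_SOLVE_EASY * EASY_REWARD   # about 1.9 cents per trial
expected_hard = P_SOLVE_HARD * HARD_REWARD   # about 4.2 cents per trial

print(f"Expected value, easy choice: {expected_easy:.1f} cents")
print(f"Expected value, hard choice: {expected_hard:.1f} cents")
# Under these assumptions the hard option pays more than twice as much on
# average, so consistently choosing the easy math trials is suboptimal.
```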

These results suggest that fear and apprehension may hamper many people trying to improve their math skills. For example, someone seeking a good grade in a math class may spend too much time studying problems that don’t challenge them.

The findings also indicate that math anxiety could affect everyday decisions such as calculating tips or change. Previous studies have suggested an anxiety-avoidance connection through the correlation of anxiety and the avoidance of math courses or STEM-related careers, rather than examining how people react directly to math problems.

The scholars hope that the CAST can be used with teens and children in order to help identify students who are likely to avoid putting effort into math. They are also conducting a follow-up study using functional MRI to examine participants who avoid hard math in the same task—which could reveal the brain mechanisms behind that suboptimal decision-making.

“Developing the CAST was only the first step toward investigating what drives people’s decisions to avoid math,” said Choe, a postdoc in UChicago’s Department of Psychology and at the Mansueto Institute for Urban Innovation.

“People often say that being anxious about math is just a byproduct of being bad at it. Our research shows that isn’t true,” said co-author Sian Beilock, a cognitive scientist who studies math anxiety and is president of Barnard College. “Even when math-anxious people are capable of doing math, they avoid it, which means that educators and parents have to think about how we can lower math anxiety in our kids or we are going to miss students who are capable of success in math and science and just stay away from it.”

Committed to making scientific research more accessible, the authors have made the task, data and analysis code from their study freely available on the Open Science Framework.

“This research is a great demonstration of how unique insights can be achieved by combining cutting-edge behavioral paradigms with novel psychological theories,” said co-author Marc Berman, a UChicago cognitive neuroscientist who studies cognitive effort through brain imaging and computational models. “Similar paradigms can also be employed to study other effort-based decision-making tasks, such as the choice to exercise or eat healthier.”

The researchers also discovered that not all participants who reported feeling math anxiety avoided difficult math problems. They hope to explore the reasons behind such behavior in future studies.

“We thought everyone who was anxious would avoid math, but math-anxious individuals don’t appear to be a monolithic group,” Jenifer said. “We are seeing individual differences in how math anxiety influences avoidance, which is an exciting finding that we hope to investigate further in later research.”

Added Choe: “By understanding how some math-anxious individuals could overcome their tendency to avoid math, we will be able to turn the vicious cycle of math anxiety into a virtuous cycle of math success.”

Telescopes and satellites combine to map entire planet’s ground movement


NOVEMBER 21, 2019

by Curtin University

Curtin University research has revealed how pairing satellite images with an existing global network of radio telescopes can be used to paint a previously unseen whole-of-planet picture of the geological processes that shape the Earth’s crust.

The research, published in Geophysical Research Letters, showed that satellite images capturing the movement of the Earth’s surface on different continents as a result of geological and man-made forces can be integrated using radio telescopes to deliver a global-scale view and new understanding of these processes.

Lead researcher Dr. Amy Parker, an ARC Research Fellow from Curtin’s School of Earth and Planetary Sciences, said the global network of radio telescopes was shown to be the key to integrating satellite measurements of ground movements on a global scale.

“The height of the Earth’s surface is constantly changed by geological forces like earthquakes and the effects of human activities, such as mining or ground water extraction,” Dr. Parker said.

“Increasing numbers of scientists are measuring these changes using the global coverage of images from radar satellites; however, it has not previously been possible to link together ground movements measured on different continents, because they are measured relative to an arbitrary point and not a globally consistent reference frame.

“This is the first time we have thought about how to integrate these measurements on a global scale, and the potential benefits of this approach in terms of our understanding of the processes that shape our planet’s crust are significant.”
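The underlying idea is that a satellite-derived map of ground motion is only relative until at least one point in it has a known motion in a global frame, which is what a co-located radio telescope can supply. The toy Python sketch below illustrates that tie-in with invented numbers; it is a simplification, not the processing used in the study.

```python
import numpy as np

# Toy illustration of tying a relative displacement map to a global frame.
# All values here are hypothetical and chosen only to show the idea.

# Satellite-derived vertical displacements (mm), relative to an arbitrary
# reference point chosen within the image.
relative_map = np.array([[ 0.0,  1.2, -0.4],
                         [ 2.1,  0.8,  0.3],
                         [-1.5,  0.0,  1.9]])

# Suppose a radio telescope co-located with pixel (1, 1) has a vertical
# motion of -3.0 mm in a globally consistent reference frame.
telescope_pixel = (1, 1)
telescope_global_motion = -3.0

# The offset between the global and relative values at the telescope
# lets us shift the entire map into the global frame.
offset = telescope_global_motion - relative_map[telescope_pixel]
global_map = relative_map + offset

print(global_map)
# Maps from different continents, each tied to a telescope in the same
# global network, then become directly comparable.
```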

Dr. Parker said the study, which was done in collaboration with researchers from the University of Tasmania and Chalmers University of Technology in Sweden, demonstrated that the already existing global network of radio telescopes could be the missing link to integrate these satellite measurements on a worldwide scale.

“By harnessing the power of these radio telescopes, we hope to shed new light on the processes that shape the Earth’s crust including a complete, consistent assessment of the contribution of land displacements to relative sea-level rise,” Dr. Parker said.

Study on surface damage to vehicles traveling at hypersonic speeds

Carbon atoms are represented in teal in the smooth graphene (a) and silicon and oxygen atoms are represented in yellow and red in quartz (b), respectively. Credit: University of Illinois Department of Aerospace Engineering

NOVEMBER 19, 2019

by University of Illinois at Urbana-Champaign

Vehicles moving at hypersonic speeds are bombarded with ice crystals and dust particles in the surrounding atmosphere, making the surface material vulnerable to damage such as erosion and sputtering with each tiny collision. Researchers at the University of Illinois at Urbana-Champaign studied this interaction one molecule at a time to understand the processes, then scaled up the data to make it compatible with simulations that require a larger scale.

Doctoral student Neil Mehta, working with Prof. Deborah Levin, looked at two different materials that are commonly used on the exterior surfaces of slender bodies—a smooth graphene and a rougher quartz. In the model, these materials were attacked by aggregates composed of argon atoms and silicon and oxygen atoms to simulate ice and dust particles hitting the two surface materials. These molecular dynamics studies showed what stuck to the surfaces, the damage done, and the length of time it took to cause the damage—all at the scale of a single angstrom, roughly the diameter of an atom.

Why so small? Mehta said it’s important to start by looking at “first principles” to thoroughly understand the erosive effects of ice and silica on graphene and quartz surfaces. But those who simulate fluid dynamics work at length scales ranging from micrometers to centimeters, so the physics of the MD models had to be scaled up. This work is the first to do so in this application.

“Unfortunately, you can’t just take the results from this very tiny angstrom level and use it in aerospace engineering reentry vehicle calculations,” Mehta said. “You can’t directly jump from molecular dynamics to computational fluid dynamics. It takes several more steps. Applying the rigor of kinetic Monte Carlo techniques, we took details at this very tiny scale and analyzed the dominant trends so that larger simulation techniques can use them in modeling programs that simulate the evolution of surface processes that occur in hypersonic flight, such as erosion, sputtering, pitting.

“At what rate will these processes happen and with what likelihood will these types of damages happen were the key features that no other Kinetic Monte Carlo or scale bridging has used before,” he said.

According to Mehta, the work is unique because it incorporated experimental observations of gas-surface interactions and molecular dynamics simulations to create a “first principles” rule that can be applied to all of these surfaces.

“For example, ice has a tendency to form flakes, ice crystals. It creates a fractal pattern because ice likes to stick to another ice, so it’s more likely that the water vapor will condense next to an ice particle that is already on the surface and create a trellis-like feature. Whereas sand just scatters. It doesn’t have any preference. So one rule is that ice likes to stick to other ice.

“Similarly, for degradation, the rule on graphene is that damage is more likely to occur next to pre-existing damage,” Mehta said. “There are several rules, depending on what material you’re using, that you can actually study what happens from an atomic level to a micrometer landscape, then use the results in computational fluid dynamics or any other large-scale simulation.”
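A rough sense of how such neighbor-preference rules feed a kinetic Monte Carlo model is given by the Python sketch below, which deposits particles on a small lattice and makes sticking more likely next to an already occupied site. The probabilities and lattice size are illustrative assumptions, not values from the paper.

```python
import random

# Simplified kinetic Monte Carlo-style sketch of a neighbor-preference rule:
# a particle is more likely to stick next to an already occupied site
# (the "ice sticks to ice" / "damage next to damage" idea described above).

random.seed(0)
N = 20                       # lattice is N x N
P_BASE = 0.05                # sticking probability on a bare site (assumed)
P_NEIGHBOR = 0.60            # sticking probability next to an occupied site (assumed)
lattice = [[0] * N for _ in range(N)]

def has_occupied_neighbor(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < N and 0 <= nj < N and lattice[ni][nj]:
            return True
    return False

for _ in range(5000):        # attempted impacts
    i, j = random.randrange(N), random.randrange(N)
    if lattice[i][j]:
        continue             # site already occupied
    p = P_NEIGHBOR if has_occupied_neighbor(i, j) else P_BASE
    if random.random() < p:
        lattice[i][j] = 1    # particle sticks / damage forms here

coverage = sum(map(sum, lattice)) / (N * N)
print(f"Fraction of surface covered: {coverage:.2f}")
# Clusters rather than a uniform scatter emerge, mimicking the trellis-like
# ice features and localized damage growth described above.
```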

One application for this work is research on how to design thermal protection systems for slender vehicles and small satellites operating at altitudes near 100 km.

The study, “Multiscale modeling of damaged surface topology in a hypersonic boundary,” was written by Neil A. Mehta and Deborah A. Levin. It is published in the Journal of Chemical Physics.

Technique identifies T cells primed for certain allergies or infections

Researchers develop a method to isolate and sequence the RNA of T cells that react to a specific target.

MIT researchers have developed a method to isolate T cells that bind to different targets and then sequence their RNA.
Image: SciStories LLC

Anne Trafton | MIT News Office
November 19, 2019

When your immune system is exposed to a vaccine, an allergen, or an infectious microbe, subsets of T cells that can recognize a foreign intruder leap into action. Some of these T cells are primed to kill infected cells, while others serve as memory cells that circulate throughout the body, keeping watch in case the invader reappears.

MIT researchers have now devised a way to identify T cells that share a particular target, as part of a process called high-throughput single-cell RNA sequencing. This kind of profiling can reveal the unique functions of those T cells by determining which genes they turn on at a given time. In a new study, the researchers used this technique to identify T cells that produce the inflammation seen in patients with peanut allergies.

In work that is now underway, the researchers are using this method to study how patients’ T cells respond to oral immunotherapy for peanut allergies, which could help them determine whether the therapy will work for a particular patient. Such studies could also help guide researchers in developing and testing new treatments.

“Food allergies affect about 5 percent of the population, and there’s not really a clear clinical intervention other than avoidance, which can cause a lot of stress for families and for the patients themselves,” says J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research. “Understanding the underlying biology of what drives these reactions is still a really critical question.”

Love and Alex K. Shalek, who is the Pfizer-Laubach Career Development Associate Professor at MIT, an associate professor of chemistry, a core member of MIT’s Institute for Medical Engineering and Science (IMES), and an extramural member of the Koch Institute, are the senior authors of the study, which appears today in Nature Immunology. The lead authors of the paper are graduate student Ang Andy Tu and former postdoc Todd Gierahn.

Extracting information

The researchers’ new method builds on their previous work developing techniques for rapidly performing single-cell RNA sequencing on large populations of cells. By sequencing messenger RNA, scientists can discover which genes are being expressed at a given time, giving them insight into individual cells’ functions.

Performing RNA sequencing on immune cells, such as T cells, is of great interest because T cells have so many different roles in the immune response. However, previous sequencing studies could not identify populations of T cells that respond to a particular target, or antigen, which is determined by the sequence of the T cell receptor (TCR). That’s because single-cell RNA sequencing usually tags and sequences only one end of each RNA molecule, and most of the variation in T cell receptor genes is found at the opposite end of the molecule, which doesn’t get sequenced. 

“For a long time, people have been describing T cells and their transcriptome with this method, but without information about what kind of T cell receptor the cells actually have,” Tu says. “When this project started, we were thinking about how we could try to recover that information from these libraries in a way that doesn’t obscure the single-cell resolution of these datasets, and doesn’t require us to dramatically change our sequencing workflow and platform.”

In a single T cell, RNA that encodes T cell receptors makes up less than 1 percent of the cell’s total RNA, so the MIT team came up with a way to amplify those specific RNA molecules and then pull them out of the total sample so that they could be fully sequenced. Each RNA molecule is tagged with a barcode to reveal which cell it came from, so the researchers could match up the T cells’ targets with their patterns of RNA expression. This allows them to determine which genes are active in populations of T cells that target specific antigens.
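Conceptually, the final bookkeeping step is a join on the cell barcode: the enriched receptor reads and the expression profiles from the same cell carry the same barcode and can be matched up. The Python sketch below illustrates that join with invented barcodes and gene counts; the study itself uses a full single-cell sequencing pipeline.

```python
# Simplified sketch of matching enriched TCR reads to expression profiles
# by shared cell barcodes. Barcodes, clonotype labels, and gene counts are
# invented for illustration only.

tcr_by_barcode = {
    "AAACGT": "clonotype_1",   # cells whose receptor recognizes target A
    "TTGCCA": "clonotype_1",
    "GGATTC": "clonotype_2",   # a different receptor sequence
}

expression_by_barcode = {
    "AAACGT": {"IL4": 12, "IFNG": 0},
    "TTGCCA": {"IL4": 9,  "IFNG": 1},
    "GGATTC": {"IL4": 0,  "IFNG": 20},
}

# Join the two tables on the barcode so each cell's receptor identity
# sits next to its gene-expression profile.
for barcode, clonotype in tcr_by_barcode.items():
    profile = expression_by_barcode.get(barcode)
    if profile is not None:
        print(barcode, clonotype, profile)
# Grouping cells by clonotype then reveals which genes are active in the
# T cells that share a particular target.
```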

“To put the function of T cells into context, you have to understand what it is they’re trying to recognize,” Shalek says. “This method lets you take existing single-cell RNA sequencing libraries and pull out relevant sequences you might want to characterize. At its core, the approach is a straightforward strategy for extracting some of the information that’s hidden inside of genome-wide expression profiling data.”

Another advantage of this technique is that it doesn’t require expensive chemicals, relies on equipment that many labs already have, and can be applied to many previously processed samples, the researchers say.

Analyzing allergies

In the Nature Immunology paper, the researchers demonstrated that they could use this technique to pick out mouse T cells that were active against human papilloma virus, after the mice had been vaccinated against the virus. They found that even though all of these T cells reacted to the virus, the cells had different TCRs and appeared to be in different stages of development — some were very activated for killing infected cells, while others were focused on growing and dividing.

The researchers then analyzed T cells taken from four patients with peanut allergies. After exposing the cells to peanut allergens, they were able to identify T cells that were active against those allergens. They also showed which subsets of T cells were the most active, and found some that were producing the inflammatory cytokines that are usually associated with allergic reactions.

“We can now start to stratify the data to reveal what are the most important cells, which we were not able to identify before with RNA sequencing alone,” Tu says.

Love’s lab is now working with researchers at Massachusetts General Hospital to use this technique to track the immune responses of people undergoing oral immunotherapy for peanut allergies — a technique that involves consuming small amounts of the allergen, allowing the immune system to build up tolerance to it.

In clinical trials, this technique has been shown to work in some but not all patients. The MIT/MGH team hopes that their study will help identify factors that could be used to predict which patients will respond best to the treatment.

“One would certainly like to have a better sense of whether an intervention is going to be successful or not, as early as possible,” Love says.

This strategy could also be used to help develop and monitor immunotherapy treatments for cancer, such as CAR-T cell therapy, which involves programming a patient’s own T cells to target a tumor. Shalek’s lab is also actively applying this technique with collaborators at the Ragon Institute of MGH, MIT and Harvard to identify T cells that are involved in fighting infections such as HIV and tuberculosis.

The research was funded by the Koch Institute Support (core) Grant from the National Institutes of Health, the Koch Institute Dana-Farber/Harvard Cancer Center Bridge Project, the Food Allergy Science Initiative at the Broad Institute of MIT and Harvard, the Arnold and Mabel Beckman Foundation, a Searle Scholar Award, a Sloan Research Fellowship in Chemistry, the Pew-Stewart Scholars program, and the National Institutes of Health.

Phoenix cluster is cooling faster than expected

With increasingly advanced data, Michael McDonald and colleagues study a galaxy cluster bursting with new stars.

With increasing data quality (shown progressively, left to right, in images from 2012, 2015, and 2019) Assistant Professor Michael McDonald and colleagues can conclusively show that the black hole in the Phoenix galaxy cluster is not preventing star formation.
Photos (left to right): Magellan/IMACS/M.McDonald; Magellan/Megacam/M.McDonald; ESA and NASA/Hubble/M.McDonald

Fernanda Ferreira | School of Science | MIT
November 20, 2019

From the very beginning, it was clear that the Phoenix cluster was different from other galaxy clusters. When assistant professor of physics Michael McDonald looked at the first image of the Phoenix cluster taken by the Magellan Telescope in Chile, he saw an unexpected hazy circle of blue.

Galaxy clusters like Phoenix are, as the name suggests, a cluster of hundreds or even thousands of galaxies held together by gravity and permeated with dark matter and hot gas. When the hot gas cools, star formation happens. Given the amount of hot gas in galaxy clusters, astronomers expected to find large nurseries of young stars. Instead, they found older stars, which usually glow red, and a black hole at the center of the cluster pumping out energy, keeping the gas too hot for explosive star formation.

Most clusters appear the same when observed through a telescope. “When we look at the center of clusters, we see galaxies that are spherical or football-shaped and red,” says McDonald, who is also a researcher at the MIT Kavli Institute for Astrophysics and Space Research (MKI).

The blue light McDonald saw in Phoenix, however, is a telltale sign of young stars, which burn hotter and brighter. Confirming that the blue light was the result of new star formation, and figuring out what makes the Phoenix cluster unique, would take advances in data quality and access to three powerful telescopes.

Telescope trifecta

McDonald hypothesized that the hazy blue light at the center of the Phoenix cluster was due to millions of newly formed stars, but there was another possible explanation. “It could be that there is a single blue point that is smeared because we don’t have the resolution to see it,” he explains. That blue point of light could be a quasar, a type of black hole that expels matter, heating the galaxy’s gas.

In 2015, three years after that first image was taken, McDonald and his colleagues published new, higher-resolution images of Phoenix collected from more powerful telescopes. “It’s like putting your glasses on,” McDonald says. This allowed them to see that the blue point at the center of Phoenix was not really a single point, but rather a group of blue filaments. “That’s what you would expect if the gas is condensing,” he explains. Hot gas doesn’t cool uniformly in a galaxy; some parts will cool more quickly, forming highways of cool gas where new stars can rapidly form.

Now, four years after the second set of images was taken, McDonald and his collaborators have released an even more detailed image of the Phoenix cluster, taken by NASA’s Hubble Space Telescope, which is orbiting the Earth. The image was published in a recent issue of The Astronomical Journal and shows the central galaxy in greater detail, with wispy filaments where new stars are concentrated. McDonald’s coauthors include MKI research scientist Matthew Bayliss, assistant professor of physics Erin Kara, and physics graduate student Taweewat Somboonpanyakul.

“The Hubble data increased — by a factor of 10 — the detail we can see,” says McDonald. But the Hubble Space Telescope is just one of the telescopes used. The team also collected X-ray data with NASA’s Chandra X-ray Observatory, which orbits Earth, and radio data using the National Science Foundation’s Karl Jansky Very Large Array (VLA) in New Mexico. It was this trifecta of telescopes that allowed them to figure out why Phoenix is singular among the hundreds of known galaxy clusters.

Unlike other galaxy clusters, the hot gas in the Phoenix cluster is cooling rapidly, nearly at the expected rate for gas not affected by bursts of energy from a black hole. McDonald compares galaxy clusters to a cup of coffee. “It’s a 10-million-degree cup of coffee, but even a 10-million-degree cup of coffee is going to cool,” he explains. As the gas cools, star formation happens more rapidly. Most galaxy clusters, however, behave more like a cup of coffee on a warming plate; no matter how much time passes, the cup of coffee will never be fully cool. “The warmer is the black hole in the center of the galaxy, and every cluster observed has a warm interior.”

Phoenix is different. The black hole at its center isn’t able to keep the hot gas in the galaxy from fully cooling. It’s as if you had placed your 10-million-degree cup of coffee on the counter rather than the warmer, allowing it to cool enough to form millions of young, shimmery blue stars. “This rapid cooling of the intracluster gas is a phenomenon that was predicted nearly 40 years ago, but is apparently so uncommon that it has only now been witnessed for the first time,” says McDonald.

What makes this phenomenon so uncommon is still up in the air. “One theory is that every cluster will go through a phase like this for a short period of time,” making this phenomenon equivalent to a star growth spurt, says McDonald. But he isn’t sure about this explanation. “Even if stars stopped forming now, the blue stars hang around for 50 to 100 million years and we would see evidence that this had happened in other clusters,” he explains. Most likely, McDonald believes, this is a phenomenon that is not only rare, but also needs just the right conditions to occur. When it does occur, it changes our understanding of astrophysical phenomena.

Time is money

For McDonald, these results, particularly the progression of optical images of the Phoenix cluster, are a perfect representation of how science progresses. “We start with this really low-quality data and try to imagine what could give us that image,” he explains. The hard part is getting follow-up data.

“The reason there’s a gap between 2015 and 2019 is because we had to convince three different panels of three different telescopes to invest a lot of time,” says McDonald. Their team got 167 hours of Chandra — among the largest projects they approved in 2016 — 12 orbits (20 hours) of Hubble, and for the VLA not only were they given 27.5 hours on the radio array, but the team also convinced the VLA’s panel to point the telescope at the horizon, which brings the telescope’s dishes precipitously close to scraping the ground. Convincing panels is like convincing grant reviewers to fund your research, with one difference: “Instead of awarding money, they’re awarding telescope time,” says McDonald.

Time at all three of these telescopes is a hot commodity that researchers can’t afford to spend on projects that are less than a sure bet. “So you do a pilot study, and if the results are encouraging, you invest a little bit more resources and a little more time,” says McDonald. “If the results still look good, now I’ll dump all my resources into this because I know it’s a good avenue, and then you get the big payoff.”

McDonald knows exactly how he’s going to decorate his office at MKI now that the third Phoenix cluster paper has come out. “My plan is to have those three pictures blown up and hung right there on my wall,” he says, gesturing to a corner of his office. “Because just these three images tell the story of how we do science.”

How to observe a ‘black hole symphony’ using gravitational wave astronomy

A snapshot of the 3D gravitational waveform from a general relativistic simulation of binary black holes. Gravitational waves from such binary mergers are routinely observed by LIGO. With space missions such as LISA, the evolution of these binaries can be monitored years in advance, allowing multi-frequency constraints on astrophysical formations and tests of general relativity. Credit: Jani, K., Kinsey, M., Clark, M. Center for Relativistic Astrophysics, Georgia Institute of Technology.

NOVEMBER 18, 2019

by Vanderbilt University

Shrouded in mystery since their discovery, black holes continue to be among the most mind-boggling enigmas in our universe.

In recent years, many researchers have made strides in understanding black holes using observational astronomy and an emerging field known as gravitational wave astronomy, which directly measures the gravitational waves, first hypothesized by Albert Einstein, that black holes emit.

Through these findings on black hole gravitational waves, first observed in 2015 by the Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors in Louisiana and Washington, researchers have learned exciting details about these invisible objects and developed theories and projections on everything from their sizes to their physical properties.

Still, limitations in LIGO and other observation technologies have kept scientists from grasping a more complete picture of black holes, and one of the largest gaps in knowledge concerns a certain type of black hole: those of intermediate mass, which fall somewhere between supermassive black holes (at least a million times the mass of our sun) and stellar-mass black holes (smaller, though still 5 to 50 times the mass of our sun).

That could soon change thanks to new research out of Vanderbilt on what’s next for gravitational wave astronomy. The study, led by Vanderbilt astrophysicist Karan Jani and featured today as a letter in Nature Astronomy, presents a compelling roadmap for capturing 4- to 10-year snapshots of intermediate-mass black hole activity.

“Like a symphony orchestra emits sound across an array of frequencies, the gravitational waves emitted by black holes occur at different frequencies and times,” said Jani. “Some of these frequencies are extremely high-bandwidth, while some are low-bandwidth, and our goal in the next era of gravitational wave astronomy is to capture multiband observations of both of these frequencies in order to ‘hear the entire song,’ as it were, when it comes to black holes.”
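A back-of-the-envelope calculation shows why intermediate-mass black holes straddle the two bands. Using the standard innermost-stable-circular-orbit estimate (a textbook approximation, not a result from the paper), the gravitational-wave frequency near merger scales inversely with the total mass of the binary, as in the Python sketch below.

```python
import math

# Back-of-the-envelope estimate of the gravitational-wave frequency near
# merger, using the standard innermost-stable-circular-orbit (ISCO) scaling
# f_GW ~ c^3 / (6^(3/2) * pi * G * M). Textbook approximation, for
# illustration only.

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg

def merger_frequency_hz(total_mass_in_suns):
    M = total_mass_in_suns * M_SUN
    return c**3 / (6**1.5 * math.pi * G * M)

for mass in (20, 1_000, 100_000, 1_000_000):
    print(f"{mass:>9} solar masses -> ~{merger_frequency_hz(mass):10.4f} Hz")

# Stellar-mass binaries (tens of solar masses) merge near a few hundred hertz,
# in LIGO's band; million-solar-mass systems merge at millihertz frequencies,
# in LISA's band; intermediate-mass black holes fall in between, which is why
# observing them calls for multiband gravitational wave astronomy.
```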

Jani, a self-proclaimed “black hole hunter” who Forbes named to its 2017 30 Under 30 list in Science, was part of the team that detected the very first gravitational waves. He joined Vanderbilt as a GRAVITY postdoctoral fellow in 2019.

Along with collaborators at Georgia Institute of Technology, California Institute of Technology and the Jet Propulsion Laboratory at NASA, the new paper, “Detectability of Intermediate-Mass Black Holes in Multiband Gravitational Wave Astronomy,” looks at the future of LIGO detectors alongside the proposed Laser Interferometer Space Antenna (LISA) space-mission, which would help humans get a step closer to understanding what happens in and around black holes.

“The possibility that intermediate mass black holes exist but are currently hidden from our view is both tantalizing and frustrating,” said Deidre Shoemaker, co-author of the paper and professor in Georgia Tech’s School of Physics. “Fortunately, there is hope as these black holes are ideal sources for future multiband gravitational wave astronomy.”

LISA, a mission jointly led by the European Space Agency and NASA and planned for launch in the year 2034, would improve detection sensitivity for low-frequency gravitational waves. As the first dedicated space-based gravitational wave detector, LISA would provide a critical measurement of a previously unattainable frequency and enable the more complete observation of intermediate-mass black holes. In 2018, Vanderbilt physics and astronomy professor Kelly Holley-Bockelmann was appointed by NASA as the inaugural chair of the LISA Study Team.

“Inside black holes, all known understanding of our universe breaks down,” added Jani. “With the high frequency already being captured by LIGO detectors and the low frequency from future detectors and the LISA mission, we can bring these data points together to help fill in many gaps in our understanding of black holes.”

Protein imaging at the speed of life

With this capability, scientists can watch how proteins do their jobs properly—or how their shape-changing goes awry, causing disease.

Members of the Schmit lab who worked on the paper include (from left) doctoral student Ishwor Poudyal, Professor Marius Schmidt and doctoral student and first author Suraj Pandey. Their findings mark a new age of protein research that enables enzymes involved in disease to be observed in real time for meaningful durations in unprecedented clarity. (Photo by Troye Fox) Credit: UWM /Troye Fox

NOVEMBER 18, 2019

by University of Wisconsin – Milwaukee

To study the swiftness of biology—the protein chemistry behind every life function—scientists need to see molecules changing and interacting in unimaginably rapid time increments—trillionths of a second or shorter.

Imaging equipment with that kind of speed was finally tested last year at the European X-ray Free-Electron Laser, or EuXFEL. Now, a team of physicists from the University of Wisconsin-Milwaukee has completed the facility’s first molecular movie, or “mapping,” of the ultrafast movement of proteins.

With this capability, scientists can watch how proteins do their jobs properly—or how their shape-changing goes awry, causing disease.

“Creating maps of a protein’s physical functioning opens the door to answering much bigger biological questions,” said Marius Schmidt, a UWM professor of physics who designed the experiment. “You could say that the EuXFEL can now be looked on as a tool that helps to save lives.”

Their findings mark a new age of protein research that enables enzymes involved in disease to be observed in real time for meaningful durations in unprecedented clarity. The paper is published online today in the journal Nature Methods.

The EuXFEL produces intense X-rays in extremely short pulses at a megahertz rate—a million pulses a second. The rays are aimed at crystals containing proteins, in a method called X-ray crystallography. When a crystal is hit by an X-ray pulse, it diffracts the beam, scattering it in a pattern that reveals where the atoms are and producing a “snapshot.”

The rapid-fire X-ray pulses produce 2-D snapshots of each pattern from hundreds of thousands of angles where the beam lands on the crystal. Those are mathematically reconstructed into moving 3-D images that show changes in the arrangement of atoms over time.

The European XFEL, which opened last year, has taken this atom-mapping to a new level. Its extremely powerful bursts contain X-ray pulses lasting a quadrillionth of a second, and the bursts arrive at 100-millisecond intervals.

Schmidt’s experiment began with a flash of blue, visible light that induced a chemical reaction inside the protein crystal, followed immediately by a burst of intense X-rays in megahertz pulses that produce the “snapshots.”

It’s an experiment he first staged in 2014 at the U.S. Department of Energy’s SLAC National Accelerator Laboratory in California. There, he and his students were able to document atomic changes in their protein samples for the first time at an XFEL.

Subsequently, in 2016, they were able to map the rearrangement of atoms in the range of time proteins take to change their shapes—quadrillionths of a second (femtoseconds) up to 3 trillionths of a second (picoseconds). In a picosecond, which is a trillionth of a second, light travels the length of the period at the end of this sentence.

In this illustration, microcrystals are injected (top, left) and a reaction is initiated by blue laser pulses hitting the proteins within the crystals (middle, left). The atomic structure of the protein (right) is probed during the reaction by the X-ray pulses (bottom, left). At the European XFEL, femtosecond optical laser pulses match the X-ray pulses that fire at a megahertz rate. X-ray pulses are six orders of magnitude larger than that at other X-ray sources. This makes it possible to produce diffraction patterns for nearly any protein, yielding still images recorded over unimaginably rapid time increments that form molecular movies. Credit: European XFEL / Blue Clay Studios

Previous time-resolved crystallography on their photoreactive protein had already been completed using other X-ray sources capable of imaging time scales larger than 100 picoseconds, leaving a gap of uncharted time between 3 and 100 picoseconds that the scientists were able to fill using the EuXFEL.

The exceptional brightness of the laser and the megahertz X-ray pulse rate allowed them to gather data much more quickly, with greater resolution and over longer time frames.

Schmidt describes EuXFEL as “a machine of superlatives.” The largest XFEL in the world, it is 3 kilometers long, spanning the distance between the German federal states of Hamburg and Schleswig-Holstein. Superconductive technology is used to accelerate high-energy electrons, which generates the X-rays.

Schmidt, a biophysicist who has participated in more than 30 XFEL imaging projects to date, offered a taste of the medical potential of enhanced crystallography with the XFEL: Using this method, he has witnessed how multiple proteins work together, how enzymes responsible for antibiotic resistance disable a drug and how proteins change their shape in order to absorb light and enable sight.

Doctoral student Suraj Pandey, who came to UWM from his native Nepal, is first author on the paper. He now has experience with technology that few people in the world can claim, at least for now. He said he was not sure what to expect going into the experiment.

Pandey’s role was to analyze the data and calculate the maps of structural change. Of the millions of X-ray pulses that XFELs deliver, the majority don’t hit a target at all. In fact, only 1% to 2% diffract off a protein crystal, while the remaining pulses produce “noise” that must be removed from the data.
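In practice, that cleanup starts with “hit finding”: keeping only the snapshots whose diffraction signal clears a threshold and discarding the empty shots. The Python sketch below illustrates the idea with invented numbers; real facilities use dedicated hit-finding software rather than anything this simple.

```python
# Minimal sketch of the "hit finding" step described above: keep only the
# small fraction of snapshots whose diffraction signal clears a threshold,
# and discard the rest as noise. The threshold and peak counts are invented
# for illustration.

snapshots = [
    {"id": 0, "bragg_peaks": 2},     # X-rays missed the crystal: noise
    {"id": 1, "bragg_peaks": 58},    # a genuine crystal hit
    {"id": 2, "bragg_peaks": 0},
    {"id": 3, "bragg_peaks": 41},
    {"id": 4, "bragg_peaks": 3},
]

MIN_PEAKS = 20   # assumed threshold separating hits from empty shots

hits = [s for s in snapshots if s["bragg_peaks"] >= MIN_PEAKS]
hit_rate = len(hits) / len(snapshots)

print(f"Kept {len(hits)} of {len(snapshots)} snapshots (hit rate {hit_rate:.0%})")
# Only the retained hits are merged into the 3-D maps of structural change.
```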

The team had other worries too, he said. It took months for Pandey to grow the protein required to produce the experiment’s crystals, but during their transport to Germany, the 5 grams of frozen protein was detained in customs for several days, during which some of it melted.

After the first day of imaging, he processed the data and could identify for the first time a strong signal in the resulting map. “This was a breakthrough,” he said. “But the signal did not correspond to the change predicted from previous experiments. I thought the experiment had failed.”

Instead, he and EuXFEL operators learned their first lesson: Optical pulses that initiate the reaction have to be exactly synchronized with the megahertz X-ray pulses. Otherwise, the protein reaction unfolds in unknown time allotments. And they had to be sure that the sample was only excited once, which turned out to be quite tricky with megahertz pulse rates.

The ultimate success of the experiment gave Pandey tremendous satisfaction.

“It’s a one-of-a-kind technology,” he said of the EuXFEL. “We pioneered the usage of European XFEL in seeing the movies of how proteins function. I’m just flying.”

A marvelous molecular machine

The adaptive iridocytes in the skin of the California market squid are able to tune color through most of the spectrum. Credit: University of California – Santa Barbara

NOVEMBER 15, 2019

by Harrison Tasoff and Sonia Fernandez, University of California – Santa Barbara

Squids, octopuses and cuttlefish are undisputed masters of deception and camouflage. Their extraordinary ability to change color, texture and shape is unrivaled, even by modern technology.

Researchers in the lab of UC Santa Barbara professor Daniel Morse have long been interested in the optical properties of color-changing animals, and they are particularly intrigued by the opalescent inshore squid. Also known as the California market squid, these animals have evolved the ability to finely and continuously tune their color and sheen to a degree unrivaled in other creatures. This enables them to communicate, as well as hide in plain sight in the bright and often featureless upper ocean.

In previous work, the researchers found that specialized proteins, called reflectins, control reflective pigment cells—iridocytes—which in turn contribute to changing the overall visibility and appearance of the creature. But how the reflectins actually worked remained a mystery.

“We wanted now to understand how this remarkable molecular machine works,” said Morse, a Distinguished Emeritus Professor in the Department of Molecular, Cellular and Developmental Biology, and principal author of a paper that appears in the Journal of Biological Chemistry. Understanding this mechanism, he said, would provide insight into the tunable control of emergent properties, which could open the door to the next generation of bio-inspired synthetic materials.

Light-reflecting skin

Like most cephalopods, opalescent inshore squid practice their sorcery by way of what may be the most sophisticated skin found anywhere in nature. Tiny muscles manipulate the skin texture while pigments and iridescent cells affect its appearance. The squids control their color by expanding and contracting one group of cells in their skin that contain sacs of pigment.

Behind these pigment cells is a layer of iridescent cells—those iridocytes—that reflect light and contribute to the animals’ color across the entire visible spectrum. The squids also have leucophores, which control the reflectance of white light. Together, these layers of pigment-containing and light-reflecting cells give the squids the ability to control the brightness, color and hue of their skin over a remarkably broad palette.

Unlike the color from pigments, the highly dynamic hues of the opalescent inshore squid result from changing the iridocyte’s structure itself. Light bounces between nanometer-sized features about the same size as wavelengths in the visible part of the spectrum, producing colors. As these structures change their dimensions, the colors change. Reflectin proteins are behind these features’ ability to shapeshift, and the researchers’ task was to figure out how they do the job.

Thanks to a combination of genetic engineering and biophysical analyses, the scientists found the answer, and it turned out to be a mechanism far more elegant and powerful than previously imagined.

“The results were very surprising,” said first author Robert Levenson, a postdoctoral researcher in Morse’s lab. The group had expected to find one or two spots on the protein that controlled its activity, he said. “Instead, our evidence showed that the features of the reflectins that control its signal detection and the resulting assembly are spread across the entire protein chain.”

An Osmotic Motor

Reflectin, which is contained in closely packed layers of membrane in iridocytes, looks a bit like a series of beads on a string, the researchers found. Normally, the links between the beads are strongly positively charged, so they repel each other, straightening out the proteins like uncooked spaghetti.

Morse and his team discovered that nerve signals to the reflective cells trigger the addition of phosphate groups to the links. These negatively charged phosphate groups neutralize the links’ repulsion, allowing the proteins to fold up. The team was especially excited to discover that this folding exposed new, sticky surfaces on the bead-like portions of the reflectin, allowing them to clump together. Up to four phosphates can bind to each reflectin protein, providing the squid with a precisely tunable process: The more phosphates added, the more the proteins fold up, progressively exposing more of the emergent hydrophobic surfaces, and the larger the clumps grow.

As these clumps grow, the many, single, small proteins in solution become fewer, larger groups of multiple proteins. This changes the fluid pressure inside the membrane stacks, driving water out—a type of “osmotic motor” that responds to the slightest changes in charge generated by the neurons, to which patches of thousands of leucophores and iridocytes are connected. The resulting dehydration reduces the thickness and spacing of the membrane stacks, which shifts the wavelength of reflected light progressively from red to yellow, then to green and finally blue. The more concentrated solution also has a higher refractive index, which increases the cells’ brightness.
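A toy calculation makes the color shift concrete. If the iridocyte’s membrane stacks are treated as a simple periodic multilayer reflector, the first-order reflectance peak sits near twice the optical thickness of one repeat, so thinning the layers pushes the reflected color from red toward blue. The Python sketch below uses assumed refractive indices and layer thicknesses, not measured values from the study.

```python
# Toy model of why shrinking the membrane stacks shifts the reflected color
# toward blue. The iridocyte is treated as a simple periodic multilayer
# reflector whose first-order reflectance peak sits near
#     lambda_peak ~= 2 * (n_protein * d_protein + n_gap * d_gap).
# Refractive indices and layer thicknesses are illustrative assumptions.

N_PROTEIN = 1.55   # assumed refractive index of the reflectin-rich layers
N_GAP = 1.33       # assumed refractive index of the water-filled gaps

def peak_wavelength_nm(d_protein_nm, d_gap_nm):
    return 2 * (N_PROTEIN * d_protein_nm + N_GAP * d_gap_nm)

# As the osmotic motor drives water out, the layers and the spaces between
# them thin, and the reflected peak marches from red toward blue.
for d_protein, d_gap in [(120, 130), (100, 105), (85, 90), (72, 75)]:
    peak = peak_wavelength_nm(d_protein, d_gap)
    print(f"layers {d_protein:3d}/{d_gap:3d} nm -> reflectance peak ~{peak:5.0f} nm")
```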

“We had no idea that the mechanism we would discover would turn out to be so remarkably complex yet contained and so elegantly integrated in one multifunctional molecule—the block-copolymeric reflectin—with opposing domains so delicately poised that they act like a metastable machine, continually sensing and responding to neuronal signaling by precisely adjusting the osmotic pressure of an intracellular nanostructure to precisely fine-tune the color and brightness of its reflected light,” Morse said.

What’s more, the researchers found, the whole process is reversible and cyclable, enabling the squid to continually fine-tune whatever optical properties its situation calls for.

New Design Principles

The researchers had successfully manipulated reflectin in previous experiments, but this study marks the first demonstration of the underlying mechanism. Now it could provide new ideas to scientists and engineers designing materials with tunable properties. “Our findings reveal a fundamental link between the properties of biomolecular materials produced in living systems and the highly engineered synthetic polymers that are now being developed at the frontiers of industry and technology,” Morse said.

“Because reflectin works to control osmotic pressure, I can envision applications for novel means of energy storage and conversion, pharmaceutical and industrial applications involving viscosity and other liquid properties, and medical applications,” he added.

Remarkably, some of the processes at work in these reflectin proteins are shared by the proteins that assemble pathologically in Alzheimer’s disease and other degenerative conditions, Morse observed. He plans to investigate why this mechanism is reversible, cyclable, harmless and useful in the case of reflectin, but irreversible and pathological for other proteins. Perhaps the fine-structured differences in their sequences can explain the disparity, and even point to new paths for disease prevention and treatment.

From a cloud of cold and a spark, researchers create and stabilize pure polymeric nitrogen for the first time

Using a concentrated beam of ions to excite nitrogen compounds in liquid nitrogen, researchers at Drexel’s C&J Nyheim Plasma Institute, have produced an energy-dense material, called polymeric nitrogen, in pure form at near-ambient conditions for the first time. Credit: Drexel University

NOVEMBER 14, 2019

by Drexel University

Scientists have long theorized that the energy stored in the atomic bonds of nitrogen could one day be a source of clean energy. But coaxing the nitrogen atoms into linking up has been a daunting task. Researchers at Drexel University’s C&J Nyheim Plasma Institute have finally proven that it’s experimentally possible—with some encouragement from a liquid plasma spark.

Reported in the Journal of Physics D: Applied Physics, the production of pure polymeric nitrogen—polynitrogen—is possible by zapping a compound called sodium azide with a jet of plasma in the middle of a super-cooling cloud of liquid nitrogen. The result is six nitrogen atoms bonded together—a compound called ionic, or neutral, nitrogen-six—that is predicted to be an extremely energy-dense material.

“Polynitrogen is being explored for use as a ‘green’ fuel source, for energy storage, or as an explosive,” said Danil Dobrynin, Ph.D., an associate research professor at the Nyheim Institute and lead author of the paper. “Versions of it have been experimentally synthesized—though never in a way that was stable enough to recover to ambient conditions or in pure nitrogen-six form. Our discovery using liquid plasma opens a new avenue for this research that could lead to a stable polynitrogen.”

Previous attempts to generate the energetic polymer have used high pressure and high temperature to entice bonding of nitrogen atoms. But neither of those methods provided enough energy to excite the requisite ions—atomic bonding agents—to produce a stable form of nitrogen-six. And the polymeric nitrogen created in these experiments could not be maintained at a pressure and temperature close to normal, ambient conditions.

It’s something like trying to glue together two heavy objects but only being strong enough to squeeze a few drops of glue out of the bottle. To make a bond strong enough to hold up, it takes a force strong enough to squeeze out a lot of glue.


That force, according to the researchers, is a concentrated ion blast provided by liquid plasma.

Liquid plasma is the name given to an emission of ion-dense matter generated by a pulsed electrical spark discharged in a liquid environment—kind of like lightning in a bottle. Liquid plasma technology has barely been around for a decade, though it already holds a great deal of promise. It was pioneered by researchers at the Nyheim Institute, who have explored its use in a variety of applications, from health care to food treatment.

Because the plasma is encased in liquid, it is possible to pressurize the environment as well as to control its temperature. This level of control is the key advantage that the researchers needed to synthesize polynitrogen, because it allowed them to more precisely start and stop the reaction in order to preserve the material it produced. Dobrynin and his collaborators first reported their successful attempt to produce polynitrogen using plasma discharges in liquid nitrogen in a letter in the Journal of Physics D: Applied Physics over the summer.

In their most recent findings, the plasma spark sent a concentrated shower of ions toward the sodium azide—which contains nitrogen-three molecules. The blast of ions splits the nitrogen-three molecules from the sodium and, in the excited state, the nitrogen molecules can bond with each other. Not surprisingly, the reaction produces a good bit of heat, so putting the brakes on it requires an incredible blast of cold—the one provided by liquid nitrogen.

“We believe this procedure was successful at producing pure polynitrogen where others fell short, because of the density of ions involved and the presence of liquid nitrogen as a quenching agent for the reaction,” Dobrynin said. “Other experiments introduced high temperatures and high pressures as catalysts, but our experiment was a more precise combination of energy, temperature, electrons and ions.”


Upon inspection with a Raman spectrometer—an instrument that identifies the chemical composition of a material by measuring its response to laser stimulus—the plasma-treated material produced readings consistent with those predicted for pure polynitrogen.

“This is quite significant because until now scientists have only been able to synthesize stable polynitrogen compounds in the form of salts—but never in a pure nitrogen form like this at near-ambient conditions,” Dobrynin said. “The substance we produced is stable at atmospheric pressure in temperatures up to about -50 Celsius.”

Plasma, in its original gaseous form, has been under development for decades as a sterilization technology for water, food and medical equipment, and it is also being explored for coating materials. But this is the first instance of liquid plasma being used to synthesize a new material. So this breakthrough could prove to be an inflection point in plasma research, at the Nyheim Institute and throughout the field.

“This discovery opens a number of exciting possibilities for producing polymeric nitrogen as a fuel source,” said Alexander Fridman, Ph.D., John A. Nyheim Chair professor in Drexel’s College of Engineering and director of the C&J Nyheim Plasma Institute and co-author of the paper. “This new, clean energy-dense fuel could enable a new age of automobiles and mass transportation. It could even be the breakthrough necessary to allow the exploration of remote regions of space.”

Hot electrons harvested without tricks

Scientists from the University of Groningen and Nanyang Technological University (Singapore) have shown that harvesting the excess energy of hot electrons may be easier than expected by combining a perovskite with an acceptor material.

A set up for ultrafast spectroscopy, as used in the study. Credit: Maxim Pchenitchnikov, University of Groningen

NOVEMBER 15, 2019

by University of Groningen

Semiconductors convert energy from photons (light) into an electron current. However, some photons carry too much energy for the material to absorb. These photons produce “hot electrons,” and the excess energy of these electrons is converted into heat. Materials scientists have been looking for ways to harvest this excess energy. Scientists from the University of Groningen and Nanyang Technological University (Singapore) have now shown that this may be easier than expected by combining a perovskite with an acceptor material for hot electrons. Their proof of principle was published in Science Advances on 15 November.

In photovoltaic cells, semiconductors will absorb photon energy, but only from photons that have the right amount of energy: too little, and the photons pass right through the material; too much, and the excess energy is lost as heat. The right amount is determined by the bandgap: the difference in energy levels between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO).
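The Python sketch below illustrates that rule of thumb for an assumed bandgap of 1.6 eV (a typical ballpark for a hybrid perovskite, not a value from the paper): photons below the gap pass through, while photons above it are absorbed and their excess energy goes into a hot electron.

```python
# Quick illustration of the bandgap rule described above: a photon is only
# absorbed if its energy exceeds the bandgap, and any energy above the gap
# ends up in a "hot" carrier (and, ordinarily, is lost as heat).
# The 1.6 eV bandgap is an assumed, illustrative value.

BANDGAP_EV = 1.6
HC_EV_NM = 1239.84         # photon energy (eV) = 1239.84 / wavelength (nm)

for wavelength_nm in (400, 550, 700, 850):
    photon_ev = HC_EV_NM / wavelength_nm
    if photon_ev < BANDGAP_EV:
        print(f"{wavelength_nm} nm: {photon_ev:.2f} eV -> passes through (below the gap)")
    else:
        excess = photon_ev - BANDGAP_EV
        print(f"{wavelength_nm} nm: {photon_ev:.2f} eV -> absorbed, "
              f"{excess:.2f} eV excess carried by a hot electron")
```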

Nanoparticles

“The excess energy of hot electrons produced by the high-energy photons is very rapidly absorbed by the material as heat,” explains Maxim Pshenichnikov, professor of ultrafast spectroscopy at the University of Groningen. To fully capture the energy of hot electrons, materials with a larger bandgap must be used. However, this means that the hot electrons should be transported to this material before losing their energy. The current general approach to harvesting these electrons is to slow down the loss of energy, for example, by using nanoparticles instead of bulk material. “In these nanoparticles, there are fewer options for the electrons to release the excess energy as heat,” explains Pshenichnikov.

Together with colleagues from Nanyang Technological University, where he was a visiting professor for the past three years, Pshenichnikov studied a system in which an organic-inorganic hybrid perovskite semiconductor was combined with the organic compound bathophenanthroline (bphen), a material with a large bandgap. The scientists used laser light to excite electrons in the perovskite and studied the behavior of the hot electrons that were generated.

Barrier

“We used a method called pump-push probing to excite electrons in two steps and study them at femtosecond timescales,” explains Pshenichnikov. This allowed the scientists to produce electrons in the perovskites with energy levels just above the bandgap of bphen, without exciting electrons in the bphen. Therefore, any hot electrons in this material would have come from the perovskite.

The results showed that hot electrons from the perovskite semiconductor were readily absorbed by the bphen. “This happened without the need to slow down these electrons, and moreover, in bulk material. So without any tricks, the hot electrons were harvested.” However, the scientists noticed that the energy required was slightly higher than the bphen bandgap. “This was unexpected. Apparently, some extra energy is needed to overcome a barrier at the interface between the two materials.”

Nevertheless, the study provides a proof of principle for the harvesting of hot electrons in bulk perovskite semiconductor material. Pshenichnikov says, “The experiments were performed with a realistic amount of energy, comparable to visible light. The next challenge is to construct a real device using this combination of materials.”