When laser beams meet plasma: New data addresses gap in fusion research

Researchers used the Omega Laser Facility at the Rochester’s Laboratory for Laser Energetics to make highly detailed measurements of laser-heated plasmas. Credit: University photo / J. Adam Fenster

DECEMBER 2, 2019

by University of Rochester

New research from the University of Rochester will enhance the accuracy of computer models used in simulations of laser-driven implosions. The research, published in the journal Nature Physics, addresses one of the challenges in scientists’ longstanding quest to achieve fusion.

In laser-driven inertial confinement fusion (ICF) experiments, such as the experiments conducted at the University of Rochester’s Laboratory for Laser Energetics (LLE), short beams consisting of intense pulses of light—pulses lasting mere billionths of a second—deliver energy to heat and compress a target of hydrogen fuel. Ideally, this process would release more energy than was used to heat the system.

Laser-driven ICF experiments require that many laser beams propagate through a plasma—a hot soup of free moving electrons and ions—to deposit their radiation energy precisely at their intended target. But, as the beams do so, they interact with the plasma in ways that can complicate the intended result.

“ICF necessarily generates environments in which many laser beams overlap in a hot plasma surrounding the target, and it has been recognized for many years that the laser beams can interact and exchange energy,” says David Turnbull, an LLE scientist and the first author of the paper.

To accurately model this interaction, scientists need to know exactly how the energy from the laser beam interacts with the plasma. While researchers have offered theories about the ways in which laser beams alter a plasma, none has ever before been demonstrated experimentally.

Now, researchers at the LLE, along with their colleagues at Lawrence Livermore National Laboratory in California and the Centre National de la Recherche Scientifique in France, have directly demonstrated for the first time how laser beams modify the conditions of the underlying plasma, in turn affecting the transfer of energy in fusion experiments.

“The results are a great demonstration of the innovation at the Laboratory and the importance of building a solid understanding of laser-plasma instabilities for the national fusion program,” says Michael Campbell, the director of the LLE.

USING SUPERCOMPUTERS TO MODEL FUSION

Researchers often use supercomputers to study the implosions involved in fusion experiments. It is important, therefore, that these computer models accurately depict the physical processes involved, including the exchange of energy from the laser beams to the plasma and eventually to the target.

For the past decade, researchers have used computer models describing the mutual laser beam interaction involved in laser-driven fusion experiments. However, the models have generally assumed that the plasma’s electrons follow a Maxwellian distribution—the equilibrium one would expect when no lasers are present.

“But, of course, lasers are present,” says Dustin Froula, a senior scientist at the LLE.

Froula notes that scientists predicted almost 40 years ago that lasers alter the underlying plasma conditions in important ways. In 1980, a theory was presented that predicted these non-Maxwellian distribution functions in laser plasmas due to the preferential heating of slow electrons by the laser beams. In subsequent years, Rochester graduate Bedros Afeyan ’89 (Ph.D.) predicted that the effect of these non-Maxwellian electron distribution functions would change how laser energy is transferred between beams.

But lacking experimental evidence to verify that prediction, researchers did not account for it in their simulations.

Turnbull, Froula, and physics and astronomy graduate student Avram Milder conducted experiments at the Omega Laser Facility at the LLE to make highly detailed measurements of the laser-heated plasmas. The results of these experiments show for the first time that the distribution of electron energies in a plasma is affected by their interaction with the laser radiation and can no longer be accurately described by prevailing models.

The new research not only validates a longstanding theory, but it also shows that laser-plasma interaction strongly modifies the transfer of energy.

“New inline models that better account for the underlying plasma conditions are currently under development, which should improve the predictive capability of integrated implosion simulations,” Turnbull says.

Helping machines perceive some laws of physics

Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI.

An MIT-invented model demonstrates an understanding of some basic “intuitive physics” by registering “surprise” when objects in simulations move in unexpected ways, such as rolling behind a wall and not reappearing on the other side.
Image: Christine Daniloff, MIT

Rob Matheson | MIT News Office
December 2, 2019

Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick.

Now MIT researchers have designed a model that demonstrates an understanding of some basic “intuitive physics” about how objects should behave. The model could be used to help build smarter artificial intelligence and, in turn, provide information to help scientists understand infant cognition.

The model, called ADEPT, observes objects moving around a scene and makes predictions about how the objects should behave, based on their underlying physics. While tracking the objects, the model outputs a signal at each video frame that correlates to a level of “surprise” — the bigger the signal, the greater the surprise. If an object ever dramatically mismatches the model’s predictions — by, say, vanishing or teleporting across a scene — its surprise levels will spike.

In response to videos showing objects moving in physically plausible and implausible ways, the model registered levels of surprise that matched levels reported by humans who had watched the same videos.  

“By the time infants are 3 months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport,” says first author Kevin A. Smith, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). “We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes.”

Joining Smith on the paper are co-first authors Lingjie Mei, an undergraduate in the Department of Electrical Engineering and Computer Science, and BCS research scientist Shunyu Yao; Jiajun Wu PhD ’19; CBMM investigator Elizabeth Spelke; Joshua B. Tenenbaum, a professor of computational cognitive science, and researcher in CBMM, BCS, and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and CBMM investigator Tomer D. Ullman PhD ’15.

Mismatched realities

ADEPT relies on two modules: an “inverse graphics” module that captures object representations from raw images, and a “physics engine” that predicts the objects’ future representations from a distribution of possibilities.

Inverse graphics basically extracts information about objects—such as shape, pose, and velocity—from pixel inputs. This module captures frames of video as images and uses inverse graphics to extract this information from objects in the scene. But it doesn’t get bogged down in the details: ADEPT requires only some approximate geometry of each shape to function. In part, this helps the model generalize its predictions to new objects, not just those it’s trained on.

“It doesn’t matter if an object is a rectangle or a circle, or if it’s a truck or a duck. ADEPT just sees there’s an object with some position, moving in a certain way, to make predictions,” Smith says. “Similarly, young infants also don’t seem to care much about some properties like shape when making physical predictions.”

These coarse object descriptions are fed into a physics engine — software that simulates behavior of physical systems, such as rigid or fluidic bodies, and is commonly used for films, video games, and computer graphics. The researchers’ physics engine “pushes the objects forward in time,” Ullman says. This creates a range of predictions, or a “belief distribution,” for what will happen to those objects in the next frame.

Next, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns to one of the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there won’t be much mismatch between the two representations. On the other hand, if the object did something implausible — say, it vanished from behind a wall — there will be a major mismatch.

ADEPT then resamples from its belief distribution and notes a very low probability that the object had simply vanished. If there’s a low enough probability, the model registers great “surprise” as a signal spike. Basically, surprise is inversely proportional to the probability of an event occurring. If the probability is very low, the signal spike is very high.  
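In spirit, that inverse relationship is a negative log-probability: the less likely the observation under the belief distribution, the larger the surprise. The sketch below is a minimal illustration of the idea, assuming one-dimensional positions and a Gaussian likelihood; the function names and parameters are hypothetical, not ADEPT’s actual implementation.

```python
import math

def surprise(observation, belief, sigma=1.0):
    """Score an observed object position against a belief distribution
    (a list of positions predicted by the physics engine). Surprise is
    the negative log of the average likelihood, so a low-probability
    observation produces a large spike."""
    likelihoods = [
        math.exp(-((observation - p) ** 2) / (2 * sigma ** 2))
        / (sigma * math.sqrt(2 * math.pi))
        for p in belief
    ]
    prob = sum(likelihoods) / len(likelihoods)
    return -math.log(prob + 1e-12)  # epsilon avoids log(0)

# An observation near the predictions is unsurprising...
low = surprise(1.0, [0.9, 1.0, 1.1])
# ...while one far from every prediction (the object "teleported") spikes.
high = surprise(8.0, [0.9, 1.0, 1.1])
```

On this toy scale, the implausible observation scores more than an order of magnitude higher than the plausible one.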

“If an object goes behind a wall, your physics engine maintains a belief that the object is still behind the wall. If the wall goes down, and nothing is there, there’s a mismatch,” Ullman says. “Then, the model says, ‘There’s an object in my prediction, but I see nothing. The only explanation is that it disappeared, so that’s surprising.’”

Violation of expectations

In developmental psychology, researchers run “violation of expectations” tests in which infants are shown pairs of videos. One video shows a plausible event, with objects adhering to their expected notions of how the world works. The other video is the same in every way, except that objects behave in a way that violates expectations. Researchers often use these tests to measure how long an infant looks at a scene after an implausible action has occurred. The longer they stare, researchers hypothesize, the more they may be surprised or interested in what just happened.

For their experiments, the researchers created several scenarios based on classical developmental research to examine the model’s core object knowledge. They recruited 60 adults to watch 64 videos of known physically plausible and physically implausible scenarios. Objects, for instance, will move behind a wall and, when the wall drops, they’ll still be there or they’ll be gone. The participants rated their surprise at various moments on a scale of 0 to 100. Then, the researchers showed the same videos to the model. Specifically, the scenarios examined the model’s ability to capture notions of permanence (objects do not appear or disappear for no reason), continuity (objects move along connected trajectories), and solidity (objects cannot move through one another).
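One simple way to quantify how well a model’s surprise signal tracks human ratings is a correlation across video moments. The sketch below computes a Pearson correlation on made-up numbers; it illustrates the comparison in principle and does not reproduce the paper’s actual analysis or data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. human surprise ratings and a model's surprise-signal values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human = [5, 10, 80, 90]       # hypothetical 0-100 ratings at four moments
model = [0.2, 0.4, 3.1, 3.5]  # hypothetical surprise-signal values
r = pearson(human, model)     # close to 1.0: the signals rise together
```

A value near 1 means the model spikes where humans report surprise; near 0 means no agreement.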

ADEPT matched humans particularly well on videos where objects moved behind walls and disappeared when the wall was removed. Interestingly, the model also matched surprise levels on videos that humans weren’t surprised by but maybe should have been. For example, in a video where an object moving at a certain speed disappears behind a wall and immediately comes out the other side, the object might have sped up dramatically when it went behind the wall or it might have teleported to the other side. In general, humans and ADEPT were both less certain about whether that event was or wasn’t surprising. The researchers also found traditional neural networks that learn physics from observations — but don’t explicitly represent objects — are far less accurate at differentiating surprising from unsurprising scenes, and their picks for surprising scenes don’t often align with humans.

Next, the researchers plan to delve further into how infants observe and learn about the world, with aims of incorporating any new findings into their model. Studies, for example, show that infants up until a certain age actually aren’t very surprised when objects completely change in some ways — such as if a truck disappears behind a wall, but reemerges as a duck.

“We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents,” Smith says.

Designing humanity’s future in space

The Space Exploration Initiative’s latest research flight explores work and play in microgravity.

Ariel Ekblaw, founder and lead of the Space Exploration Initiative, tests the latest iteration of her TESSERAE self-assembling architecture onboard a parabolic research flight.
Photo: Steve Boxall/ZERO-G

Janine Liberty | MIT Media Lab
November 26, 2019

How will dancers perform in space? How will scientists do lab experiments without work tables? How will artists pursue crafting in microgravity? How can exercise, gastronomy, research, and other uniquely human endeavors be reimagined for the unique environment of space? These are the questions that drove the 14 projects aboard the MIT Media Lab Space Exploration Initiative’s second parabolic research flight.

Just past the 50th anniversary of the Apollo moon landing, humanity’s life in space isn’t so very far away. Virgin Galactic just opened its spaceport with the goal of launching space tourists into orbit within months, not years; Blue Origin’s New Shepard rocket is gearing up to carry its first human cargo to the edge of space, with New Glenn and a moon mission not far behind. We are nearing a future where trained, professional astronauts aren’t the only people who will regularly leave Earth. The new Space Age will reach beyond the technical and scientific achievements of getting people into space and keeping them alive there; the next frontier is bringing our creativity, our values, our personal pursuits and hobbies with us, and letting them evolve into a new culture unique to off-planet life. 

But unlike the world of Star Trek, there’s no artificial gravity capability in sight. Any time spent in space will, for the foreseeable future, mean life without weight, and without the rules of gravity that govern every aspect of life on the ground. Through its annual parabolic flight charter with the ZERO-G Research Program, the Space Exploration Initiative (SEI) is actively anticipating and solving for the challenges of microgravity.

Space for everyone

SEI’s first zero-gravity flight, in 2017, set a high bar for the caliber of the projects, but it was also a learning experience in doing research in 20-second bursts of microgravity. In preparation for an annual research flight, SEI founder and lead Ariel Ekblaw organized MIT’s first graduate course for parabolic flights (Prototyping Our Sci-Fi Space Future: Zero Gravity Flight Class) with the goal of preparing researchers for the realities of parabolic flights, from the rigors of the preflight test readiness review inspections to project hardware considerations and mid-flight adjustments.

The class also served to take some of the intimidation factor out of the prospect of space research and focused on democratizing access to microgravity testbed environments. 

“The addition of the course helped us build bridges across other departments at MIT and take the time to document and open-source our mentorship process for robust, creative, and rigorous experiments,” says Ekblaw.

SEI’s mission of democratizing access to space is broad: It extends to actively recruiting researchers, artists, and designers, whose work isn’t usually associated with space, as well as ensuring that the traditional engineering and hard sciences of space research are open to people of all genders, nationalities, and identities. This proactive openness was manifest in every aspect of this year’s microgravity flight. 

While incubated in the Media Lab, the Space Exploration Initiative now supports research across MIT. Paula do Vale Pereira, a grad student in MIT’s Department of Aeronautics and Astronautics (AeroAstro), was on board to test out automated actuators for CubeSats. Tim McGrath and Jeremy Strong, also from AeroAstro, built an erg machine specially designed for exercise in microgravity. Chris Carr and Maria Zuber, of the Department of Earth, Atmospheric and Planetary Sciences, flew to test out the latest iteration of their Electronic Life-detection Instrument (ELI) research.

Research specialist Maggie Coblentz is pursuing her fascination with food in space — including the world’s first molecular gastronomy experiment in microgravity. She also custom-made an astronaut’s helmet specially designed to accommodate a multi-course tasting menu, allowing her to experiment with different textures and techniques to make both food and eating more enjoyable on long space flights. 

“The function of food is not simply to provide nourishment — it’s a key creature comfort in spaceflight and will play an even more significant role on long-duration space travel and future life in space habitats. I hope to uncover new food cultures and food preparation techniques by evoking the imagination and sense of play in space, Willy Wonka style,” says Coblentz.

With Sensory Synchrony, a project supported by NASA’s Translational Research Institute for Space Health, Abhi Jain and fellow researchers in the Media Lab’s Fluid Interfaces group investigated vestibular neuromodulation techniques for mitigating the effects of motion sickness caused by the sensory mismatch in microgravity. The team will iterate on the data from this flight to consider possibilities for novel experiences using augmented and virtual reality in microgravity environments.

The Space Enabled research group is testing how paraffin wax behaves as a liquid in microgravity, exploring it as an affordable, accessible alternative satellite fuel. Their microgravity experiment, run by Juliet Wanyiri, aimed to determine the speed threshold, and corresponding voltage, needed for the wax to form into a shape called an annulus, which is one of the preferred geometric shapes to store satellite fuel. “This will help us understand what design might be appropriate to use wax as a satellite fuel for an on-orbit mission in the future,” explains Wanyiri.

Xin Liu flew for the second time this year, with a new project that continues her explorations into the relationship between couture, movement, and self-expression when an artist is released from the constraints of gravity. This year’s project, Mollastica, is a mollusk-inspired costume designed to swell and float in microgravity. Liu also motion-captured a body performance to be rendered later for a “deep-sea-to-deep-space” video work.

The human experience

The extraordinary range of fields, goals, projects, and people represented on this year’s microgravity flight speaks to the unique role the Space Exploration Initiative is already starting to play in the future of space. 

For designer and researcher Alexis Hope, the flight offered the opportunity to discover how weightlessness affects the creative process — how it changes not only the art, but also the artist. Her project, Space/Craft, was an experiment in zero-g sculpture: exploring the artistic processes and possibilities enabled by microgravity by using a hot glue gun to “draw in 3D.”

Like all of the researchers aboard the flight, Hope found the experience both challenging and inspiring. Her key takeaway, she says, is excitement for all the unexplored possibilities of art, crafting, and creativity in space.

“Humans always find a way to express themselves creatively, and I expect no different in a zero-gravity environment,” she says. “I’m excited for new materials that will behave in interesting ways in a zero-gravity environment, and curious about how those new materials might inspire future artists to create novel structures, forms, and physical expressions.”

Ekblaw herself spent the flight testing out the latest iteration of TESSERAE, her self-assembling space architecture prototype. The research has matured extensively over the last year and a half, including a recent suborbital test flight with Blue Origin and an upcoming International Space Station mission to take place in early 2020. 

All of the research projects from this year’s flight — as well as some early results, the projects from the Blue Origin flight, and the early prototypes for the ISS mission — were on display at a recent SEI open house at the Media Lab. 

For Ekblaw, the great challenge and the great opportunity in these recurring research flights is helping researchers to keep their projects and goals realistic in the moment, while keeping SEI’s gaze firmly fixed on the future. 

“While parabolic flights are already a remarkable experience, this year was particularly meaningful for us. We had the immense privilege of finalizing our pre-flight testing over the exact days when Neil Armstrong, Buzz Aldrin, and Mike Collins were in microgravity on their way to the moon,” she says. “This 50th anniversary of Apollo 11 reminds us that the next 50 years of interplanetary civilization beckons. We are all now part of this — designing, building, and testing artifacts for our human, lived experience of space.”

The plot thickens for a hypothetical X17 particle

The NA64 experiment at CERN (Image: CERN)

NOVEMBER 29, 2019

by Ana Lopes, CERN

Fresh evidence of an unknown particle that could carry a fifth force of nature gives the NA64 collaboration at CERN a new incentive to continue searches.

In 2015, a team of scientists spotted an unexpected glitch, or “anomaly”, in a nuclear transition that could be explained by the production of an unknown particle. About a year later, theorists suggested that the new particle could be evidence of a new fundamental force of nature, in addition to electromagnetism, gravity and the strong and weak forces. The findings caught worldwide attention and prompted, among other studies, a direct search for the particle by the NA64 collaboration at CERN.

A new paper from the same team, led by Attila Krasznahorkay at the Atomki institute in Hungary, now reports another anomaly, in a similar nuclear transition, that could also be explained by the same hypothetical particle.

The first anomaly spotted by Krasznahorkay’s team was seen in a transition of beryllium-8 nuclei. This transition emits a high-energy virtual photon that transforms into an electron and its antimatter counterpart, a positron. Examining the number of electron–positron pairs at different angles of separation, the researchers found an unexpected surplus of pairs at a separation angle of about 140º. In contrast, theory predicts that the number of pairs decreases with increasing separation angle, with no excess at a particular angle. Krasznahorkay and colleagues reasoned that the excess could be explained by the production of a new particle with a mass of about 17 million electronvolts (MeV), the “X17” particle, which would transform into an electron–positron pair.
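The 17 MeV figure follows from the kinematics of the pair: in the relativistic limit, the invariant mass of an electron–positron pair depends only on the two lepton energies and their opening angle. The Python sketch below is a back-of-the-envelope check, assuming the roughly 18 MeV transition energy is split evenly between the leptons (an illustrative simplification, not the Atomki analysis).

```python
import math

def pair_invariant_mass(e1_mev, e2_mev, angle_deg):
    """Invariant mass (in MeV) of an electron-positron pair, from the
    lepton energies and opening angle, neglecting the electron mass:
    m^2 = 2 * E1 * E2 * (1 - cos(theta))."""
    theta = math.radians(angle_deg)
    return math.sqrt(2 * e1_mev * e2_mev * (1 - math.cos(theta)))

# Splitting ~18 MeV of transition energy evenly between the two leptons,
# a 140-degree opening angle lands near the proposed X17 mass.
m = pair_invariant_mass(9.0, 9.0, 140)  # roughly 17 MeV
```

This is why an excess localized at a particular angle, rather than a smoothly falling distribution, points toward a particle of definite mass.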

The latest anomaly reported by Krasznahorkay’s team, in a paper that has yet to be peer-reviewed, is also in the form of an excess of electron–positron pairs, but this time the excess is from a transition of helium-4 nuclei. “In this case, the excess occurs at an angle of 115º, but it can also be interpreted by the production of a particle with a mass of about 17 MeV,” says Krasznahorkay. “The result lends support to our previous result and the possible existence of a new elementary particle,” he adds.

Sergei Gninenko, spokesperson for the NA64 collaboration at CERN, which has not found signs of X17 in its direct search, says: “The Atomki anomalies could be due to an experimental effect, a nuclear physics effect or something completely new such as a new particle. To test the hypothesis that they are caused by a new particle, both a detailed theoretical analysis of the compatibility between the beryllium-8 and the helium-4 results as well as independent experimental confirmation is crucial.”

The NA64 collaboration searches for X17 by firing a beam of tens of billions of electrons from the Super Proton Synchrotron accelerator onto a fixed target. If X17 did exist, the interactions between the electrons and nuclei in the target would sometimes produce this particle, which would then transform into an electron–positron pair. The collaboration has so far found no indication that such events took place, but its datasets allowed them to exclude part of the possible values for the strength of the interaction between X17 and an electron. The team is now upgrading their detector for the next round of searches, which are expected to be more challenging but at the same time more exciting, says Gninenko.

Among other experiments that could also hunt for X17 in direct searches is the LHCb experiment. Jesse Thaler, a theoretical physicist from the Massachusetts Institute of Technology, says: “By 2023, the LHCb experiment should be able to make a definitive measurement to confirm or refute the interpretation of the Atomki anomalies as arising from a new fundamental force. In the meantime, experiments such as NA64 can continue to chip away at the possible values for the hypothetical particle’s properties, and every new analysis brings with it the possibility (however remote) of discovery.”

Coated seeds may enable agriculture on marginal lands

A specialized silk covering could protect seeds from salinity while also providing fertilizer-generating microbes.

Researchers have used silk derived from ordinary silkworm cocoons, like those seen here, mixed with bacteria and nutrients, to make a coating for seeds that can help them germinate and grow even in salty soil.
Image courtesy of the researchers

David L. Chandler | MIT News Office
November 25, 2019

Providing seeds with a protective coating that also supplies essential nutrients to the germinating plant could make it possible to grow crops in otherwise unproductive soils, according to new research at MIT.

A team of engineers has coated seeds with silk that has been treated with a kind of bacteria that naturally produce a nitrogen fertilizer, to help the germinating plants develop. Tests have shown that these seeds can grow successfully in soils that are too salty to allow untreated seeds to develop normally. The researchers hope this process, which can be applied inexpensively and without the need for specialized equipment, could open up areas of land to farming that are now considered unsuitable for agriculture.

The findings are being published this week in the journal PNAS, in a paper by graduate students Augustine Zvinavashe ’16 and Hui Sun, postdoc Eugen Lim, and professor of civil and environmental engineering Benedetto Marelli.

The work grew out of Marelli’s previous research on using silk coatings as a way to extend the shelf life of seeds used as food crops. “When I was doing some research on that, I stumbled on biofertilizers that can be used to increase the amount of nutrients in the soil,” he says. These fertilizers use microbes that live symbiotically with certain plants and convert nitrogen from the air into a form that can be readily taken up by the plants.

Not only does this provide a natural fertilizer to the plant crops, but it avoids problems associated with other fertilizing approaches, he says: “One of the big problems with nitrogen fertilizers is they have a big environmental impact, because they are very energetically demanding to produce.” These artificial fertilizers may also have a negative impact on soil quality, according to Marelli.

Although these nitrogen-fixing bacteria occur naturally in soils around the world, with different local varieties found in different regions, they are very hard to preserve outside of their natural soil environment. But silk can preserve biological material, so Marelli and his team decided to try it out on these nitrogen-fixing bacteria, known as rhizobacteria.

“We came up with the idea to use them in our seed coating, and once the seed was in the soil, they would resuscitate,” he says. Preliminary tests did not turn out well, however; the bacteria weren’t preserved as well as expected.

That’s when Zvinavashe came up with the idea of adding a particular nutrient to the mix, a kind of sugar known as trehalose, which some organisms use to survive under low-water conditions. The silk, bacteria, and trehalose were all suspended in water, and the researchers simply soaked the seeds in the solution for a few seconds to produce an even coating. Then the seeds were tested at both MIT and a research facility operated by the Mohammed VI Polytechnic University in Ben Guerir, Morocco. “It showed the technique works very well,” Zvinavashe says.

The resulting plants, helped by ongoing fertilizer production by the bacteria, developed in better health than those from untreated seeds and grew successfully in soil from fields that are presently not productive for agriculture, Marelli says.

In practice, such coatings could be applied to the seeds by either dipping or spray coating, the researchers say. Either process can be done at ordinary ambient temperature and pressure. “The process is fast, easy, and it might be scalable” to allow for larger farms and unskilled growers to make use of it, Zvinavashe says. “The seeds can be simply dip-coated for a few seconds,” producing a coating that is just a few micrometers thick.

The ordinary silk they use “is water soluble, so as soon as it’s exposed to the soil, the bacteria are released,” Marelli says. But the coating nevertheless provides enough protection and nutrients to allow the seeds to germinate in soil with a salinity level that would ordinarily prevent their normal growth. “We do see plants that grow in soil where otherwise nothing grows,” he says.

These rhizobacteria normally provide fertilizer to legume crops such as common beans and chickpeas, and those have been the focus of the research so far, but it may be possible to adapt them to work with other kinds of crops as well, and that is part of the team’s ongoing research. “There is a big push to extend the use of rhizobacteria to nonlegume crops,” he says. One way to accomplish that might be to modify the DNA of the bacteria, plants, or both, he says, but that may not be necessary.

“Our approach is almost agnostic to the kind of plant and bacteria,” he says, and it may be feasible “to stabilize, encapsulate and deliver [the bacteria] to the soil, so it becomes more benign for germination” of other kinds of plants as well.

Even if limited to legume crops, the method could still make a significant difference to regions with large areas of saline soil. “Based on the excitement we saw with our collaboration in Morocco,” Marelli says, “this could be very impactful.”

As a next step, the researchers are working on developing new coatings that could not only protect seeds from saline soil, but also make them more resistant to drought, using coatings that absorb water from the soil. Meanwhile, next year they will begin test plantings out in open experimental fields in Morocco; their previous plantings have been done indoors under more controlled conditions.

The research was partly supported by the Université Mohammed VI Polytechnique-MIT Research Program, the Office of Naval Research, and the Office of the Dean for Graduate Fellowship and Research.

Researchers uncover key reaction that influences growth of potentially harmful particles in atmosphere

Credit: CC0 Public Domain

NOVEMBER 25, 2019

by University of Pennsylvania

Air-quality alerts often include the levels of particulate matter, small clumps of molecules in the lower atmosphere that can range in size from microscopic to visible. These particles can contribute to haze, clouds, and fog and also can pose a health risk, especially those at the smaller end of the spectrum. Particles known as PM10 and PM2.5, clumps no larger than 10 and 2.5 micrometers across, respectively, can be inhaled, potentially harming the heart and lungs.

This week, a group led by University of Pennsylvania scientists in collaboration with an international team report a new factor that affects particle formation in the atmosphere. Their analysis, published in the Proceedings of the National Academy of Sciences, found that alcohols such as methanol can reduce particle formation by consuming one of the process’s key ingredients, sulfur trioxide (SO3).

“Right now, we’re all concerned about PM2.5 and PM10 because these have some real air-quality and health consequences,” says Joseph S. Francisco, a corresponding author on the paper and an atmospheric chemist in Penn’s School of Arts and Sciences. “The question has been, How do you suppress the formation of these kinds of particles? This work actually gives some very important insight, for the first time, into how you can suppress particle growth.”

“We and others have been studying this process of how particles grow so we can better understand the weather and the health implications,” says Jie Zhong, a postdoctoral fellow at Penn and co-lead author of the work. “Previously people thought that alcohols were not important because they interact weakly with other molecules. But alcohols attracted our attention because they’re abundant in the atmosphere, and we found they do in fact play a significant role in reducing particle formation.”

Leading up to this work, Zhong and colleagues had been focused on various reactions involving SO3, which can arise from various types of pollution, such as burning fossil fuels. When combined with water molecules, SO3 forms sulfuric acid, a major component of acid rain but also one of the most important “seeds” for growing particles in the atmosphere.

Chemists knew that alcohols are not very “sticky,” forming only weak interactions with SO3, and had thus dismissed them as key contributors to particle formation. But when Zhong and colleagues took a closer look, using powerful computational chemistry models and molecular dynamics simulations, they realized that SO3 could indeed react with alcohols such as methanol when there is a lot of methanol in the atmosphere. The resulting product, methyl hydrogen sulfate (MHS), is sticky enough to participate in the particle-formation process.

“Because this reaction converts alcohols to more sticky compounds,” says Zhong, “initially we thought it would promote the particle formation process. But it doesn’t. That’s the most interesting part. Alcohols consume or compete for SO3 so less of it is available to form sulfuric acid.”
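The competition Zhong describes can be pictured as two parallel channels draining the same SO3 pool. The sketch below is only illustrative: the rate constants and the `partition_so3` function are hypothetical placeholders, not measured values, and real atmospheric kinetics depend on temperature, humidity, and catalysis.

```python
# Illustrative (not measured) pseudo-first-order rate constants for the
# two channels that consume SO3. Real values depend on conditions.
k_water = 1.0      # SO3 + H2O   -> H2SO4 (the particle "seed")
k_methanol = 0.6   # SO3 + CH3OH -> MHS

def partition_so3(so3_0, water, methanol):
    """Split an initial SO3 budget between sulfuric acid and MHS when
    two parallel channels compete for the same SO3 pool."""
    r_w = k_water * water
    r_m = k_methanol * methanol
    total = r_w + r_m
    return so3_0 * r_w / total, so3_0 * r_m / total

# With no methanol, all of the SO3 ends up as sulfuric acid.
h2so4, mhs = partition_so3(100.0, water=1.0, methanol=0.0)

# Adding methanol diverts part of the SO3 budget into MHS, leaving
# less sulfuric acid available to seed new particles.
h2so4_m, mhs_m = partition_so3(100.0, water=1.0, methanol=1.0)
```

With methanol present, the same SO3 budget splits between the two products, which is the suppression effect the study reports.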

Even though the reaction between methanol and SO3 requires more energy than the competing reaction with water, the researchers found that MHS itself, in addition to sulfuric acid and water, could catalyze the methanol reaction.

“That was an interesting part for us, to find that the MHS can catalyze its own formation,” says Francisco. “And what was also unique about this work and what caught us by surprise was the impact of the effect.”

Francisco and Zhong note that in dry and polluted conditions, when alcohols and SO3 are abundant in the atmosphere but water molecules are less available, this reaction may play an especially significant role in driving down the rate of particle formation. Yet they also acknowledge that MHS, the product of the methanol-SO3 reaction, has been linked to negative health impacts.

“It’s a balance,” says Zhong. “On the one hand this reaction reduces new particle formation, but on the other hand it produces another product that is not very healthy.”

What the new insight into particle formation does offer, however, is information that can power more accurate models for air pollution and even weather and climate, the researchers say. “These models haven’t been very accurate, and now we know they were not incorporating this mechanism that wasn’t recognized previously,” Zhong says.

As a next step, the researchers are investigating how colder conditions, involving snow and ice, affect new particle formation. “That’s very appropriate because winter is coming,” Francisco says.

Producing better guides for medical-image analysis

Model quickly generates brain scan templates that represent a given patient population.

With their model, researchers were able to generate on-demand brain scan templates of various ages (pictured) that can be used in medical-image analysis to guide disease diagnosis.
Image courtesy of the researchers

Rob Matheson | MIT News Office
November 26, 2019

MIT researchers have devised a method that accelerates the process for creating and customizing templates used in medical-image analysis, to guide disease diagnosis.  

One use of medical image analysis is to crunch datasets of patients’ medical images and capture structural relationships that may indicate the progression of diseases. In many cases, analysis requires use of a common image template, called an “atlas,” that’s an average representation of a given patient population. Atlases serve as a reference for comparison, for example to identify clinically significant changes in brain structures over time.

Building a template is a time-consuming, laborious process, often taking days or weeks to generate, especially when using 3D brain scans. To save time, researchers often download publicly available atlases previously generated by research groups. But those don’t fully capture the diversity of individual datasets or specific subpopulations, such as those with new diseases or from young children. Ultimately, the atlas can’t be smoothly mapped onto outlier images, producing poor results.

In a paper being presented at the Conference on Neural Information Processing Systems in December, the researchers describe an automated machine-learning model that generates “conditional” atlases based on specific patient attributes, such as age, sex, and disease. By leveraging shared information from across an entire dataset, the model can also synthesize atlases from patient subpopulations that may be completely missing in the dataset.

“The world needs more atlases,” says first author Adrian Dalca, a former postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and now a faculty member in radiology at Harvard Medical School and Massachusetts General Hospital. “Atlases are central to many medical image analyses. This method can build a lot more of them and build conditional ones as well.”

Joining Dalca on the paper are Marianne Rakic, a visiting researcher in CSAIL; John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering and head of CSAIL’s Data Driven Inference Group; and Mert R. Sabuncu of Cornell University.

Simultaneous alignment and atlases

Traditional atlas-building methods run lengthy, iterative optimization processes on all images in a dataset. They align, say, all 3D brain scans to an initial (often blurry) atlas, and compute a new average image from the aligned scans. They repeat this iterative process for all images. This computes a final atlas that minimizes the extent to which all scans in the dataset must deform to match the atlas. Doing this process for patient subpopulations can be complex and imprecise if there isn’t enough data available.
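That traditional loop can be sketched in a few lines. This is only a toy: the `align` step below is a trivial stand-in for real deformable registration (which needs a dedicated toolkit), and the 1D arrays stand in for 3D volumes.

```python
import numpy as np

def align(image, atlas):
    # Stand-in for a real registration step (deformable image
    # registration in practice); here we only shift each image's mean
    # toward the atlas mean so the sketch stays runnable.
    return image + 0.5 * (atlas.mean() - image.mean())

def build_atlas(images, n_iters=10):
    """Classic iterative atlas construction: repeatedly align all
    scans to the current atlas, then re-average the aligned scans."""
    atlas = np.mean(images, axis=0)          # initial (often blurry) average
    for _ in range(n_iters):
        aligned = [align(img, atlas) for img in images]
        atlas = np.mean(aligned, axis=0)     # new average image
    return atlas

# Toy "scans": 1D arrays standing in for 3D brain volumes.
rng = np.random.default_rng(0)
scans = [rng.normal(loc=i, size=32) for i in range(3)]
atlas = build_atlas(scans)
```

Each pass through the loop is expensive on real 3D data, which is why the full procedure can take days or weeks.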

Mapping an atlas to a new scan generates a “deformation field,” which characterizes the differences between the two images. This captures structural variations, which can then be further analyzed. In brain scans, for instance, structural variations can be due to tissue degeneration at different stages of a disease.

In previous work, Dalca and other researchers developed a neural network to rapidly align these images. In part, that helped speed up the traditional atlas-building process. “We said, ‘Why can’t we build conditional atlases while learning to align images at the same time?’” Dalca says.

To do so, the researchers combined two neural networks: One network automatically learns an atlas at each iteration, and another — adapted from the previous research — simultaneously aligns that atlas to images in a dataset.

In training, the joint network is fed a random image from a dataset encoded with desired patient attributes. From that, it estimates an attribute-conditional atlas. The second network aligns the estimated atlas with the input image, and generates a deformation field.

The deformation field generated for each image pair feeds into a “loss function,” the component of a machine-learning model that training seeks to minimize. In this case, the loss penalizes the distance between the learned atlas and each image. The network continuously refines the atlas so it aligns smoothly to any given image across the dataset.
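A minimal sketch of what such a loss might balance, assuming (as is common in registration-based models) an image-similarity term plus a smoothness penalty on the deformation field; the function names and the weighting `lam` are illustrative, not the paper's actual objective:

```python
import numpy as np

def similarity_loss(atlas, image):
    # Mean squared intensity difference between atlas and (aligned) image.
    return np.mean((atlas - image) ** 2)

def smoothness_penalty(deformation):
    # Penalize large spatial gradients so each warp stays smooth and
    # small, echoing the goal of minimizing how far scans must deform.
    return np.mean(np.diff(deformation) ** 2)

def total_loss(atlas, images, deformations, lam=0.1):
    """Hypothetical joint objective: keep the atlas close to every
    image while keeping every deformation field smooth."""
    sim = sum(similarity_loss(atlas, img) for img in images)
    reg = sum(smoothness_penalty(d) for d in deformations)
    return sim / len(images) + lam * reg / len(deformations)
```

In the actual model both networks are trained jointly by gradient descent on a loss of this general shape; the sketch only names the two terms being traded off.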

On-demand atlases

The end result is a function that’s learned how specific attributes, such as age, correlate to structural variations across all images in a dataset. By plugging new patient attributes into the function, it leverages all learned information across the dataset to synthesize an on-demand atlas — even if that attribute data is missing or scarce in the dataset.

Say someone wants a brain scan atlas for a 45-year-old female patient from a dataset with information from patients aged 30 to 90, but with little data for women aged 40 to 50. The function will analyze patterns of how the brain changes between the ages of 30 and 90 and incorporate what little data exists for that age and sex. Then, it will produce the most representative atlas for females of the desired age. In their paper, the researchers verified the function by generating conditional templates for various age groups from 15 to 90.
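The on-demand query pattern can be illustrated with a toy stand-in for the learned function, here just interpolating between a few precomputed “atlases” indexed by age. The real model is a neural network conditioned on attributes; everything below (the arrays, `conditional_atlas`) is a hypothetical illustration of the interface, not the method itself.

```python
import numpy as np

# Toy "learned" atlases at a few known ages (1D arrays standing in for
# 3D volumes, with intensity varying by age for visibility).
known_ages = np.array([30.0, 60.0, 90.0])
known_atlases = np.stack([np.full(16, a / 10.0) for a in known_ages])

def conditional_atlas(age):
    """Synthesize an atlas for an arbitrary age by blending the two
    nearest known atlases, weighted by proximity in age."""
    w = np.interp(age, known_ages, np.arange(len(known_ages)))
    lo, hi = int(np.floor(w)), int(np.ceil(w))
    frac = w - lo
    return (1 - frac) * known_atlases[lo] + frac * known_atlases[hi]

# Query an age absent from the "training" set.
atlas_45 = conditional_atlas(45.0)
```

The learned model generalizes far beyond linear interpolation, but the query step is the same: supply attributes, get an atlas back.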

The researchers hope clinicians can use the model to build their own atlases quickly from their own, potentially small datasets. Dalca is now collaborating with researchers at Massachusetts General Hospital, for instance, to harness a dataset of pediatric brain scans to generate conditional atlases for younger children, which are hard to come by.

A big dream is to build one function that can generate conditional atlases for any subpopulation, spanning birth to 90 years old. Researchers could log into a webpage, input an age, sex, diseases, and other parameters, and get an on-demand conditional atlas. “That would be wonderful, because everyone can refer to this one function as a single universal atlas reference,” Dalca says.

Another potential application beyond medical imaging is athletic training. Someone could train the function to generate an atlas for, say, a tennis player’s serve motion. The player could then compare new serves against the atlas to see exactly where they kept proper form or where things went wrong.

“If you watch sports, it’s usually commenters saying they noticed if someone’s form was off from one time compared to another,” Dalca says. “But you can imagine that it could be much more quantitative than that.”

Study paves way to better understanding, treatment of arthritis

Whole-joint images from mice in arthritis imaging study. Credit: Brian Bay, Oregon State University

NOVEMBER 25, 2019

by Steve Lundeberg, Oregon State University

Oregon State University research has provided the first complete, cellular-level look at what’s going on in joints afflicted by osteoarthritis, a debilitating and costly condition that affects nearly one-quarter of adults in the United States.

The study, published today in Nature Biomedical Engineering, opens the door to better understanding how interventions such as diet, drugs and exercise affect a joint’s cells, which is important because cells do the work of developing, maintaining and repairing tissue.

Brian Bay of the OSU College of Engineering and scientists from the Royal Veterinary College in London and University College London developed a sophisticated scanning technique to view the “loaded” joints of arthritic and healthy mice—loaded means under strain, such as an ankle, knee or elbow would be while running, walking, throwing, etc.

“Imaging techniques for quantifying changes in arthritic joints have been constrained by a number of factors,” said Bay, associate professor of mechanical engineering. “Restrictions on sample size and the length of scanning time are two of them, and the level of radiation used in some of the techniques ultimately damages or destroys the samples being scanned. Nanoscale resolution of intact, loaded joints had been considered unattainable.”

Bay and a collaboration that also included scientists from 3Dmagination Ltd (UK), Edinburgh Napier University, the University of Manchester, the Research Complex at Harwell and the Diamond Light Source developed a way to conduct nanoscale imaging of complete bones and whole joints under precisely controlled loads.

To do that, they had to enhance resolution without compromising the field of view; reduce total radiation exposure to preserve tissue mechanics; and prevent movement during scanning.

“With low-dose pink-beam synchrotron X-ray tomography, and mechanical loading with nanometric precision, we could simultaneously measure the structural organization and functional response of the tissues,” Bay said. “That means we can look at joints from the tissue layers down to the cellular level, with a large field of view and high resolution, without having to cut out samples.”

Two features of the study make it particularly helpful in advancing the study of osteoarthritis, he said.

“Using intact bones and joints means all of the functional aspects of the complex tissue layering are preserved,” Bay said. “And the small size of the mouse bones leads to imaging that is on the scale of the cells that develop, maintain and repair the tissues.”

Osteoarthritis, the degeneration of joints, affects more than 50 million American adults, according to the Centers for Disease Control and Prevention. Women are affected at nearly a 25% rate, while 18% of men suffer from osteoarthritis.

As baby boomers continue to swell the ranks of the U.S. senior population, the prevalence of arthritis will likely increase in the coming decades, according to the CDC.

The CDC forecasts that by 2040 there will be 78 million arthritis patients, more than one-quarter of the projected total adult population; two-thirds of those with arthritis are expected to be women. Also by 2040, more than 34 million adults in the U.S. will have activity limitations due to arthritis.

“Osteoarthritis will affect most of us during our lifetimes, many to the point where a knee joint or hip joint requires replacement with a costly and difficult surgery after enduring years of disability and pain,” Bay said. “Damage to the cartilage surfaces is associated with failure of the joint, but that damage only becomes obvious very late in the disease process, and cartilage is just the outermost layer in a complex assembly of tissues that lie deep below the surface.”

Those deep tissue layers are where early changes occur as osteoarthritis develops, he said, but their basic biomechanical function and the significance of the changes are not well understood.

“That has greatly hampered knowing the basic disease process and the evaluation of potential therapies to interrupt the long, uncomfortable path to joint replacement,” Bay said.

Bay first demonstrated the tissue strain measurement technique 20 years ago, and it is growing in prominence as imaging has improved. Related work is being conducted for intervertebral discs and other tissues with high rates of degeneration.

“This study for the first time connects measures of tissue mechanics and the arrangement of the tissues themselves at the cellular level,” Bay said. “This is a significant advance as methods for interrupting the osteoarthritis process will likely involve controlling cellular activity. It’s a breakthrough in linking the clinical problem of joint failure with the most basic biological mechanisms involved in maintaining joint health.”

NASA rockets study why tech goes haywire near poles

Animated illustration showing the solar wind streaming around Earth’s magnetosphere. Near the North and South Poles, Earth’s magnetic field forms funnels that allow the solar wind access to the upper atmosphere. Credit: NASA/CILab/Josh Masters

NOVEMBER 25, 2019

by Miles Hatfield, NASA’s Goddard Space Flight Center

Each second, 1.5 million tons of solar material shoot off of the Sun and out into space, traveling at hundreds of miles per second. Known as the solar wind, this incessant stream of plasma, or electrified gas, has pelted Earth for more than 4 billion years. Thanks to our planet’s magnetic field, it’s mostly deflected away. But head far enough north, and you’ll find the exception.

“Most of Earth is shielded from the solar wind,” said Mark Conde, space physicist at the University of Alaska, Fairbanks. “But right near the poles, in the midday sector, our magnetic field becomes a funnel where the solar wind can get all the way down to the atmosphere.”

These funnels, known as the polar cusps, can cause some trouble. The influx of solar wind disturbs the atmosphere, disrupting satellites and radio and GPS signals. Beginning Nov. 25, 2019, three new NASA-supported missions will launch into the northern polar cusp, aiming to improve the technology affected by it.

Shaky Satellites

The three missions are all part of the Grand Challenge Initiative – Cusp, a series of nine sounding rocket missions exploring the polar cusp. Sounding rockets are space vehicles that make 15-minute flights into space before falling back to Earth. Standing up to 65 feet tall and flying anywhere from 20 to 800 miles high, sounding rockets can be aimed and fired at moving targets with only a few minutes’ notice. This flexibility and precision make them ideal for capturing the strange phenomena inside the cusp.

Two of the three upcoming missions will study the same anomaly: a patch of atmosphere inside the cusp notably denser than its surroundings. It was discovered in 2004, when scientists noticed that part of the atmosphere inside the cusp was about 1.5 times heavier than expected.

Video from CREX’s last flight, showing vapor tracers following high-altitude polar winds. Both CREX-2 and CHI missions will use a similar methodology to track winds thought to support the density enhancement inside the cusp. Credit: NASA/CREX/Mark Conde

“A little extra mass 200 miles up might seem like no big deal,” said Conde, the principal investigator for the Cusp Region Experiment-2, or CREX-2, mission. “But the pressure change associated with this increased mass density, if it occurred at ground level, would cause a continuous hurricane stronger than anything seen in meteorological records.”

This additional mass creates problems for spacecraft flying through it, like the many satellites that follow a polar orbit. Passing through the dense patch can shake up their trajectories, making close encounters with other spacecraft or orbital debris riskier than they would otherwise be.

“A small change of a few hundred meters can make the difference between having to do an evasive maneuver, or not,” Conde said. 

Both CREX-2 and the Cusp Heating Investigation, or CHI, led by Miguel Larsen of Clemson University in South Carolina, will study this heavy patch of atmosphere to better predict its effects on satellites passing through. “Each mission has its own strengths, but ideally, they’ll be launched together,” Larsen said.

Corrupted Communication

It’s not just spacecraft that behave unpredictably near the cusp – so do the GPS and communications signals they transmit. The culprit, in many cases, is atmospheric turbulence. 

Illustration of the ICI-5 rocket deploying its 12 daughter payloads. Once in space, these additional sensors will help scientists distinguish turbulence from waves, both of which could be the cause of corrupted communication signals. Credit: Andøya Space Center/Trond Abrahamsen

“Turbulence is one of the really hard remaining questions in classical physics,” said Jøran Moen, space physicist at the University of Oslo. “We don’t really know what it is because we have no direct measurements yet.” 

Moen, who is leading the Investigation of Cusp Irregularities-5 or ICI-5 mission, likens turbulence to the swirling eddies that form when rivers rush around rocks. When the atmosphere grows turbulent, GPS and communication signals passing through it can become garbled, sending unreliable signals to the planes and ships that depend on them. 

Moen hopes to make the first measurements to distinguish true turbulence from electric waves that can also disrupt communication signals. Though both processes have similar effects on GPS, figuring out which phenomenon drives these disturbances is critical to predicting them. 

“The motivation is to increase the integrity of the GPS signals,” Moen said. “But we need to know the driver to forecast when and where these disturbances will occur.”

Waiting on Weather

The extreme North provides a pristine locale for examining physics much harder to study elsewhere. The tiny Arctic town on Svalbard, the Norwegian archipelago from which the ICI-5 and CHI rockets will launch, has a small population and strict restrictions on the use of radio or Wi-Fi, creating an ideal laboratory environment for science.

Earth’s magnetosphere, showing the northern and southern polar cusps. Credit: Andøya Space Center/Trond Abrahamsen

“Turbulence occurs in many places, but it’s better to go to this laboratory that is not contaminated by other processes,” Moen said. “The ‘cusp laboratory’—that’s Svalbard.”

Ideally, the CHI rocket would launch from Svalbard at nearly the same time that CREX-2 launches from Andenes, Norway. The ICI-5 rocket, on a second launcher in Svalbard, would fly soon after. But the timing can be tricky: Andenes is 650 miles south of Svalbard, and can experience different weather. “It’s not a requirement, but launching together would certainly multiply the scientific returns of the missions,” Conde said.  

Keeping a constant eye on the weather, waiting for the right moment to launch, is a key part of launching rockets — even part of the draw. 

“It really is an all-consuming thing,” Conde said. “All you do when you’re out there is watch conditions and talk about the rocket and decide what you would do.”

Heating techniques could improve treatment of macular degeneration

A comparison of experimental results between heater off and heater on. Credit: Gharib Research Group

NOVEMBER 24, 2019

by American Physical Society

Age-related macular degeneration is the primary cause of central vision loss and results in the center of the visual field being blurred or fully blacked out. Though treatable, some methods can be ineffective or cause unwanted side effects.

Jinglin Huang, a graduate student in medical engineering at Caltech, suggests inefficient fluid mixing of the injected medicine and the gel within the eye may be to blame. Huang will be discussing the effects of a thermally induced fluid mixing approach for AMD therapy during a session at the American Physical Society’s Division of Fluid Dynamics 72nd Annual Meeting, which will take place on Nov. 23-26, 2019, at the Washington State Convention Center in Seattle.

The talk, “Thermal Effects on Fluid Mixing in the Eye,” will be presented as part of the session on biological fluid dynamics: microfluidics.

AMD usually starts as “dry” AMD, a disorder in which the macula—the central part of the retina, responsible for sending information about focused light to the brain to create a detailed picture—thins with age. Dry AMD is very common and is not treatable but may eventually evolve into “wet” AMD, which is more likely to result in vision loss. In wet AMD, abnormal blood vessels grow on the retina, leaking fluids under the macula. In the case of wet AMD, injections of medications called anti-vascular endothelial growth factor agents into the eye can help manage the disorder.

The reason, Huang said, is that the medication does not mix efficiently with the gel-like fluid that fills the eye—the vitreous. Applying heat to the mixture can solve this problem.

“Because thermally induced mixing in the vitreous chamber can promote the formation of a circulation flow structure, this can potentially serve the drug delivery process,” Huang said. “Since the half-life of the drug is limited, this thermally induced mixing approach ensures that more drug of high potency can reach the target tissue.”

To apply the thermally induced mixing technique, no changes in the injection procedure are needed. An additional heating step after the injection is all that is required.

“It can potentially reduce the amount of drug injected into the vitreous,” said Huang. “It is definitely easy to implement.”

Huang and her colleagues hope this work will inspire eye doctors to develop better treatment techniques and improve patient experiences.