Coated seeds may enable agriculture on marginal lands

A specialized silk covering could protect seeds from salinity while also providing fertilizer-generating microbes.

Researchers have used silk derived from ordinary silkworm cocoons, like those seen here, mixed with bacteria and nutrients, to make a coating for seeds that can help them germinate and grow even in salty soil.
Image courtesy of the researchers

David L. Chandler | MIT News Office
November 25, 2019

Providing seeds with a protective coating that also supplies essential nutrients to the germinating plant could make it possible to grow crops in otherwise unproductive soils, according to new research at MIT.

A team of engineers has coated seeds with silk that has been treated with a kind of bacteria that naturally produce a nitrogen fertilizer, to help the germinating plants develop. Tests have shown that these seeds can grow successfully in soils that are too salty to allow untreated seeds to develop normally. The researchers hope this process, which can be applied inexpensively and without the need for specialized equipment, could open up areas of land to farming that are now considered unsuitable for agriculture.

The findings are being published this week in the journal PNAS, in a paper by graduate students Augustine Zvinavashe ’16 and Hui Sun, postdoc Eugen Lim, and professor of civil and environmental engineering Benedetto Marelli.

The work grew out of Marelli’s previous research on using silk coatings as a way to extend the shelf life of seeds used as food crops. “When I was doing some research on that, I stumbled on biofertilizers that can be used to increase the amount of nutrients in the soil,” he says. These fertilizers use microbes that live symbiotically with certain plants and convert nitrogen from the air into a form that can be readily taken up by the plants.

Not only does this provide a natural fertilizer to the plant crops, but it avoids problems associated with other fertilizing approaches, he says: “One of the big problems with nitrogen fertilizers is they have a big environmental impact, because they are very energetically demanding to produce.” These artificial fertilizers may also have a negative impact on soil quality, according to Marelli.

Although these nitrogen-fixing bacteria occur naturally in soils around the world, with different local varieties found in different regions, they are very hard to preserve outside of their natural soil environment. But silk can preserve biological material, so Marelli and his team decided to try it out on these nitrogen-fixing bacteria, known as rhizobacteria.

“We came up with the idea to use them in our seed coating, and once the seed was in the soil, they would resuscitate,” he says. Preliminary tests did not turn out well, however; the bacteria weren’t preserved as well as expected.

That’s when Zvinavashe came up with the idea of adding a particular nutrient to the mix, a kind of sugar known as trehalose, which some organisms use to survive under low-water conditions. The silk, bacteria, and trehalose were all suspended in water, and the researchers simply soaked the seeds in the solution for a few seconds to produce an even coating. Then the seeds were tested at both MIT and a research facility operated by the Mohammed VI Polytechnic University in Ben Guerir, Morocco. “It showed the technique works very well,” Zvinavashe says.

The resulting plants, helped by ongoing fertilizer production by the bacteria, developed in better health than those from untreated seeds and grew successfully in soil from fields that are presently not productive for agriculture, Marelli says.

In practice, such coatings could be applied to the seeds by either dipping or spray coating, the researchers say. Either process can be done at ordinary ambient temperature and pressure. “The process is fast, easy, and it might be scalable” to allow for larger farms and unskilled growers to make use of it, Zvinavashe says. “The seeds can be simply dip-coated for a few seconds,” producing a coating that is just a few micrometers thick.

The ordinary silk they use “is water soluble, so as soon as it’s exposed to the soil, the bacteria are released,” Marelli says. But the coating nevertheless provides enough protection and nutrients to allow the seeds to germinate in soil with a salinity level that would ordinarily prevent their normal growth. “We do see plants that grow in soil where otherwise nothing grows,” he says.

These rhizobacteria normally provide fertilizer to legume crops such as common beans and chickpeas, and those have been the focus of the research so far, but it may be possible to adapt them to work with other kinds of crops as well, and that is part of the team’s ongoing research. “There is a big push to extend the use of rhizobacteria to nonlegume crops,” he says. One way to accomplish that might be to modify the DNA of the bacteria, plants, or both, he says, but that may not be necessary.

“Our approach is almost agnostic to the kind of plant and bacteria,” he says, and it may be feasible “to stabilize, encapsulate and deliver [the bacteria] to the soil, so it becomes more benign for germination” of other kinds of plants as well.

Even if limited to legume crops, the method could still make a significant difference to regions with large areas of saline soil. “Based on the excitement we saw with our collaboration in Morocco,” Marelli says, “this could be very impactful.”

As a next step, the researchers are working on developing new coatings that could not only protect seeds from saline soil, but also make them more resistant to drought, using coatings that absorb water from the soil. Meanwhile, next year they will begin test plantings out in open experimental fields in Morocco; their previous plantings have been done indoors under more controlled conditions.

The research was partly supported by the Université Mohammed VI Polytechnique-MIT Research Program, the Office of Naval Research, and the Office of the Dean for Graduate Fellowship and Research.

Researchers uncover key reaction that influences growth of potentially harmful particles in atmosphere

Credit: CC0 Public Domain

NOVEMBER 25, 2019

by University of Pennsylvania

Air-quality alerts often include the levels of particulate matter, small clumps of molecules in the lower atmosphere that can range in size from microscopic to visible. These particles can contribute to haze, clouds, and fog and also can pose a health risk, especially those at the smaller end of the spectrum. Particles known as PM10 and PM2.5, referring to clumps no larger than 10 and 2.5 micrometers across, respectively, can be inhaled, potentially harming the heart and lungs.

This week, a group led by University of Pennsylvania scientists, working with an international team, reports a new factor that affects particle formation in the atmosphere. Their analysis, published in the Proceedings of the National Academy of Sciences, found that alcohols such as methanol can reduce particle formation by consuming one of the process’s key ingredients, sulfur trioxide (SO3).

“Right now, we’re all concerned about PM2.5 and PM10 because these have some real air-quality and health consequences,” says Joseph S. Francisco, a corresponding author on the paper and an atmospheric chemist in Penn’s School of Arts and Sciences. “The question has been, How do you suppress the formation of these kinds of particles? This work actually gives some very important insight, for the first time, into how you can suppress particle growth.”

“We and others have been studying this process of how particles grow so we can better understand the weather and the health implications,” says Jie Zhong, a postdoctoral fellow at Penn and co-lead author of the work. “Previously people thought that alcohols were not important because they interact weakly with other molecules. But alcohols attracted our attention because they’re abundant in the atmosphere, and we found they do in fact play a significant role in reducing particle formation.”

Leading up to this work, Zhong and colleagues had been focused on reactions involving SO3, which can arise from various types of pollution, such as the burning of fossil fuels. When combined with water molecules, SO3 forms sulfuric acid, a major component of acid rain but also one of the most important “seeds” for growing particles in the atmosphere.

Chemists knew that alcohols are not very “sticky,” forming only weak interactions with SO3, and had thus dismissed them as key contributors to particle formation. But when Zhong and colleagues took a closer look, using powerful computational chemistry models and molecular dynamics simulations, they realized that SO3 could indeed react with alcohols such as methanol when the alcohol is abundant in the atmosphere. The resulting product, methyl hydrogen sulfate (MHS), is sticky enough to participate in the particle-formation process.

“Because this reaction converts alcohols to more sticky compounds,” says Zhong, “initially we thought it would promote the particle formation process. But it doesn’t. That’s the most interesting part. Alcohols consume or compete for SO3 so less of it is available to form sulfuric acid.”
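In rough terms, the competition Zhong describes is between two channels for SO3. The schematic summary below is a simplification of the chemistry, not the paper’s full catalyzed mechanism:

    SO3 + H2O   → H2SO4     (sulfuric acid, the key particle “seed”)
    SO3 + CH3OH → CH3OSO3H  (methyl hydrogen sulfate, MHS)

The more SO3 the methanol channel consumes, the less remains to form sulfuric acid and seed new particles.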

Even though the reaction between methanol and SO3 requires more energy, the researchers found that MHS itself, in addition to sulfuric acid and water, could catalyze the methanol reaction.

“That was an interesting part for us, to find that the MHS can catalyze its own formation,” says Francisco. “And what was also unique about this work and what caught us by surprise was the impact of the effect.”

Francisco and Zhong note that in dry and polluted conditions, when alcohols and SO3 are abundant in the atmosphere but water molecules are less available, this reaction may play an especially significant role in driving down the rate of particle formation. Yet they also acknowledge that MHS, the product of the methanol-SO3 reaction, has been linked to negative health impacts of its own.

“It’s a balance,” says Zhong. “On the one hand this reaction reduces new particle formation, but on the other hand it produces another product that is not very healthy.”

What the new insight into particle formation does offer, however, is information that can power more accurate models for air pollution and even weather and climate, the researchers say. “These models haven’t been very accurate, and now we know they were not incorporating this mechanism that wasn’t recognized previously,” Zhong says.

As a next step, the researchers are investigating how colder conditions, involving snow and ice, affect new particle formation. “That’s very appropriate because winter is coming,” Francisco says.

Producing better guides for medical-image analysis

Model quickly generates brain scan templates that represent a given patient population.

With their model, researchers were able to generate on-demand brain scan templates of various ages (pictured) that can be used in medical-image analysis to guide disease diagnosis.
Image courtesy of the researchers

Rob Matheson | MIT News Office
November 26, 2019

MIT researchers have devised a method that accelerates the process for creating and customizing templates used in medical-image analysis, to guide disease diagnosis.  

One use of medical image analysis is to crunch datasets of patients’ medical images and capture structural relationships that may indicate the progression of diseases. In many cases, analysis requires use of a common image template, called an “atlas,” that’s an average representation of a given patient population. Atlases serve as a reference for comparison, for example to identify clinically significant changes in brain structures over time.

Building a template is a time-consuming, laborious process, often taking days or weeks to generate, especially when using 3D brain scans. To save time, researchers often download publicly available atlases previously generated by research groups. But those don’t fully capture the diversity of individual datasets or specific subpopulations, such as those with new diseases or from young children. Ultimately, the atlas can’t be smoothly mapped onto outlier images, producing poor results.

In a paper being presented at the Conference on Neural Information Processing Systems in December, the researchers describe an automated machine-learning model that generates “conditional” atlases based on specific patient attributes, such as age, sex, and disease. By leveraging shared information from across an entire dataset, the model can also synthesize atlases from patient subpopulations that may be completely missing in the dataset.

“The world needs more atlases,” says first author Adrian Dalca, a former postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and now a faculty member in radiology at Harvard Medical School and Massachusetts General Hospital. “Atlases are central to many medical image analyses. This method can build a lot more of them and build conditional ones as well.”

Joining Dalca on the paper are Marianne Rakic, a visiting researcher in CSAIL; John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering and head of CSAIL’s Data Driven Inference Group; and Mert R. Sabuncu of Cornell University.

Simultaneous alignment and atlases

Traditional atlas-building methods run lengthy, iterative optimization processes on all images in a dataset. They align, say, all 3D brain scans to an initial (often blurry) atlas, compute a new average image from the aligned scans, and then repeat the alignment and averaging with the updated atlas. The result is a final atlas that minimizes the extent to which all scans in the dataset must deform to match it. Doing this for patient subpopulations can be complex and imprecise if there isn’t enough data available.
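As a rough illustration, the traditional approach amounts to a loop of registration and averaging. Here is a minimal sketch, assuming a generic register(moving, fixed) routine that returns one image warped onto another; any standard registration tool could stand in for it, and none of this is the authors’ code:

    import numpy as np

    def build_atlas(images, register, n_iters=10):
        # Classical atlas building, schematically: start from a blurry average,
        # then repeatedly align every scan to the current atlas and re-average.
        # `register(moving, fixed)` is an assumed helper, not part of the paper.
        atlas = np.mean(images, axis=0)
        for _ in range(n_iters):
            aligned = [register(img, atlas) for img in images]
            atlas = np.mean(aligned, axis=0)
        return atlas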

Mapping an atlas to a new scan generates a “deformation field,” which characterizes the differences between the two images. This captures structural variations, which can then be further analyzed. In brain scans, for instance, structural variations can be due to tissue degeneration at different stages of a disease.

In previous work, Dalca and other researchers developed a neural network to rapidly align these images. In part, that helped speed up the traditional atlas-building process. “We said, ‘Why can’t we build conditional atlases while learning to align images at the same time?’” Dalca says.

To do so, the researchers combined two neural networks: One network automatically learns an atlas at each iteration, and another — adapted from the previous research — simultaneously aligns that atlas to images in a dataset.

In training, the joint network is fed a random image from the dataset, paired with the desired patient attributes. From that, it estimates an attribute-conditional atlas. The second network aligns the estimated atlas with the input image and generates a deformation field.

The deformation field generated for each image pair feeds into a “loss function,” the component of machine-learning models that quantifies how far the model is from a desired result. Here, the loss penalizes the distance between the learned atlas and each image, so the network continuously refines the atlas until it aligns smoothly to any given image across the dataset.
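What one training step might look like, as a hedged sketch: atlas_net and reg_net below are hypothetical PyTorch modules standing in for the two networks described above, with reg_net assumed to return both the warped atlas and the deformation field, and the smoothness penalty is likewise a stand-in for the paper’s regularizer, not the authors’ implementation.

    import torch

    def smoothness_penalty(field):
        # Penalize spatial gradients of the deformation field (finite differences),
        # a common way to keep deformations small and smooth.
        dx = field[:, :, 1:, :] - field[:, :, :-1, :]
        dy = field[:, :, :, 1:] - field[:, :, :, :-1]
        return (dx ** 2).mean() + (dy ** 2).mean()

    def training_step(atlas_net, reg_net, image, attributes, optimizer):
        # One schematic step of the joint training loop.
        optimizer.zero_grad()
        atlas = atlas_net(attributes)                      # attribute-conditional atlas
        warped_atlas, deformation = reg_net(atlas, image)  # align the atlas to the input image
        loss = ((warped_atlas - image) ** 2).mean() + smoothness_penalty(deformation)
        loss.backward()
        optimizer.step()
        return loss.item()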

On-demand atlases

The end result is a function that’s learned how specific attributes, such as age, correlate to structural variations across all images in a dataset. When new patient attributes are plugged into the function, it leverages all the information learned across the dataset to synthesize an on-demand atlas, even if data for those attributes is missing or scarce in the dataset.

Say someone wants a brain scan atlas for a 45-year-old female patient from a dataset with information from patients aged 30 to 90, but with little data for women aged 40 to 50. The function will analyze patterns of how the brain changes between the ages of 30 and 90 and incorporate what little data exists for that age and sex. Then, it will produce the most representative atlas for females of the desired age. In their paper, the researchers verified the function by generating conditional templates for various age groups from 15 to 90.
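Continuing the sketch above, querying the trained generator would be a single forward pass; the numeric [age, sex] encoding here is a hypothetical choice, not something specified in the paper:

    # Hypothetical query: an atlas for a 45-year-old female, using the assumed
    # atlas_net from the sketch above and a made-up [age, sex] encoding.
    attributes = torch.tensor([[45.0, 1.0]])
    atlas_45_female = atlas_net(attributes)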

The researchers hope clinicians can use the model to build their own atlases quickly from their own, potentially small datasets. Dalca is now collaborating with researchers at Massachusetts General Hospital, for instance, to harness a dataset of pediatric brain scans to generate conditional atlases for younger children, which are hard to come by.

A big dream is to build one function that can generate conditional atlases for any subpopulation, spanning birth to 90 years old. Researchers could log into a webpage, input an age, sex, diseases, and other parameters, and get an on-demand conditional atlas. “That would be wonderful, because everyone can refer to this one function as a single universal atlas reference,” Dalca says.

Another potential application beyond medical imaging is athletic training. Someone could train the function to generate an atlas for, say, a tennis player’s serve motion. The player could then compare new serves against the atlas to see exactly where they kept proper form or where things went wrong.

“If you watch sports, it’s usually commenters saying they noticed if someone’s form was off from one time compared to another,” Dalca says. “But you can imagine that it could be much more quantitative than that.”

Study paves way to better understanding, treatment of arthritis

Whole-joint images from mice in arthritis imaging study. Credit: Brian Bay, Oregon State University

NOVEMBER 25, 2019

by Steve Lundeberg, Oregon State University

Oregon State University research has provided the first complete, cellular-level look at what’s going on in joints afflicted by osteoarthritis, a debilitating and costly condition that affects nearly one-quarter of adults in the United States.

The study, published today in Nature Biomedical Engineering, opens the door to better understanding how interventions such as diet, drugs and exercise affect a joint’s cells, which is important because cells do the work of developing, maintaining and repairing tissue.

Brian Bay of the OSU College of Engineering, with scientists from the Royal Veterinary College in London and University College London, developed a sophisticated scanning technique to view the “loaded” joints of arthritic and healthy mice. Loaded means under strain, as an ankle, knee or elbow would be while running, walking or throwing.

“Imaging techniques for quantifying changes in arthritic joints have been constrained by a number of factors,” said Bay, associate professor of mechanical engineering. “Restrictions on sample size and the length of scanning time are two of them, and the level of radiation used in some of the techniques ultimately damages or destroys the samples being scanned. Nanoscale resolution of intact, loaded joints had been considered unattainable.”

Bay and a collaboration that also included scientists from 3Dmagination Ltd (UK), Edinburgh Napier University, the University of Manchester, the Research Complex at Harwell and the Diamond Light Source developed a way to conduct nanoscale imaging of complete bones and whole joints under precisely controlled loads.

To do that, they had to enhance resolution without compromising the field of view; reduce total radiation exposure to preserve tissue mechanics; and prevent movement during scanning.

“With low-dose pink-beam synchrotron X-ray tomography, and mechanical loading with nanometric precision, we could simultaneously measure the structural organization and functional response of the tissues,” Bay said. “That means we can look at joints from the tissue layers down to the cellular level, with a large field of view and high resolution, without having to cut out samples.”

Two features of the study make it particularly helpful in advancing the study of osteoarthritis, he said.

“Using intact bones and joints means all of the functional aspects of the complex tissue layering are preserved,” Bay said. “And the small size of the mouse bones leads to imaging that is on the scale of the cells that develop, maintain and repair the tissues.”

Osteoarthritis, the degeneration of joints, affects more than 50 million American adults, according to the Centers for Disease Control and Prevention. Women are affected at nearly a 25% rate, while 18% of men suffer from osteoarthritis.

As baby boomers continue to swell the ranks of the U.S. senior population, the prevalence of arthritis will likely increase in the coming decades, according to the CDC.

The CDC forecasts that by 2040 there will be 78 million arthritis patients, more than one-quarter of the projected total adult population; two-thirds of those with arthritis are expected to be women. Also by 2040, more than 34 million adults in the U.S. will have activity limitations due to arthritis.

“Osteoarthritis will affect most of us during our lifetimes, many to the point where a knee joint or hip joint requires replacement with a costly and difficult surgery after enduring years of disability and pain,” Bay said. “Damage to the cartilage surfaces is associated with failure of the joint, but that damage only becomes obvious very late in the disease process, and cartilage is just the outermost layer in a complex assembly of tissues that lie deep below the surface.”

Those deep tissue layers are where early changes occur as osteoarthritis develops, he said, but their basic biomechanical function and the significance of the changes are not well understood.

“That has greatly hampered knowing the basic disease process and the evaluation of potential therapies to interrupt the long, uncomfortable path to joint replacement,” Bay said.

Bay first demonstrated the tissue strain measurement technique 20 years ago, and it is growing in prominence as imaging has improved. Related work is being conducted for intervertebral discs and other tissues with high rates of degeneration.

“This study for the first time connects measures of tissue mechanics and the arrangement of the tissues themselves at the cellular level,” Bay said. “This is a significant advance as methods for interrupting the osteoarthritis process will likely involve controlling cellular activity. It’s a breakthrough in linking the clinical problem of joint failure with the most basic biological mechanisms involved in maintaining joint health.”

NASA rockets study why tech goes haywire near poles

Animated illustration showing the solar wind streaming around Earth’s magnetosphere. Near the North and South Poles, Earth’s magnetic field forms funnels that allow the solar wind access to the upper atmosphere. Credit: NASA/CILab/Josh Masters

NOVEMBER 25, 2019

by Miles Hatfield, NASA’s Goddard Space Flight Center

Each second, 1.5 million tons of solar material shoot off of the Sun and out into space, traveling at hundreds of miles per second. Known as the solar wind, this incessant stream of plasma, or electrified gas, has pelted Earth for more than 4 billion years. Thanks to our planet’s magnetic field, it’s mostly deflected away. But head far enough north, and you’ll find the exception.

“Most of Earth is shielded from the solar wind,” said Mark Conde, space physicist at the University of Alaska, Fairbanks. “But right near the poles, in the midday sector, our magnetic field becomes a funnel where the solar wind can get all the way down to the atmosphere.”

These funnels, known as the polar cusps, can cause some trouble. The influx of solar wind disturbs the atmosphere, disrupting satellites and radio and GPS signals. Beginning Nov. 25, 2019, three new NASA-supported missions will launch into the northern polar cusp, aiming to improve the technology affected by it.

Shaky Satellites

The three missions are all part of the Grand Challenge Initiative – Cusp, a series of nine sounding rocket missions exploring the polar cusp. Sounding rockets are a type of space vehicle that makes 15-minute flights into space before falling back to Earth. Standing up to 65 feet tall and flying anywhere from 20 to 800 miles high, sounding rockets can be aimed and fired at moving targets with only a few minutes’ notice. This flexibility and precision make them ideal for capturing the strange phenomena inside the cusp.

Two of the three upcoming missions will study the same anomaly: a patch of atmosphere inside the cusp notably denser than its surroundings. It was discovered in 2004, when scientists noticed that part of the atmosphere inside the cusp was about 1.5 times heavier than expected.

Video from CREX’s last flight, showing vapor tracers following high-altitude polar winds. Both CREX-2 and CHI missions will use a similar methodology to track winds thought to support the density enhancement inside the cusp. Credit: NASA/CREX/Mark Conde

“A little extra mass 200 miles up might seem like no big deal,” said Conde, the principal investigator for the Cusp Region Experiment-2, or CREX-2, mission. “But the pressure change associated with this increased mass density, if it occurred at ground level, would cause a continuous hurricane stronger than anything seen in meteorological records.”

This additional mass creates problems for spacecraft flying through it, like the many satellites that follow a polar orbit. Passing through the dense patch can shake up their trajectories, making close encounters with other spacecraft or orbital debris riskier than they would otherwise be.

“A small change of a few hundred meters can make the difference between having to do an evasive maneuver, or not,” Conde said. 

Both CREX-2 and the Cusp Heating Investigation, or CHI, a mission led by Miguel Larsen of Clemson University in South Carolina, will study this heavy patch of atmosphere to better predict its effects on satellites passing through it. “Each mission has its own strengths, but ideally, they’ll be launched together,” Larsen said.

Corrupted Communication

It’s not just spacecraft that behave unpredictably near the cusp – so do the GPS and communications signals they transmit. The culprit, in many cases, is atmospheric turbulence. 

Illustration of the ICI-5 rocket deploying its 12 daughter payloads. Once in space, these additional sensors will help scientists distinguish turbulence from waves, both of which could be the cause of corrupted communication signals. Credit: Andøya Space Center/Trond Abrahamsen

“Turbulence is one of the really hard remaining questions in classical physics,” said Jøran Moen, space physicist at the University of Oslo. “We don’t really know what it is because we have no direct measurements yet.” 

Moen, who is leading the Investigation of Cusp Irregularities-5 or ICI-5 mission, likens turbulence to the swirling eddies that form when rivers rush around rocks. When the atmosphere grows turbulent, GPS and communication signals passing through it can become garbled, sending unreliable signals to the planes and ships that depend on them. 

Moen hopes to make the first measurements to distinguish true turbulence from electric waves that can also disrupt communication signals. Though both processes have similar effects on GPS, figuring out which phenomenon drives these disturbances is critical to predicting them. 

“The motivation is to increase the integrity of the GPS signals,” Moen said. “But we need to know the driver to forecast when and where these disturbances will occur.”

Waiting on Weather

The extreme North provides a pristine locale for examining physics much harder to study elsewhere. The tiny Arctic town on Svalbard, the Norwegian archipelago from which the ICI-5 and CHI rockets will launch, has a small population and strict restrictions on the use of radio or Wi-Fi, creating an ideal laboratory environment for science.

Earth’s magnetosphere, showing the northern and southern polar cusps. Credit: Andøya Space Center/Trond Abrahamsen

“Turbulence occurs in many places, but it’s better to go to this laboratory that is not contaminated by other processes,” Moen said. “The ‘cusp laboratory’—that’s Svalbard.”

Ideally, the CHI rocket would launch from Svalbard at nearly the same time that CREX-2 launches from Andenes, Norway. The ICI-5 rocket, on a second launcher in Svalbard, would fly soon after. But the timing can be tricky: Andenes is 650 miles south of Svalbard, and can experience different weather. “It’s not a requirement, but launching together would certainly multiply the scientific returns of the missions,” Conde said.  

Keeping a constant eye on the weather, waiting for the right moment to launch, is a key part of launching rockets — even part of the draw. 

“It really is an all-consuming thing,” Conde said. “All you do when you’re out there is watch conditions and talk about the rocket and decide what you would do.”

Heating techniques could improve treatment of macular degeneration

A comparison of experimental results between heater off and heater on. Credit: Gharib Research Group

NOVEMBER 24, 2019

by American Physical Society

Age-related macular degeneration is the primary cause of central vision loss and results in the center of the visual field being blurred or fully blacked out. Though the condition is treatable, some treatment methods can be ineffective or cause unwanted side effects.

Jinglin Huang, a graduate student in medical engineering at Caltech, suggests inefficient fluid mixing of the injected medicine and the gel within the eye may be to blame. Huang will be discussing the effects of a thermally induced fluid mixing approach for AMD therapy during a session at the American Physical Society’s Division of Fluid Dynamics 72nd Annual Meeting, which will take place on Nov. 23-26, 2019, at the Washington State Convention Center in Seattle.

The talk, “Thermal Effects on Fluid Mixing in the Eye,” will be presented as part of the session on biological fluid dynamics: microfluidics.

AMD usually starts as “dry” AMD, a disorder in which the macula—the central part of the retina, responsible for sending information about focused light to the brain to create a detailed picture—thins with age. Dry AMD is very common and is not treatable but may eventually evolve into “wet” AMD, which is more likely to result in vision loss. In wet AMD, abnormal blood vessels grow on the retina, leaking fluids under the macula. In the case of wet AMD, injections of medications called anti-vascular endothelial growth factor agents into the eye can help manage the disorder.

Huang said this is because the medication does not mix efficiently with the vitreous, the gel-like fluid that fills the eye. Applying heat to the mixture can solve this problem.

“Because thermally induced mixing in the vitreous chamber can promote the formation of a circulation flow structure, this can potentially serve the drug delivery process,” Huang said. “Since the half life of the drug is limited, this thermally induced mixing approach ensures that more drug of high potency can reach the target tissue.”

To apply the thermally induced mixing technique, no changes in the injection procedure are needed. An additional heating step after the injection is all that is required.

“It can potentially reduce the amount of drug injected into the vitreous,” said Huang. “It is definitely easy to be implemented.”

Huang and her colleagues hope this work will inspire eye doctors to develop better treatment techniques and improve patient experiences.

Shaking head to get rid of water in ears could cause brain damage, physicists find

Various tube sizes and different accelerations were tried to determine what combination was necessary to remove water from a confined area. Credit: Anuj Baskota, Seungho Kim, and Sunghwan Jung

NOVEMBER 24, 2019

by American Physical Society

Trapped water in the ear canal can cause infection and even damage, but it turns out that one of the most common methods people use to get rid of water in their ears can also cause complications. Researchers at Cornell University and Virginia Tech show that shaking the head to free trapped water can cause brain damage in small children.

Anuj Baskota, Seungho Kim, Hosung Kang, and Sunghwan Jung will present their findings at the American Physical Society’s Division of Fluid Dynamics 72nd Annual Meeting. The conference takes place at the Washington State Convention Center in Seattle on Nov. 23-26, 2019.

“Our research mainly focuses on the acceleration required to get the water out of the ear canal,” said Baskota. “The critical acceleration that we obtained experimentally on glass tubes and 3-D printed ear canals was around the range of 10 times the force of gravity for infant ear sizes, which could cause damage to the brain.”

For adults, the acceleration was lower due to the larger diameter of the ear canals. They said the overall volume and position of the water in the canal changes the acceleration needed to remove it.

“From our experiments and theoretical model, we figured out that surface tension of the fluid is one of the crucial factors promoting the water to get stuck in ear canals,” said Baskota.
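One back-of-the-envelope way to see the size dependence, offered here as a dimensional argument rather than the researchers’ detailed model: the water plug stays put until the pressure generated by the head’s acceleration overcomes the capillary pressure holding it in the canal, which suggests a critical acceleration of roughly

    a_c \sim \sigma / (\rho L R)

where σ is the water’s surface tension, ρ its density, L the length of the trapped water column, and R the canal radius. The narrower the canal, the larger the acceleration required, which points in the same direction as the finding that infant-sized canals demand the most violent shaking.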

Luckily, the researchers said there is a solution that does not involve any head shaking.

“Presumably, putting a few drops of a liquid with lower surface tension than water, like alcohol or vinegar, in the ear would reduce the surface tension force allowing the water to flow out,” Baskota said.

Blowing bubbles: Scientist confirms novel way to launch and drive current in fusion plasmas

PPPL physicist Fatima Ebrahimi. Credit: Elle Starkman / PPPL Office of Communications

NOVEMBER 18, 2019

by Raphael Rosen, Princeton Plasma Physics Laboratory

An obstacle to generating fusion reactions inside facilities called tokamaks is that producing the current in plasma that helps create confining magnetic fields happens in pulses. Such pulses, generated by an electromagnet that runs down the center of the tokamak, would make the steady-state creation of fusion energy difficult to achieve. To address the problem, physicists have developed a technique known as transient coaxial helicity injection (CHI) to create a current that is not pulsed.

Now, physicist Fatima Ebrahimi of the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) has used high-resolution computer simulations to investigate the practicality of this technique. The simulations show that CHI could produce the current continuously in larger, more powerful tokamaks than exist today to produce stable fusion plasmas.

“Stability is the most important aspect of any current-drive system in tokamaks,” said Ebrahimi, author of a paper reporting the findings in Physics of Plasmas. “If the plasma is stable, you can have more current and more fusion, and have it all sustained over time.”

Fusion, the power that drives the sun and stars, is the fusing of light elements in the form of plasma—the hot, charged state of matter composed of free electrons and atomic nuclei—that generates massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

The CHI technique replaces an electromagnet called a solenoid that induces current in today’s tokamaks. CHI produces the critical current by spontaneously generating magnetic bubbles, or plasmoids, into the plasma. The new high-resolution simulations confirm that a parade of plasmoids marching through the plasma in future tokamaks could create the current that produces the confining fields. The simulations further showed that the plasmoids would stay intact even when buffeted by three-dimensional instabilities.

In the future, Ebrahimi plans to simulate CHI startup while including even more physics about the plasma, which would provide insights to further optimize the process and to extrapolate toward next-step devices. “That’s a little bit harder,” she says, “but the news right now is that these simulations show that CHI is a reliable current-drive technique that could be used in fusion facilities around the world as they start to incorporate stronger magnetic fields.”

Physicists determine dripline for fluorine and neon isotopes

Researchers have mapped the boundary (green line) that charts the heaviest possible isotopes of fluorine (F) and neon (Ne). Previously this so-called neutron dripline was known only for the first eight elements of the periodic table (pink line). Credit: APS/ Joan Tycko

NOVEMBER 22, 2019

An international team of physicists with the BigRIPS experiment taking place at the RIKEN Radioactive Isotope Beam Factory in Japan has determined the dripline for fluorine and neon isotopes. In their paper published in the journal Physical Review Letters, the researchers describe how they found the driplines and where their research is headed next.

One of the goals of physics research is to discover some of nature’s limits. In this effort, the researchers were looking to determine how many neutrons can be added to a nucleus before they stop sticking to it and simply drip off. That limit is known as the neutron dripline. Prior researchers have already found the dripline for a number of elements, including the eight lightest, but doing so for heavier elements has been challenging. Here, the researchers sought to find the dripline for fluorine, neon and sodium. Prior research has shown that the maximum number of neutrons a nucleus can hold generally grows with its atomic number, but there are exceptions: the dripline isotopes nitrogen-23, carbon-22 and oxygen-24 all hold 16 neutrons, for example.

To find the dripline for the target elements, the researchers fired beams of calcium-48 ions at a beryllium target, fragmenting the projectiles and creating a range of lighter nuclei. They studied the fragments using the BigRIPS large-acceptance fragment separator in a two-stage process: the primary beam is first converted into radioactive ion beams at the production target, and the resulting secondary beam is then separated and tagged downstream.

The researchers report that they were unable to find any evidence of neon-36, neon-35, fluorine-33 or fluorine-32, indicating that fluorine-31 and neon-34 are the dripline isotopes for their respective elements. They also looked for sodium-38 and sodium-39 and found one instance of sodium-39 but no sodium-38, which they suggest likely means the dripline for sodium lies at 28 neutrons or beyond.

The researchers note that in 2022, a new facility will open at Michigan State University with more intense beams, giving researchers a chance to discover the dripline for sodium and then to keep working along the periodic table to resolve those next in line—starting with magnesium.

Chemistry in the turbulent interstellar medium

A multi-wavelength image of a portion of the Perseus molecular cloud, located about 850 light-years away, and its nebulae. Turbulence is pervasive in molecular clouds and plays an important role in producing small density and temperature fluctuations that in turn help determine the abundances of complex molecules in the cloud. A new set of chemical and hydrodynamical models is able to account for the effects of such turbulence and offers an improved explanation for observed chemical abundances. Credit: Agrupació Astronòmica d’Eivissa/Ibiza AAE, Alberto Prats Rodríguez

NOVEMBER 22, 2019

by Harvard-Smithsonian Center for Astrophysics

Over 200 molecules have been discovered in space, some of them, like buckminsterfullerene (C60), very complex carbon-bearing species. Besides being intrinsically interesting, these molecules radiate away heat, helping giant clouds of interstellar material cool and contract to form new stars. Moreover, astronomers use the radiation from these molecules to study the local conditions, for example, as planets form in disks around young stars.

The relative abundance of these molecular species is an important but longstanding puzzle, dependent on many factors from the abundances of the basic elements and the strength of the ultraviolet radiation field to a cloud’s density, temperature, and age. The abundances of the small molecules (those with two or three atoms) are particularly important since they form stepping stones to larger species, and among these the ones that carry a net charge are even more important since they undergo chemical reactions more readily. Current models of the diffuse interstellar medium assume uniform layers of ultraviolet illuminated gas with either a constant density or a density that varies smoothly with depth into the cloud. The problem is that the models’ predictions often disagree with observations.

Decades of observations have also shown, however, that the interstellar medium is not uniform but rather turbulent, with large variations in density and temperature over small distances. CfA astronomer Shmuel Bialy led a team of scientists investigating the abundances of four key molecules—H2, OH+, H2O+, and ArH+—in a supersonic (with motions exceeding the speed of sound) and turbulent medium. These particular molecules are both useful astronomical probes and highly sensitive to the density fluctuations that naturally arise in turbulent media. Building on their previous studies of the behavior of molecular hydrogen (H2) in turbulent media, the scientists performed detailed computer simulations that incorporate a wide range of chemical pathways together with models of supersonic turbulent motions under a variety of excitation scenarios driven by ultraviolet radiation and cosmic rays. Their results show good agreement when compared with extensive observations of these molecules. The range of turbulent conditions is wide, however, and the predictions are correspondingly broad, so while the new models do a better job of explaining the observed ranges, they can be ambiguous, matching a particular situation with several different combinations of parameters. The authors make a case for additional observations and a next generation of models to constrain the conclusions more tightly.