Mathematics at the speed of light

(left) Schematic of edge detection and spatial differentiation; (right) derivative image of AMOLF logo taken at a wavelength of 726 nm. Credit: AMOLF

NOVEMBER 7, 2019


AMOLF researchers and their collaborators from the Advanced Science Research Center (ASRC/CUNY) in New York have created a nanostructured surface capable of performing on-the-fly mathematical operations on an input image. This discovery could boost the speed of existing imaging processing techniques and lower energy usage. The work enables ultrafast object detection and augmented reality applications. The researchers publish their results today in the journal Nano Letters.

Image processing is at the core of several rapidly growing technologies, such as augmented reality, autonomous driving and more general object recognition. But how does a computer find and recognize an object? The first step is to determine where its boundaries are, so edge detection becomes the starting point for image recognition. Edge detection is typically performed either digitally, using integrated electronic circuits that impose fundamental speed limitations and high energy consumption, or in an analog fashion that requires bulky optics.

Nanostructured metasurface

In a completely new approach, AMOLF Ph.D. student Andrea Cordaro and his co-workers created a special “metasurface,” a transparent substrate with a specially designed array of silicon nanobars. When an image is projected onto the metasurface, the transmitted light forms a new image that shows the edges of the original. Effectively, the metasurface performs a mathematical derivative operation on the image, which provides a direct probe of edges in the image. In a first experiment, an image of the AMOLF logo was projected onto the metasurface. At a specially designed wavelength (726 nm), a clear image of the edges is observed. The mathematical transformation results from the fact that each spatial frequency that composes the image has a tailored transmission coefficient through the metasurface. This tailored transmission is the result of a complex interference of light as it propagates through the metasurface.
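The central trick, giving each spatial frequency its own transmission coefficient, can be sketched numerically. The NumPy model below is an illustration, not the authors' metasurface design: it weights each Fourier component of an image by |k|² (a second spatial derivative), which suppresses flat regions and passes only edges.

```python
import numpy as np

def edge_detect(image, order=2):
    """Emulate a derivative metasurface: transmit each spatial frequency k
    of the input image with amplitude proportional to |k|**order, so flat
    (low-frequency) regions are suppressed and edges survive."""
    ny, nx = image.shape
    kx, ky = np.fft.fftfreq(nx), np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    transfer = (k2 / k2.max()) ** (order / 2)  # normalized |k|^order
    return np.abs(np.fft.ifft2(np.fft.fft2(image) * transfer))

# A square "logo": after filtering, only its boundary remains bright.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
edges = edge_detect(img)
assert edges[32, 20] > edges[32, 32]  # edge pixel brighter than interior
```

In the experiment the same frequency-dependent weighting is achieved passively, by interference inside the silicon nanobar array, rather than by a digital Fourier transform.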

(left) Meisje met de parel (J. Vermeer, circa 1665, collection Mauritshuis, The Hague, the Netherlands); (center) chrome nano-dots replica; (right-top) normal image taken under off-resonant conditions; (right-bottom) edge image taken on resonance. Credit: AMOLF

Edge detection

To demonstrate edge detection experimentally, the researchers created a miniature version of the painting Meisje met de parel (Girl with a Pearl Earring, J. Vermeer) by printing tiny chromium dots onto a transparent substrate. If the image is projected onto the metasurface using off-resonant illumination (λ = 750 nm), the original image is clearly recognized. In contrast, if the illumination has the right color (λ = 726 nm), the edges are clearly resolved in the transformed image.

Direct integration of metasurface in a camera with CCD chip. Credit: AMOLF

This new optical computing and imaging technique operates at the speed of light and the mathematical operation itself consumes no energy as it involves only passive optical components. The metasurface can be readily implemented by placing it directly onto a standard CCD or CMOS detector chip, opening new opportunities in hybrid optical and electronic computing that operates at low cost, low power, and small dimensions.

Researchers measure wake of supersonic projectiles

Example instantaneous velocity fields showing only 1/18th of the total velocity vectors. Credit: University of Illinois at Urbana-Champaign

NOVEMBER 7, 2019

by University of Illinois at Urbana-Champaign

Imaging technology has vastly improved over the past 30 years. It’s been about that long since the flow coming off of the base of projectiles, such as ballistic missiles, has been measured. Researchers in the Department of Aerospace Engineering at the University of Illinois at Urbana-Champaign used a modern measurement technique called stereoscopic particle image velocimetry to take high-resolution measurements of the complicated flow field downstream of a blunt-based cylinder moving at supersonic speeds, which is representative of a projectile or an unpowered rocket.

The experiment was done in a Mach 2.5 wind tunnel in the Gas Dynamics Laboratory in The Grainger College of Engineering at Illinois. Researchers mounted a large cylinder model and forced high-pressure air, seeded with smoke particles, across it.

“We shine a laser at the smoke particles to illuminate a desired region, and then we can take a picture of those particles from multiple angles. Imaging the same region from different perspectives simultaneously allows us to measure all three components of velocity,” said doctoral student Branden Kirchner. “The images are taken 600 nanoseconds apart at high resolution.

“This technique allows us to simultaneously measure velocity at a lot of points very close together, instead of measuring one point and then moving on to the next. We now have a map of velocity throughout the flow field as a snapshot in time.”
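The core computation behind particle image velocimetry is simple to state: take two images a known time apart, and find the particle-pattern displacement that best aligns them. The sketch below is an illustrative NumPy version, not the lab's actual processing code; it locates the displacement of one interrogation window as the peak of an FFT-based cross-correlation. Stereoscopic PIV repeats this from multiple camera angles to recover the third velocity component.

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Estimate the pixel displacement between two interrogation windows
    by locating the peak of their cross-correlation (computed via FFT).
    Velocity follows as displacement * pixel_size / time_between_frames."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so displacements can be negative.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic particle image shifted by (3, 5) pixels between exposures.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, 5), axis=(0, 1))
assert piv_displacement(frame_a, frame_b) == (3, 5)
```

Dividing the displacement (converted to meters via the camera calibration) by the 600-nanosecond separation between exposures would then give one velocity vector per interrogation window.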

Kirchner said the 3,000 snapshots imaged by four cameras aimed at the flow provide much higher spatial resolution measurements than any previous studies. He said computationalists who study this flow will benefit from having these new data to compare with their simulations.

Illinois aerospace engineering Professor J. Craig Dutton, co-author on the study, has been working on this complicated flow for decades, using the same wind tunnel while working on his Ph.D. Kirchner said, “I remember the first time we took data using this technique, I showed Professor Dutton and he said ‘in 90 seconds you took more data than we used to take in six months.'”

When the flow separates off of the cylinder, it creates a wake, like what trails from a boat or an airplane. That’s where the important flow features begin, downstream of the cylinder, which represents the body of a rocket or projectile.

“There’s a thin layer just downstream of separation, called the shear layer, where friction between slow-moving and fast-moving air is really dominant,” he said. “This shear layer extracts fluid particles from the region immediately behind the cylinder base, in a process called entrainment. This process causes really low pressures on the base of the cylinder, and it is something that we don’t currently understand well.”

Kirchner said the example he likes to use to explain the physics of what’s happening in the flow is the drafting technique some people use to get better gas mileage on a highway. They drive their car at a certain distance behind a semi-truck to get better fuel economy.

“The pressure right behind the semi-truck is really low, so if you can get the front end of your car in the low-pressure zone and the back end in a high-pressure zone, you actually get thrust out of it, but the aerodynamic drag on the semi-truck is very high because of this low-pressure zone,” Kirchner said.

Having a better understanding of how the flow actually creates this low-pressure region could give other researchers the knowledge they need to come up with a way to change the pressure.

“We’re not changing anything along the cylinder body or the front of the cylinder in this study,” he said. “But if we know what mechanisms could cause a change in the pressure distribution on the base and develop a method to raise that pressure, we can decrease the drag or have better vehicle directional control.”

The study, “Three-Component Turbulence Measurements and Analysis of a Supersonic, Axisymmetric Base Flow,” was written by Branden M. Kirchner, James V. Favale, Gregory S. Elliott, and J. Craig Dutton. It is published in the AIAA Journal.

MIT engineers to test biofilm-resistant surfaces on the International Space Station

NOVEMBER 6, 2019

Researchers from MIT will be collaborating with colleagues at the University of Colorado at Boulder on an experiment scheduled to be sent to the International Space Station (ISS) on Nov. 2. The experiment is looking for ways to address the formation of biofilms on surfaces within the space station. These hard-to-kill communities of bacteria or fungi can cause equipment malfunctions and make astronauts sick. MIT News asked professor of mechanical engineering Kripa Varanasi and doctoral student Samantha McBride to describe the planned experiments and their goals.

Q: For starters, tell us about the problem that this research aims to address.

Varanasi: Biofilms grow on surfaces in space stations, which initially was a surprise to me. Why would they grow in space? But it’s an issue that can jeopardize the key equipment — space suits, water recycling units, radiators, navigation windows, and so on — and can also lead to human illness. It therefore needs to be understood and characterized, especially for long-duration space missions.

In some of the early space station missions like Mir and Skylab, there were astronauts who were getting sick in space. I don’t know if we can say for sure it’s due to these biofilms, but we do know that there have been equipment failures due to biofilm growth, such as clogged valves.

In the past there have been studies that show the biofilms actually grow and accumulate more in space than on Earth, which is kind of surprising. They grow thicker; they have different forms. The goal of this project is to study how biofilms grow in space. Why do they get all these different morphologies? Essentially, it’s the absence of gravity and probably other driving forces, convection for example.

We also want to think about remediation approaches. How could you solve this problem? In our current collaboration with Luis Zea at UC Boulder, we are looking at biofilm growth on engineered substrates in the presence and absence of gravity. We make different surfaces for these biofilms to grow on, applying some of the technologies developed in this lab, including liquid-impregnated surfaces [LIS] and superhydrophobic nanotextured surfaces, and we look at how biofilms grow on them. We found that after a year’s worth of experiments here on Earth, the LIS surfaces did really well: there was no biofilm growth, compared to many other state-of-the-art substrates.

Q: So what will you be looking for in this new experiment to be flown on the ISS?

McBride: There are signs indicating that bacteria might actually increase their virulence in space, and so astronauts are more likely to get sick. This is interesting because usually when you think of bacteria, you’re thinking of something that’s so small that gravity shouldn’t play that big a role.

Professor Cynthia Collins’ group at RPI [Rensselaer Polytechnic Institute] did a previous experiment on the ISS showing that under normal gravity, the bacteria are able to move around and form these mushroom-like shapes, whereas in microgravity, mobile bacteria form a kind of canopy-shaped biofilm. So basically, they’re no longer as constrained, and they can start to grow outward in this unusual morphology.

Our current work is a collaboration with UC Boulder and Luis Zea as the principal investigator. So now instead of just looking at how bacteria respond to microgravity versus gravity on Earth, we’re also looking at how they grow on different engineered substrates. And also, more fundamentally, we can see why bacteria biofilms form the way that they do on Earth, just by taking away that one variable of having the gravity.

There are two different experiments, one with bacterial biofilms and one with fungal biofilms. Zea and his group have been growing these organisms in a test media in the presence of those surfaces, and then characterizing them by the biofilm mass, the thickness, morphology, and then the gene expression. These samples will now be sent to the space station to see how they grow there.

Q: So based on the earlier tests, what are you expecting to see when the samples come back to Earth after two months?

Varanasi: What we’ve found so far is that, interestingly, a great deal of biomass grows on superhydrophobic surfaces, which are usually thought to be antifouling. In contrast, on the liquid-impregnated surfaces, the technology behind LiquiGlide, there was basically no biomass growth. This produced the same result as the negative control, where there were no bacteria.

We also did some control tests to confirm that the oil used on the liquid-impregnated surfaces is not biocidal. So we’re not killing the bacteria; they’re simply not adhering to the substrate, and they’re not growing there.

McBride: For the LIS surfaces, we’ll be looking at whether biofilms form on them or not. I think both results would be really interesting. If biofilms grow on these surfaces in space, but not on the ground, that’s going to tell us something very interesting about the behavior of these organisms. And of course, if biofilms don’t form and the surfaces prevent formation like they do on the ground, then that’s also great, because now we have a mechanism to prevent biofilm formation on some of the equipment in the space station.

So we would be happy with either result, but if the LIS does perform as well as it did on the ground, I think it’s going to have a huge impact on future missions in terms of preventing biofilms and not getting people sick. 

Fundamentally, from a science point of view, we want to understand the growth of these films and understand all of the biomechanical, biophysical, and biochemical mechanisms behind the growth. By adding the surface morphology, texture, and other properties like the liquid-impregnated surfaces, we may see new phenomena in the growth and evolution of these films, and maybe actually come up with a solution to fix the problem.

Varanasi: And then that can lead to designing new equipment or even space suits that have these features. So that’s where I think we would like to learn from this and then propose solutions.

Chemists observe “spooky” quantum tunneling

Extremely large electric fields can prevent umbrella-shaped ammonia molecules from inverting.

MIT chemists have observed, for the first time, inversion of the umbrella-like ammonia molecule by quantum tunneling.
Image: Chelsea Turner, MIT

Anne Trafton | MIT News Office
November 4, 2019

A molecule of ammonia, NH3, typically exists as an umbrella shape, with three hydrogen atoms fanned out in a nonplanar arrangement around a central nitrogen atom. This umbrella structure is very stable and would normally be expected to require a large amount of energy to be inverted.

However, a quantum mechanical phenomenon called tunneling allows ammonia and other molecules to simultaneously inhabit geometric structures that are separated by a prohibitively high energy barrier. A team of chemists that includes Robert Field, the Robert T. Haslam and Bradley Dewey Professor of Chemistry at MIT, has examined this phenomenon by using a very large electric field to suppress the simultaneous occupation of ammonia molecules in the normal and inverted states.

“It’s a beautiful example of the tunneling phenomenon, and it reveals a wonderful strangeness of quantum mechanics,” says Field, who is one of the senior authors of the study.

Heon Kang, a professor of chemistry at Seoul National University, is also a senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Youngwook Park and Hani Kang of Seoul National University are also authors of the paper.

Suppressing inversion

The experiments, performed at Seoul National University, were enabled by the researchers’ new method for applying a very large electric field (up to 200,000,000 volts per meter) to a sample sandwiched between two electrodes. This assembly is only a few hundred nanometers thick, and the electric field applied to it generates forces nearly as strong as the interactions between adjacent molecules.

“We can apply these huge fields, which are almost the same magnitude as the fields that two molecules experience when they approach each other,” Field says. “That means we’re using an external means to operate on an equal playing field with what the molecules can do themselves.”

This allowed the researchers to explore quantum tunneling, a phenomenon often used in undergraduate chemistry courses to demonstrate one of the “spookinesses” of quantum mechanics, Field says.

As an analogy, imagine you are hiking in a valley. To reach the next valley, you need to climb a large mountain, which requires a lot of work. Now, imagine that you could tunnel through the mountain to get to the next valley, with no real effort required. This is what quantum mechanics allows, under certain conditions. In fact, if the two valleys have exactly the same shape, you would be simultaneously located in both valleys.

In the case of ammonia, the first valley is the low-energy, stable umbrella state. For the molecule to reach the other valley — the inverted state, which has exactly the same low energy — classically it would need to ascend into a very high-energy state. However, quantum mechanically, the isolated molecule exists with equal probability in both valleys.

Under quantum mechanics, the possible states of a molecule, such as ammonia, are described in terms of a characteristic energy level pattern. The molecule initially exists in either the normal or inverted structure, but it can tunnel spontaneously to the other structure. The amount of time required for that tunneling to occur is encoded in the energy level pattern. If the barrier between the two structures is high, the tunneling time is long. Under certain circumstances, such as application of a strong electric field, tunneling between the regular and inverted structures can be suppressed.

For ammonia, exposure to a strong electric field lowers the energy of one structure and raises the energy of the other (inverted) structure. As a result, all of the ammonia molecules can be found in the lower energy state. The researchers demonstrated this by creating a layered argon-ammonia-argon structure at 10 kelvins. Argon is an inert gas which is solid at 10 K, but the ammonia molecules can rotate freely in the argon solid. As the electric field is increased, the energy states of the ammonia molecules change in such a way that the probabilities of finding the molecules in the normal and inverted states become increasingly far apart, and tunneling can no longer occur.
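The suppression mechanism can be captured in a minimal two-level model. The values below are illustrative, not the paper's numbers: the two basis states are the normal and inverted umbrella structures, Delta is the tunneling matrix element that mixes them, and the electric field adds a Stark term that shifts the wells by plus or minus mu*E.

```python
import numpy as np

def localization(mu_E, delta=1.0):
    """Ground-state population of the dominant well for a two-level system
    with tunneling coupling `delta` and Stark detuning `mu_E` (same units).
    At mu_E = 0 the ground state is an equal superposition of both wells;
    for mu_E >> delta it localizes in the lower-energy well."""
    H = np.array([[-mu_E, -delta],
                  [-delta, +mu_E]])
    _, vecs = np.linalg.eigh(H)   # eigenvalues sorted ascending
    ground = vecs[:, 0]
    return float(np.max(ground**2))

# No field: the molecule is split 50/50 between the two structures.
assert abs(localization(0.0) - 0.5) < 1e-12
# Strong field: tunneling is effectively suppressed, the state localizes.
assert localization(10.0) > 0.99
```

When the Stark shift dwarfs the tunneling coupling, the ground state collapses into the lower well, which is the field-induced localization the experiment observes; lowering the field restores the 50/50 superposition, consistent with the reversibility described below.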

This effect is completely reversible and nondestructive: As the electric field is decreased, the ammonia molecules return to their normal state of being simultaneously in both wells.

“This manuscript describes a burgeoning frontier in our ability to tame molecules and control their underlying dynamics,” says Patrick Vaccaro, a professor of chemistry at Yale University who was not involved in the study. “The experimental approach set forth in this paper is unique, and it has enormous ramifications for future efforts to interrogate molecular structure and dynamics, with the present application affording fundamental insights into the nature of tunneling-mediated phenomena.”

Lowering the barriers

For many molecules, the barrier to tunneling is so high that tunneling would never happen during the lifespan of the universe, Field says. However, there are molecules other than ammonia that can be induced to tunnel by careful tuning of the applied electric field. His colleagues are now working on exploiting this approach with some of those molecules.

“Ammonia is special because of its high symmetry and the fact that it’s probably the first example anybody would ever discuss from a chemical point of view of tunneling,” Field says. “However, there are many examples where this could be exploited. The electric field, because it’s so large, is capable of acting on the same scale as the actual chemical interactions,” offering a powerful way of externally manipulating molecular dynamics.

The research was funded by the Samsung Science and Technology Foundation and the National Science Foundation.

System provides cooling with no electricity

Passive device relies on a layer of material that blocks incoming sunlight but lets heat radiate away.

David Chandler | MIT News Office
October 30, 2019

In the photo on the left, a disk of the new insulating material blocks and reflects visible light, hiding the MIT logo beneath it. But seen in infrared light, at right, the material is transparent and the logo is visible.
Image courtesy of the researchers

Imagine a device that can sit outside under blazing sunlight on a clear day, and without using any power cool things down by more than 23 degrees Fahrenheit (13 degrees Celsius). It almost sounds like magic, but a new system designed by researchers at MIT and in Chile can do exactly that.

The device, which has no moving parts, works by a process called radiative cooling. It blocks incoming sunlight to keep the device from heating up, and at the same time efficiently radiates infrared light, which is essentially heat, straight out through the sky and into space, cooling the device significantly below the ambient air temperature.

The key to the functioning of this simple, inexpensive system is a special kind of insulation, made of a polyethylene foam called an aerogel. This lightweight material, which looks and feels a bit like marshmallow, blocks and reflects the visible rays of sunlight so that they don’t penetrate through it. But it’s highly transparent to the infrared rays that carry heat, allowing them to pass freely outward.

The new system is described today in a paper in the journal Science Advances, by MIT graduate student Arny Leroy, professor of mechanical engineering and department head Evelyn Wang, and seven others at MIT and at the Pontifical Catholic University of Chile.

Such a system could be used, for example, as a way to keep vegetables and fruit from spoiling, potentially doubling the time the produce could remain fresh, in remote places where reliable power for refrigeration is not available, Leroy explains.

Minimizing heat gain

Radiative cooling is simply the main process that most hot objects use to cool down. They emit midrange infrared radiation, which carries the heat energy from the object straight off into space because air is highly transparent to infrared light.
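A gray-body estimate gives a feel for the power involved. The snippet below is a back-of-the-envelope Stefan-Boltzmann calculation with assumed numbers (the emissivity and effective sky temperature are illustrative), not the paper's detailed radiative model.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_cooling(t_surface, t_sky, emissivity=0.95):
    """Idealized gray-body estimate of the net power per square meter an
    emitter loses by radiating to a colder effective sky temperature."""
    return emissivity * SIGMA * (t_surface**4 - t_sky**4)

# An emitter at 290 K facing an effective sky at 260 K sheds on the
# order of 100 W per square meter.
p = net_radiative_cooling(290.0, 260.0)
assert 100 < p < 160
```

Roughly 100 watts per square meter is available, which is why blocking the much larger solar heat input (around 1,000 W/m² at noon) without also blocking the outgoing infrared is the crux of the design.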

The new device is based on a concept that Wang and others demonstrated a year ago, which also used radiative cooling but employed a physical barrier, a narrow strip of metal, to shade the device from direct sunlight to prevent it from heating up. That device worked, but it provided less than half the amount of cooling power that the new system achieves because of its highly efficient insulating layer.

“The big problem was insulation,” Leroy explains. The biggest input of heat preventing the earlier device from achieving deeper cooling was from the heat of the surrounding air. “How do you keep the surface cold while still allowing it to radiate?” he wondered. The problem is that almost all insulating materials are also very good at blocking infrared light and so would interfere with the radiative cooling effect.

There has been a lot of research on ways to minimize heat loss, says Wang, who is the Gail E. Kendall Professor of Mechanical Engineering. But this is a different issue that has received much less attention: how to minimize heat gain. “It’s a very difficult problem,” she says.

The solution came through the development of a new kind of aerogel. Aerogels are lightweight materials that consist mostly of air and provide very good thermal insulation, with a structure made up of microscopic foam-like formations of some material. The team’s new insight was to make an aerogel out of polyethylene, the material used in many plastic bags. The result is a soft, squishy, white material that’s so lightweight that a given volume weighs just 1/50 as much as water.

The key to its success is that while it blocks more than 90 percent of incoming sunlight, thus protecting the surface below from heating, it is very transparent to infrared light, allowing about 80 percent of the heat rays to pass freely outward. “We were very excited when we saw this material,” Leroy says.

The result is that it can dramatically cool down a plate, made of a material such as metal or ceramic, placed below the insulating layer, which is referred to as an emitter. That plate could then cool a container connected to it, or cool liquid passing through coils in contact with it, to provide cooling for produce or air or water.

Putting the device to the test

To test their predictions of its effectiveness, the team and their Chilean collaborators set up a proof-of-concept device in Chile’s Atacama Desert, parts of which are the driest land on Earth. The region receives virtually no rainfall, yet, lying in the tropics, it gets blazing sunlight that could put the device to a real test. The device achieved a cooling of 13 degrees Celsius under full sunlight at solar noon. Similar tests on MIT’s campus in Cambridge, Massachusetts, achieved just under 10 degrees of cooling.

That’s enough cooling to make a significant difference in preserving produce in remote locations, the researchers say. In addition, it could be used to provide an initial cooling stage for electric refrigeration, thus minimizing the load on those systems to allow them to operate more efficiently with less power.

Theoretically, such a device could achieve a temperature reduction of as much as 50 degrees Celsius, the researchers say, so they are continuing to work on ways of further optimizing the system so that it can be extended to other cooling applications, such as building air conditioning, without the need for any source of power. Radiative cooling has already been integrated with some existing air conditioning systems to improve their efficiency.

Already, though, they have achieved a greater amount of cooling under direct sunlight than any other passive, radiative system other than those that use a vacuum system for insulation — which is very effective but also heavy, expensive, and fragile.

This approach could also be a low-cost add-on to any other kind of cooling system, providing additional cooling to supplement a more conventional system. “Whatever system you have,” Leroy says, “put the aerogel on it, and you’ll get much better performance.”

Peter Bermel, an associate professor of electrical and computer engineering at Purdue University, who was not involved in this work, says, “The main potential benefit of the polyethylene aerogel presented here may be its relative compactness and simplicity, compared to a number of prior experiments.”

He adds, “It might be helpful to quantitatively compare and contrast this method with some alternatives, such as polyethylene films and angle-selective blocking in terms of performance (e.g., temperature change), cost, and weight per unit area. … The practical benefit could be significant if the comparison were performed and the cost/benefit tradeoff significantly favored these aerogels.”

The work was partly supported by an MIT International Science and Technology Initiatives (MISTI) Chile Global Seed Fund grant, and by the U.S. Department of Energy through the Solid State Solar Thermal Energy Conversion Center (S3TEC).

Self-transforming robot blocks jump, spin, flip, and identify each other

Developed at MIT’s Computer Science and Artificial Intelligence Laboratory, robots can self-assemble to form various structures with applications including inspection.

One modular robotic cube snaps into place with the rest of the M-blocks.
Image: Jason Dorfman/MIT CSAIL

Rachel Gordon | MIT CSAIL
October 30, 2019

Swarms of simple, interacting robots have the potential to unlock stealthy abilities for accomplishing complex tasks. Getting these robots to achieve a true hive-like mind of coordination, though, has proved to be a hurdle.

In an effort to change this, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a surprisingly simple scheme: self-assembling robotic cubes that can climb over and around one another, leap through the air, and roll across the ground.

Six years after the project’s first iteration, the robots can now “communicate” with each other using a barcode-like system on each face of the block that allows the modules to identify each other. The autonomous fleet of 16 blocks can now accomplish simple tasks or behaviors, such as forming a line, following arrows, or tracking light.

Inside each modular “M-Block” is a flywheel that spins at 20,000 revolutions per minute; when the flywheel is braked, its angular momentum is transferred to the cube, propelling it. On every edge and every face are permanent magnets that let any two cubes attach to each other.

While the cubes can’t be manipulated quite as easily as, say, those from the video game “Minecraft,” the team envisions strong applications in inspection, and eventually disaster response. Imagine a burning building where a staircase has disappeared. In the future, you can envision simply throwing M-Blocks on the ground, and watching them build out a temporary staircase for climbing up to the roof, or down to the basement to rescue victims.

“M stands for motion, magnet, and magic,” says MIT Professor and CSAIL Director Daniela Rus. “’Motion,’ because the cubes can move by jumping. ‘Magnet,’ because the cubes can connect to other cubes using magnets, and once connected they can move together and connect to assemble structures. ‘Magic,’ because we don’t see any moving parts, and the cube appears to be driven by magic.”

While the mechanism is quite intricate on the inside, the exterior is just the opposite, which enables more robust connections. Beyond inspection and rescue, the researchers also imagine using the blocks for things like gaming, manufacturing, and health care.

“The unique thing about our approach is that it’s inexpensive, robust, and potentially easier to scale to a million modules,” says CSAIL PhD student John Romanishin, lead author on a new paper about the system. “M-Blocks can move in a general way. Other robotic systems have much more complicated movement mechanisms that require many steps, but our system is more scalable.”

Romanishin wrote the paper alongside Rus and undergraduate student John Mamish of the University of Michigan. They will present the paper on M-blocks at IEEE’s International Conference on Intelligent Robots and Systems in November in Macau.

Previous modular robot systems typically tackle movement using unit modules with small robotic arms known as external actuators. These systems require a lot of coordination for even the simplest movements, with multiple commands for one jump or hop.

On the communication side, other attempts have used infrared light or radio waves, which can quickly get clunky: when many robots in a small volume all try to send each other signals, the transmissions interfere with one another, opening up a messy channel of conflict and confusion.

Back in 2013, the team built the first version of the M-Blocks mechanism. They created six-faced cubes that move about using so-called inertial forces: instead of relying on moving arms to connect and reposition the structures, each block has a mass inside that it “throws” against the side of the module, causing the block to pivot and move.

Each module can move in four cardinal directions when placed on any one of the six faces, which results in 24 different movement directions. Without little arms and appendages sticking out of the blocks, it’s a lot easier for them to stay free of damage and avoid collisions.
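The movement count above follows from a quick enumeration: six resting faces times four pivot directions. The face and direction names in this sketch are illustrative labels, not taken from the M-Blocks hardware:

```python
# Each cube can rest on any of its 6 faces, and from each resting face
# the flywheel can pivot it toward any of 4 cardinal directions,
# giving 6 * 4 = 24 distinct (face, direction) moves.
faces = ["top", "bottom", "left", "right", "front", "back"]
directions = ["north", "south", "east", "west"]

moves = [(face, direction) for face in faces for direction in directions]
print(len(moves))  # 24
```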

Although the team had tackled the physical hurdles, a critical challenge persisted: how could these cubes communicate and reliably identify the configuration of neighboring modules?

Romanishin came up with algorithms designed to help the robots accomplish simple tasks, or “behaviors,” which led the team to a barcode-like system in which the robots can sense the identity and face of the other blocks they’re connected to.

In one experiment, the team had the modules form a line from a random structure, watching whether each module could determine the specific way it was connected to its neighbors. If a module was not yet part of the line, it would pick a direction and roll that way until it reached the end of the line.

Essentially, the blocks used the configuration of how they were connected to each other to guide the motion they chose, and 90 percent of the M-Blocks succeeded in getting into a line.
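The line-forming experiment can be caricatured in a few lines of code. This is a deliberately abstract sketch, not the authors’ algorithm: it ignores the barcode sensing and rolling mechanics, and simply models each move as the top block of a stack detaching and re-attaching at the end of the growing line.

```python
def form_line(stack_heights):
    """Flatten a random structure (given as per-column stack heights)
    into a single line of height 1, returning the number of
    single-block moves required."""
    heights = list(stack_heights)
    moves = 0
    # While any stack is taller than one block, pop a block off the
    # tallest stack and "roll" it to a new position at the line's end.
    while max(heights) > 1:
        tallest = heights.index(max(heights))
        heights[tallest] -= 1
        heights.append(1)
        moves += 1
    return moves

print(form_line([3, 1, 2]))  # a 6-block structure flattens in 3 moves
```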

The team notes that building the electronics was very challenging, especially when trying to fit intricate hardware inside such a small package. To make M-Block swarms a larger reality, the team wants exactly that: more robots forming bigger swarms, with stronger capabilities for assembling various structures.

The project was supported, in part, by the National Science Foundation and Amazon Robotics.

Double-sided tape for tissues could replace surgical sutures

New adhesive that binds wet surfaces within seconds could be used to heal wounds or implant medical devices.

Anne Trafton | MIT News Office
October 30, 2019

MIT engineers have devised a double-sided adhesive that can be used to seal tissues together.
Image: Felice Frankel, Christine Daniloff, MIT

Inspired by a sticky substance that spiders use to catch their prey, MIT engineers have designed a double-sided tape that can rapidly seal tissues together.

In tests in rats and pig tissues, the researchers showed that their new tape can tightly bind tissues such as the lungs and intestines within just five seconds. They hope that this tape could eventually be used in place of surgical sutures, which don’t work well in all tissues and can cause complications in some patients.

“There are over 230 million major surgeries all around the world per year, and many of them require sutures to close the wound, which can actually cause stress on the tissues and can cause infections, pain, and scars. We are proposing a fundamentally different approach to sealing tissue,” says Xuanhe Zhao, an associate professor of mechanical engineering and of civil and environmental engineering at MIT and the senior author of the study.

The double-sided tape can also be used to attach implantable medical devices to tissues, including the heart, the researchers showed. In addition, it works much faster than tissue glues, which usually take several minutes to bind tightly and can drip onto other parts of the body.

Graduate students Hyunwoo Yuk and Claudia Varela are the lead authors of the study, which appears today in Nature. Other authors are MIT graduate student Xinyu Mao, MIT assistant professor of mechanical engineering Ellen Roche, Mayo Clinic critical care physician Christoph Nabzdyk, and Brigham and Women’s Hospital pathologist Robert Padera.

A tight seal

Forming a tight seal between tissues is considered to be very difficult because water on the surface of the tissues interferes with adhesion. Existing tissue glues diffuse adhesive molecules through the water between two tissue surfaces to bind them together, but this process can take several minutes or even longer.

The MIT team wanted to come up with something that would work much faster. Zhao’s group had previously developed other novel adhesives, including a hydrogel superglue that provides tougher adhesion than the sticky materials that occur in nature, such as those that mussels and barnacles use to cling to ships and rocks.

To create a double-sided tape that could rapidly join two wet surfaces together, the team drew inspiration from the natural world — specifically, the sticky material that spiders use to capture their prey in wet conditions. This spider glue includes charged polysaccharides that can absorb water from the surface of an insect almost instantaneously, clearing off a small dry patch that the glue can adhere to.

To mimic this with an engineered adhesive, the researchers designed a material that first absorbs water from wet tissues and then rapidly binds two tissues together. For water absorption, they used polyacrylic acid, a very absorbent material that is used in diapers. As soon as the tape is applied, it sucks up water, allowing the polyacrylic acid to quickly form weak hydrogen bonds with both tissues.

These hydrogen bonds and other weak interactions temporarily hold the tape and tissues in place while chemical groups called NHS esters, which the researchers embedded in the polyacrylic acid, form much stronger bonds, called covalent bonds, with proteins in the tissue. This takes about five seconds.

To make their tape tough enough to last inside the body, the researchers incorporated either gelatin or chitosan (a hard polysaccharide found in insect shells). These polymers allow the adhesive to hold its shape for long periods of time. Depending on the application that the tape is being used for, the researchers can control how fast it breaks down inside the body by varying the ingredients that go into it. Gelatin tends to break down within a few days or weeks in the human body, while chitosan can last longer (a month or even up to a year).

“Combining two innovative concepts, the research team succeeded in adhering quickly and effectively to the wet and soft surface of a tissue, and in maintaining good adhesion and mechanical properties for several days without causing too much inflammatory response,” says Costantino Creton, a research director at ESPCI Paris, who was not involved in the research.

Rapid healing

This type of adhesive could have a major impact on surgeons’ ability to seal incisions and heal wounds, Yuk says. To explore possible applications for the new double-sided tape, the researchers tested it in a few different types of pig tissue, including skin, small intestine, stomach, and liver. They also performed tests in pig lungs and trachea, showing that they could rapidly repair damage to those organs.

“It’s very challenging to suture soft or fragile tissues such as the lung and trachea, but with our double-sided tape, within five seconds we can easily seal them,” Yuk says.

The tape also worked well to seal damage to the gastrointestinal tract, which could be very useful in preventing leakage that sometimes occurs following surgery. This leakage can cause sepsis and other potentially fatal complications.

“I anticipate tremendous translational potential of this elegant approach into various clinical practices, as well as basic engineering applications, in particular in situations where surgical operations, such as suturing, are not straightforward,” says Yu Shrike Zhang, an assistant professor of medicine at Harvard Medical School, who was not involved in the research.

Implanting medical devices within the body is another application the MIT team is exploring. Working with Roche’s lab, the researchers showed that the tape could be used to firmly attach a small polyurethane patch to the hearts of living rats, organs about the size of a thumbnail. Normally this kind of procedure is extremely complicated and requires an experienced surgeon, but the research team was able to simply press the patch on with their tape for a few seconds, and it stayed in place for several days.

In addition to the polyurethane heart patch, the researchers found that the tape could successfully attach materials such as silicone rubber, titanium, and hydrogels to tissues.

“This provides a more elegant, more straightforward, and more universally applicable way of introducing an implantable monitor or drug delivery device, because we can adhere to many different sites without causing damage or secondary complications from puncturing tissue to affix the devices,” Yuk says.

The researchers are now working with doctors to identify additional applications for this kind of adhesive and to perform more tests in animal models.

The research was funded by the National Science Foundation and the Office of Naval Research.

Implantable cancer traps could provide earlier diagnosis, help monitor treatment

OCTOBER 29, 2019

by University of Michigan

Credit: University of Michigan

Invasive procedures to biopsy tissue from cancer-tainted organs could be replaced by simply taking samples from a tiny “decoy” implanted just beneath the skin, University of Michigan researchers have demonstrated in mice.

These devices have a knack for attracting cancer cells traveling through the body. In fact, they can even pick up signs that cancer is preparing to spread, before cancer cells arrive.

“Biopsying an organ like the lung is a risky procedure that’s done only sparingly,” said Lonnie Shea, the William and Valerie Hall Chair of biomedical engineering at U-M. “We place these scaffolds right under the skin, so they’re readily accessible.”

The ease of access would also allow doctors to monitor the effectiveness of cancer treatments closer to real time.

The U-M team’s most recent work appears in Cancer Research, a publication of the American Association for Cancer Research.

Biopsies of the scaffold allowed researchers to analyze 635 genes present in the captured cancer cells. From these genes, the team identified ten that could predict whether a mouse was healthy, had a cancer that had not yet begun to spread, or had a cancer that had already begun to spread, all without the need for an invasive biopsy of an organ.

The gene expression obtained at the scaffold had distinct patterns relative to cells from the blood, which are obtained through a technique known as liquid biopsy. These differences highlight that the tissue in these traps provides unique information that correlates with disease progression.

The researchers have demonstrated that the synthetic scaffolds work with multiple types of cancers in mice, including pancreatic cancer. They work by luring immune cells, which, in turn, attract cancer cells.

“When we started off, the idea was that we would biopsy the scaffold and look for tumor cells that had followed the immune cells there,” Shea said. “But we realized that by analyzing the immune cells that gather first, we can detect the cancer before it’s spreading.”

In treating cancer, early detection is key.

“Currently, early signs of metastasis can be difficult to detect,” said Jacqueline Jeruss, an associate professor of surgery and biomedical engineering and a co-author of the study. “Imaging may be done once a patient experiences symptoms, but that implies the number of cancer cells may already be substantial. Improved detection methods are needed to identify metastasis at a point when targeted treatments can have a significant beneficial impact on slowing disease progression.”

The immune cells allowed researchers to identify whether treatments were effective in the mice and which subjects were sensitive or resistant to treatment.

The decoy’s ability to draw immune and cancer cells can also bolster the treatment itself. In previous research, the devices demonstrated an ability to slow the growth of metastatic breast cancer tumors in mice, by reducing the number of cancer cells that can reach those tumors.

In the future, Shea envisions that the scaffolds could be outfitted with sensors and Bluetooth technology that could deliver information in real time without the need for a biopsy.

New findings detail a method for investigating the inner workings of stars in a rare phase

Helium core flash plays an integral role in our understanding of the life cycles of low-mass stars. 

Credit: Jørgen Christensen-Dalsgaard

OCTOBER 29, 2019

by Harrison Tasoff, University of California – Santa Barbara

In 5 billion years or so, when the sun has used up the hydrogen in its core, it will inflate and turn into a red giant star. This phase of its life—and that of other stars up to twice its mass—is relatively short compared with the more than 10 billion-year life of the sun. The red giant will shine 1000 times brighter than the sun, and suddenly the helium deep in its core will begin fusing to carbon in a process called the “helium core flash.” After this, the star settles into 100 million years of quiet helium fusion.

Astrophysicists have predicted these flashes in theory and in models for 50 years, but none has ever been observed. However, a new study in Nature Astronomy suggests this may soon change.

“The effects of helium core flash are clearly predicted by the models, but we have found no observations that directly reflect them,” said coauthor Jørgen Christensen-Dalsgaard, a Simons Distinguished Visiting Scholar at UC Santa Barbara’s Kavli Institute for Theoretical Physics (KITP) and professor at Aarhus University in Denmark.

A star like the sun is powered by fusing hydrogen into helium at temperatures around 15 million K. Helium, however, requires a much higher temperature than hydrogen, around 100 million K, to begin fusing into carbon, so it simply accumulates in the core while a shell of hydrogen continues to burn around it. All the while, the star expands to a size comparable to the Earth’s orbit. Eventually, the star’s core reaches the perfect conditions, triggering a violent ignition of the helium: the helium core flash. The core undergoes several flashes over the next 2 million years, and then settles into a more static state where it proceeds to burn all of the helium in the core to carbon and oxygen over the course of around 100 million years.

Helium core flash plays an integral role in our understanding of the life cycles of low-mass stars. Unfortunately, gathering data from the cores of distant stars is incredibly difficult, so scientists have been unable to observe this phenomenon.

The power of modern space-based observatories like Kepler, CoRoT and now NASA’s Transiting Exoplanet Survey Satellite (TESS) promises to change this. “The availability of very sensitive measurements from space has made it possible to observe subtle oscillations in the brightness of a very large number of stars,” Christensen-Dalsgaard explained.

The helium core flash produces a series of different waves that propagate through the star. This causes the star to vibrate like a bell, which manifests as a weak variation in its overall brightness. Observations of stellar pulsations have already taught astronomers about the processes inside stars in much the way that geologists learn about the Earth’s interior by studying earthquakes. This technique, known as asteroseismology, has grown to become a flourishing field in astrophysics.

The core flash happens quite suddenly and, like an earthquake, begins with a very energetic event followed by a series of successively weaker events over the next 2 million years, a relatively short period in the life of most stars. As shown in a 2012 paper led by KITP Director Lars Bildsten and KITP Senior Fellow Bill Paxton, the pulsation frequencies of these stars are very sensitive to the conditions in the core. As a result, asteroseismology could provide scientists with information that tests our understanding of these processes.

“We were excited at the time that these new space capabilities might allow us to confirm this long-studied piece of stellar evolution. However, we did not consider the even more exciting possibility that these authors explored of using the vigorously convecting star to actually get the star ringing,” said Bildsten.

The main purpose of the new study was to determine whether these flashing regions could excite pulsations large enough for us to see. And after months of analysis and simulations, the researchers found that many should be relatively easy to observe.

“I was certainly surprised that the mechanism actually worked so well,” said Christensen-Dalsgaard.

The new and promising angle detailed in the paper is that the astronomers have been studying the processes in a very special—and up to now not very well understood—type of star designated a subdwarf B star. These are former red giants that, for unknown reasons, have lost most of their outer layer of hydrogen. Subdwarf B stars provide scientists a unique opportunity to more directly probe the hot core of a star. What’s more, the remaining thin layer of hydrogen is not thick enough to dampen the oscillations from the repeated helium core flashes, giving the researchers a chance to potentially observe them directly.

This study provides the first observational information about the complex processes predicted by stellar models at the ignition of helium fusion. “This work took strong advantage of a series of fluid dynamical calculations led by former KITP Graduate Fellow Daniel Lecoanet,” Bildsten noted. “If this all works out, these stars may provide a new testing ground for this fundamental puzzle in astrophysics.”

Christensen-Dalsgaard said he is eager to apply these findings to actual data. And in fact, helium core flashes may already have been observed: several of the stars observed by CoRoT and Kepler show unexplained oscillations that resemble the predictions for helium core flashes. TESS will prove crucial in this future research, he explained, since it will observe a whole swath of stars, including several in which these pulsations may be detectable. This will provide further strong tests of the models and insight into what the future holds for our own sun.

First magnet installed for the ALPS II experiment at DESY

There are several theories that try to explain the nature of dark matter and the particles it may consist of. 


OCTOBER 29, 2019

by Dr. Thomas Zoufal, Deutsches Elektronen-Synchrotron

The international ALPS II (“Any light particle search”) collaboration installed the first of 24 superconducting magnets today, marking the start of the installation of a unique particle physics experiment to look for dark matter. Located at the German research centre DESY in Hamburg, it is set to start taking data in 2021 by looking for dark matter particles that literally make light shine through a wall, thus providing clues to one of the biggest questions in physics today: what is the nature of dark matter?

“It is very exciting to see the project that many of us have been working on for so many years finally taking shape in the tunnel,” ALPS II spokesperson Axel Lindner from DESY said. “When installation and commissioning proceed as planned, we will be able to start the search in the first half of 2021.”

Dark matter is one of the greatest mysteries in physics. Observations and calculations of the motion of stars in galaxies, for example, show that there must be more matter in the Universe than the particles known today can account for. In fact, dark matter must make up 85% of all the matter in the Universe, yet we currently don’t know what it is. We do know that it barely interacts with regular matter and neither emits nor absorbs light, which is why it is called “dark.”

There are several theories that try to explain the nature of dark matter and the particles it may consist of. One of these theories states that dark matter consists of very light-weight particles with very specific properties. One example is the axion which was originally postulated to explain aspects of the strong interaction, one of the fundamental forces of nature. There are also puzzling astrophysical observations such as discrepancies in the evolution of stellar systems, which might also be explained by the existence of axions or axion-like particles.

This is where ALPS II comes in. It is designed to create and detect those axions. A strong magnetic field can make axions switch to photons and vice versa. “This bizarre property was already exploited in the initial ALPS I experiment, which we ran from 2007 to 2010. Despite its limited size, it achieved the world’s best sensitivities for these kinds of experiments,” said Benno Willke, leader of the ALPS laser development group at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) and the Institute for Gravitational Physics at Leibniz Universität Hannover.

ALPS II is being set up in a straight tunnel section of DESY’s former particle physics accelerator HERA. Twenty-four superconducting accelerator magnets, twelve on either side of a wall, house two 120-metre-long optical cavities. A powerful and intricate laser system produces light that is amplified by the cavity inside the magnetic field; a tiny fraction of that light will convert into dark matter particles. A light-blocking barrier, the wall, separates the second compartment of ALPS II, but it is no hurdle for axions and similar particles, which pass through it easily. In the second cavity, the dark matter particles would convert back into light, and this tiny signal will be picked up by dedicated detection systems.
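The scale of the challenge can be illustrated with the standard single-pass photon-axion conversion probability, P = (g·B·L/2)² in natural units (massless-axion limit). This is a back-of-the-envelope sketch: the coupling g and field strength B below are placeholder values assumed for illustration, not ALPS II specifications; only the 120 m cavity length comes from the article.

```python
# Convert SI quantities to natural units (hbar = c = 1).
TESLA_TO_EV2 = 195.35        # 1 tesla expressed in eV^2
METER_TO_INV_EV = 5.0677e6   # 1 metre expressed in eV^-1

g = 1e-19                    # axion-photon coupling in eV^-1 (~1e-10 GeV^-1, assumed)
B = 5.3 * TESLA_TO_EV2       # magnetic field, 5.3 T assumed
L = 120 * METER_TO_INV_EV    # cavity length, 120 m (from the article)

# Single-pass conversion probability in the massless-axion limit.
p = (g * B * L / 2) ** 2
print(f"single-pass conversion probability ~ {p:.1e}")
```

A probability this small (of order 10⁻¹⁵ for these inputs, and the same again for the reverse conversion behind the wall) is why ALPS II needs resonant optical cavities on both sides of the barrier and extremely sensitive detection systems.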

The more than 1,000-fold improvement in sensitivity of ALPS II is made possible by the increased length of the magnet strings but also by significant advances in optical technologies. “These advances emerged from the work on gravitational wave interferometers such as GEO600 and LIGO, and nicely show how technological advances in one area enable progress in others,” said Co-Spokesperson Guido Mueller from the University of Florida in Gainesville.

ALPS II is also an example of recycling in research: not only does it reuse a stretch of tunnel that once housed DESY’s flagship particle accelerator, it also reuses the very magnets that drove protons around the ring until 2007. These magnets had to be re-engineered to fit the ALPS purposes: the slight bend needed in an accelerator ring had to be straightened to allow photons to propagate through them.

The ALPS II collaboration consists of some 25 scientists from these institutes: DESY, the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) and the Institute for Gravitational Physics at Leibniz Universität Hannover, the Johannes Gutenberg-Universität Mainz, the University of Florida in Gainesville, and Cardiff University. Beyond that, the collaboration is supported by partners worldwide, such as the National Metrology Institute of Germany (PTB) and the National Institute of Standards and Technology in the U.S. The experiment is mainly funded by DESY, the Heising-Simons Foundation, the US National Science Foundation, the German Volkswagen Stiftung and the German Research Foundation (DFG).

At DESY, ALPS II might be just the first experiment within a new strategic approach to tackle dark matter. “International collaborations are preparing the IAXO experiment to search for axions emitted by the Sun as well as the MADMAX detector, which will look directly for axions as constituents of the local dark matter surrounding us”, explained Joachim Mnich, DESY’s director for particle physics.