New synthesis method yields degradable polymers

Materials could be useful for delivering drugs or imaging agents in the body; may offer alternative to some industrial plastics.

A new type of polymer designed by MIT chemists incorporates a special monomer (yellow) that helps the polymers to break down more easily under certain conditions.
Image: Demin Liu

Anne Trafton | MIT News Office
October 28, 2019

MIT chemists have devised a way to synthesize polymers that can break down more readily in the body and in the environment.

A chemical reaction called ring-opening metathesis polymerization, or ROMP, is handy for building novel polymers for various uses such as nanofabrication, high-performance resins, and delivering drugs or imaging agents. However, one downside to this synthesis method is that the resulting polymers do not readily break down in natural environments, such as inside the body.

The MIT research team has come up with a way to make those polymers more degradable by adding a novel type of building block to the backbone of the polymer. This new building block, or monomer, forms chemical bonds that can be broken down by weak acids, bases, and ions such as fluoride.

“We believe that this is the first general way to produce ROMP polymers with facile degradability under biologically relevant conditions,” says Jeremiah Johnson, an associate professor of chemistry at MIT and the senior author of the study. “The nice part is that it works using the standard ROMP workflow; you just need to sprinkle in the new monomer, making it very convenient.”

This building block could be incorporated into polymers for a wide variety of uses, including not only medical applications but also synthesis of industrial polymers that would break down more rapidly after use, the researchers say.

The lead author of the paper, which appears in Nature Chemistry today, is MIT postdoc Peyton Shieh. Postdoc Hung VanThanh Nguyen is also an author of the study.

Powerful polymerization

The most common building blocks of ROMP-generated polymers are molecules called norbornenes, which contain a ring structure that can be easily opened up and strung together to form polymers. Molecules such as drugs or imaging agents can be added to norbornenes before the polymerization occurs.

Johnson’s lab has used this synthesis approach to create polymers with many different structures, including linear polymers, bottlebrush polymers, and star-shaped polymers. These novel materials could be used for delivering many cancer drugs at once, or carrying imaging agents for magnetic resonance imaging (MRI) and other types of imaging.

“It’s a very robust and powerful polymerization reaction,” Johnson says. “But one of the big downsides is that the backbone of the polymers produced entirely consists of carbon-carbon bonds, and as a result, the polymers are not readily degradable. That’s always been something we’ve kept in the backs of our minds when thinking about making polymers for the biomaterials space.”

To circumvent that issue, Johnson’s lab has focused on developing small polymers, on the order of about 10 nanometers in diameter, which could be cleared from the body more easily than larger particles. Other chemists have tried to make the polymers degradable by using building blocks other than norbornenes, but these building blocks don’t polymerize as efficiently. It’s also more difficult to attach drugs or other molecules to them, and they often require harsh conditions to degrade.

“We prefer to continue to use norbornene as the molecule that enables us to polymerize these complex monomers,” Johnson says. “The dream has been to identify another type of monomer and add it as a co-monomer into a polymerization that already uses norbornene.”

The researchers came upon a possible solution through work Shieh was doing on another project. He was looking for new ways to trigger drug release from polymers when he synthesized a ring-containing molecule that is similar to norbornene but contains an oxygen-silicon-oxygen bond. The researchers discovered that this kind of ring, called a silyl ether, can also be opened up and polymerized by the ROMP reaction, yielding polymers whose oxygen-silicon-oxygen bonds degrade more easily. So instead of using it for drug release, they decided to try incorporating it into the polymer backbone to make the polymer degradable.

They found that by simply adding the silyl-ether monomer in a 1:1 ratio with norbornene monomers, they could create polymer structures similar to those they had previously made, with the new monomer incorporated fairly uniformly throughout the backbone. But now, when exposed to a slightly acidic pH of around 6.5, the polymer chain begins to break apart.

“It’s quite simple,” Johnson says. “It’s a monomer we can add to widely used polymers to make them degradable. But as simple as that is, examples of such an approach are surprisingly rare.”

Faster breakdown

In tests in mice, the researchers found that during the first week or two, the degradable polymers showed the same distribution through the body as the original polymers, but they began to break down soon after. After six weeks, concentrations of the new polymers in the body were three to 10 times lower than those of the original polymers, depending on the exact chemical composition of the silyl-ether monomers used.
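As a rough back-of-envelope illustration (this is not the authors' analysis; simple first-order clearance and a four-week degradation window are my own assumptions), the reported fold-reductions imply degradation half-lives on the order of one to a few weeks:

```python
import math

# Rough first-order (exponential) decay sketch, NOT the authors' model:
# if breakdown begins ~2 weeks post-injection and concentration falls by a
# factor f over the following 4 weeks, the implied half-life is t*ln2/ln(f).
def implied_half_life_weeks(fold_reduction, interval_weeks=4.0):
    return interval_weeks * math.log(2) / math.log(fold_reduction)

t_half_slow = implied_half_life_weeks(3.0)   # 3-fold drop  -> ~2.5 weeks
t_half_fast = implied_half_life_weeks(10.0)  # 10-fold drop -> ~1.2 weeks
```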

The findings suggest that adding this monomer to polymers for drug delivery or imaging could help them get cleared from the body more quickly.

“We are excited about the prospect of using this technology to precisely tune the breakdown of ROMP-based polymers in biological tissues, which we believe could be leveraged to control biodistribution, drug release kinetics, and many other features,” Johnson says.

The researchers have also started working on adding the new monomers to industrial resins, such as plastics or adhesives. They believe it would be economically feasible to incorporate these monomers into the manufacturing processes of industrial polymers, to make them more degradable, and they are working with Millipore-Sigma to commercialize this family of monomers and make them available for research.

The research was funded by the National Institutes of Health, the American Cancer Society, and the National Science Foundation.

Mathematics reveals new insights into Marangoni flows

The Marangoni effect is a popular physics experiment. It is produced when an interface between water and air is heated in just one spot.

Credit: CC0 Public Domain

OCTOBER 28, 2019

by Springer

The Marangoni effect is a popular physics experiment. It is produced when an interface between water and air is heated in just one spot. As this heat spreads outward, a temperature gradient forms on the surface, setting the fluid in motion by convection. When insoluble impurities are introduced to this surface, they are immediately swept to the side of the water’s container. This, in turn, creates a gradient in surface tension, which causes the interface to behave elastically.
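For readers who want the driving term explicitly, the standard textbook form of the Marangoni stress (a general relation, not taken from Bickel's paper) ties the tangential stress at the interface to the surface gradient of the surface tension:

```latex
% Tangential (Marangoni) stress exerted on the fluid at the interface:
\tau_s \;=\; \nabla_s \sigma \;=\; \frac{\partial \sigma}{\partial T}\,\nabla_s T
```

Because the surface tension of water decreases with temperature (dσ/dT < 0), a hot spot drives surface flow outward toward the colder periphery, which is the flow that sweeps impurities aside.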

The structures of these flows have been well understood theoretically for over a century, but still don’t completely line up with experimental observations of the effect. In a new study published in EPJ E, Thomas Bickel at the University of Bordeaux in France has discovered new mathematical laws governing the properties of Marangoni flows.

The Marangoni effect can have a variety of applications, for example in welding and computer manufacturing. Therefore, Bickel’s findings could provide important new information for researchers and engineers working with fluid-based systems. Bickel found that in deeper water, the region in which impurities are swept away decreases in size with an increase in the surface’s elasticity. Outside of these regions, Marangoni flows are cancelled out by counterflows originating from the impurities, meaning the fluid becomes static. The region can even disappear if the surface’s elasticity is too great, in which case the concentration of impurities on the interface becomes constant. Furthermore, the boundary of the region becomes more blurred in shallow water.

Bickel uncovered these mechanisms through mathematical derivations, starting from the known properties of Marangoni flows. He then incorporated aspects including water depth and impurity concentration, and he calculated their effect on the overall system. Bickel’s research shows that even in old, well-studied physics experiments, mathematical analysis can still reveal new processes.

Space: a major legal void

The internet of space is here.

Experts debated the subject at length this week in Washington at the 70th International Astronautical Conference.

The Earth, photographed by astronaut Nick Hague from the International Space Station on October 2, 2019

OCTOBER 27, 2019

by Ivan Couronne

SpaceX founder Elon Musk tweeted this week using a connection provided by the first satellites in his high-speed Starlink constellation, which one day could include… 42,000 mini-satellites.

The idea of putting tens of thousands more satellites into orbit, as compared with the roughly 2,000 that are currently active around the Earth, highlights the fact that space is a legal twilight zone.

Experts debated the subject at length this week in Washington at the 70th International Astronautical Conference.

The treaties that have governed space up until now were written at a time when only a few nations were sending civilian and military satellites into orbit.

Today, any university could decide to launch a mini-satellite.

That could yield a legal morass.

Roughly 20,000 objects in space are now big enough—the size of a fist or about four inches (10 centimeters)—to be catalogued.

That list includes everything from upper stages and out-of-service satellites to space junk and the relatively small number of active satellites.

A disused satellite at an altitude of 1,000 kilometers (620 miles) will eventually fall back into the atmosphere, but only after about 1,000 years, according to French expert Christophe Bonnal.

Bonnal, who chairs the International Astronautical Federation’s committee on space debris, explains that during those years, the object—traveling 30,000 kilometers an hour—could end up colliding with a live satellite and killing it.
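As a quick sanity check on the quoted figure (my own estimate, not Bonnal's), the circular orbital speed at 1,000 kilometers altitude follows from v = sqrt(mu/r):

```python
import math

# Circular orbital speed at 1,000 km altitude (back-of-envelope check).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

r = R_EARTH + 1.0e6                 # orbital radius, m
v_ms = math.sqrt(MU_EARTH / r)      # ~7.35 km/s
v_kmh = v_ms * 3.6                  # ~26,500 km/h

# Two objects in crossing orbits can close at well above this speed
# (up to ~2x head-on), consistent with the ~30,000 km/h figure quoted.
```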

For now, such collisions are rare—as an example, Bonnal says there are only 15 objects bigger than a fist above France at any given time.

“Space is infinitely empty—this is not like maritime pollution,” he told AFP.

Jean-Yves Le Gall, the head of France’s space agency and the outgoing IAF president, also downplayed the issue.

“There are practically no examples of satellite problems caused by space debris,” Le Gall told AFP.

“But this is starting to be a more urgent concern because of the (satellite) constellation projects. It’s clear that even if we only had to think about SpaceX’s constellation, the issue would need to be addressed.”

For Le Gall, Musk’s company “isn’t doing anything against the rules. The problem is that there are no rules. There are air traffic controllers for planes. We will end up with something similar.”

Thousands of pieces of junk

Jan Woerner, the director general of the European Space Agency, admits: “The best situation would be to have international law… but if you ask for that, it will take decades.”

So far, only France has stipulated in its own laws that any satellite in low orbit must be removed from orbit within 25 years.

The US space agency NASA and others have adopted rules for their own satellites, but without legal constraints.

So the space agencies and industry power players are hoping that everyone will voluntarily adopt rules of good behavior, defining things like the required space between satellites, coordination and data exchanges.

Various codes and standards have been put down on paper since the 1990s, notably under the auspices of the United Nations.

One of the most recent charters was created by the Space Safety Coalition—so far, 34 actors including Airbus, Intelsat and the OneWeb constellation project have signed on.

The problem with such charters is that one major new satellite constellation project that refuses to play along could make things difficult for everyone.

“It’s a very classic problem with polluters,” says Carissa Christensen, the CEO of Bryce Space and Technology, an analytics and engineering firm.

“This is very typical of issues where there are long-term challenges, and costs and benefits.”

In addition, national space agencies would like to clean up Earth’s orbits, which are now strewn with junk from 60 years of space history.

Three large US rocket stages mysteriously “fragmented” last year, says Bonnal—that created 1,800 pieces of debris.

The French expert says removing just a few large objects a year would help.

One example would be the stages of the Soviet-era Zenit rockets, which each weigh nine tons and are nine meters long. Every month, they pass within 200 meters of one another.

If two of them collide, it would double the number of objects in orbit.

But for now, no one knows how to remove these giant objects from space.

In the short term, a best practices manual may be the best solution.

Experts also hope that SpaceX manages to maintain control of its satellites as Starlink takes shape.

Already, of the first 60 satellites launched, three of them—five percent—stopped responding after just a month in orbit.

New research on giant radio galaxies defies conventional wisdom

Conventional wisdom tells us that large objects appear smaller as they get farther from us, but this fundamental law of classical physics is reversed when we observe the distant universe.

Credit: CC0 Public Domain

OCTOBER 25, 2019

by Michelle Ulyatt, University of Kent

Astrophysicists at the University of Kent simulated the development of the biggest objects in the universe to help explain how galaxies and other cosmic bodies were formed. By looking at the distant universe, it is possible to observe it in a past state, when it was still at a formative stage. At that time, galaxies were growing and supermassive black holes were violently expelling enormous amounts of gas and energy. This matter accumulated into pairs of reservoirs, which formed the biggest objects in the universe, so-called giant radio galaxies. These giant radio galaxies stretch across a large part of the Universe. Even moving at the speed of light, it would take several million years to cross one.

Professor Michael D. Smith of the Centre for Astrophysics and Planetary Science, and student Justin Donohoe collaborated on the research. They expected to find that as they simulated objects farther into the distant universe, they would appear smaller, but in fact they found the opposite.

Professor Smith said: “When we look far into the distant universe, we are observing objects way in the past—when they were young. We expected to find that these distant giants would appear as a comparatively small pair of vague lobes. To our surprise, we found that these giants still appear enormous even though they are so far away.”

Radio galaxies have long been known to be powered by twin jets which inflate their lobes and create giant cavities. The team performed simulations using the Forge supercomputer, generating three-dimensional hydrodynamic models that recreated the effects of these jets. They then compared the resulting images to observations of the distant galaxies. Differences were assessed using a new classification index, the Limb Brightening Index (LB Index), which measures changes to the orientation and size of the objects.

Professor Smith said: “We already know that once you are far enough away, the Universe acts like a magnifying glass and objects start to increase in size in the sky. Because of the distance, the objects we observed are extremely faint, which means we can only see the brightest parts of them, the hot spots. These hot spots occur at the outer edges of the radio galaxy and so they appear to be larger than ever, confounding our initial expectations.”
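The “magnifying glass” behavior Smith describes is the turnover of the angular diameter distance: it rises with redshift, peaks near z ≈ 1.6, then falls, so beyond that point a fixed-size object subtends a larger angle the farther away it is. A minimal numerical sketch, assuming a flat Lambda-CDM cosmology with H0 = 70 km/s/Mpc and Omega_m = 0.3 (illustrative values, not taken from the paper):

```python
import math

# Illustrative flat Lambda-CDM parameters (assumptions, not from the paper).
H0_KM_S_MPC = 70.0
OMEGA_M = 0.3
OMEGA_L = 0.7
C_KM_S = 299792.458
HUBBLE_DIST_MPC = C_KM_S / H0_KM_S_MPC

def angular_diameter_distance_mpc(z, steps=10000):
    """Comoving distance by trapezoidal integration, then D_A = D_C/(1+z)."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight / e
    d_c = HUBBLE_DIST_MPC * total * dz
    return d_c / (1 + z)

# D_A peaks near z ~ 1.6 and then falls: a fixed-size radio galaxy at
# z = 3 looks BIGGER on the sky than the same object at z = 1.6.
d_low, d_peak, d_high = (angular_diameter_distance_mpc(z) for z in (0.5, 1.6, 3.0))
```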

The full research, “The Morphological Classification of distant radio galaxies explored with three-dimensional simulations,” has been published in the Monthly Notices of the Royal Astronomical Society.

How to spot a wormhole (if they exist)

An artist’s concept illustrates a supermassive black hole. A new theoretical study outlines a method that could be used to search for wormholes (a speculative phenomenon) in the background of supermassive black holes. Credit: NASA/JPL-Caltech

A new study outlines a method for detecting a speculative phenomenon that has long captured the imagination of sci-fi fans: wormholes, which form a passage between two separate regions of spacetime.

OCTOBER 23, 2019

by Charlotte Hsu, University at Buffalo

Such pathways could connect one area of our universe to a different time and/or place within our universe, or to a different universe altogether.

Whether wormholes exist is up for debate. But in a paper published on Oct. 10 in Physical Review D, physicists describe a technique for detecting these bridges.

The method focuses on spotting a wormhole around Sagittarius A*, an object that’s thought to be a supermassive black hole at the heart of the Milky Way galaxy. While there’s no evidence of a wormhole there, it’s a good place to look for one because wormholes are expected to require extreme gravitational conditions, such as those present at supermassive black holes.

In the new paper, scientists write that if a wormhole does exist at Sagittarius A*, nearby stars would be influenced by the gravity of stars at the other end of the passage. As a result, it would be possible to detect the presence of a wormhole by searching for small deviations in the expected orbit of stars near Sagittarius A*.

“If you have two stars, one on each side of the wormhole, the star on our side should feel the gravitational influence of the star that’s on the other side. The gravitational flux will go through the wormhole,” says Dejan Stojkovic, Ph.D., cosmologist and professor of physics in the University at Buffalo College of Arts and Sciences. “So if you map the expected orbit of a star around Sagittarius A*, you should see deviations from that orbit if there is a wormhole there with a star on the other side.”
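To get a feel for how small these deviations would be, here is an illustrative order-of-magnitude comparison. All specific numbers (a 10-solar-mass star 100 AU beyond the wormhole, S2 near a roughly 120 AU pericenter) are my own assumptions for the sketch, not values from the paper:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

# Acceleration of S2 toward Sgr A* near pericenter (~120 AU; illustrative).
m_bh = 4.0e6 * M_SUN
r_peri = 120 * AU
a_bh = G * m_bh / r_peri**2          # ~1.6 m/s^2

# Pull from a hypothetical 10 M_sun star 100 AU beyond the wormhole.
m_star = 10 * M_SUN
d = 100 * AU
a_pert = G * m_star / d**2           # ~6e-6 m/s^2

# The perturbation is a few parts per million of the dominant acceleration,
# which is why very precise, long-baseline tracking of S2 is needed.
ratio = a_pert / a_bh
```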

Stojkovic conducted the study with first author De-Chang Dai, Ph.D., of Yangzhou University in China and Case Western Reserve University.

A close look at S2, a star orbiting Sagittarius A*

Stojkovic notes that if wormholes are ever discovered, they’re not going to be the kind that science fiction often envisions.

“Even if a wormhole is traversable, people and spaceships most likely aren’t going to be passing through,” he says. “Realistically, you would need a source of negative energy to keep the wormhole open, and we don’t know how to do that. To create a huge wormhole that’s stable, you need some magic.”

Nevertheless, wormholes—traversable or not—are an interesting theoretical phenomenon to study. While there is no experimental evidence that these passageways exist, they are possible—according to theory. As Stojkovic explains, wormholes are “a legitimate solution to Einstein’s equations.”

The research in Physical Review D focuses on how scientists could hunt for a wormhole by looking for perturbations in the path of S2, a star that astronomers have observed orbiting Sagittarius A*.

While current surveillance techniques are not yet precise enough to reveal the presence of a wormhole, Stojkovic says that collecting data on S2 over a longer period of time or developing techniques to track its movement more precisely would make such a determination possible. These advancements aren’t too far off, he says, and could happen within one or two decades.

Stojkovic cautions, however, that while the new method could be used to detect a wormhole if one is there, it will not strictly prove that a wormhole is present.

“When we reach the precision needed in our observations, we may be able to say that a wormhole is the most likely explanation if we detect perturbations in the orbit of S2,” he says. “But we cannot say that, ‘Yes, this is definitely a wormhole.’ There could be some other explanation, something else on our side perturbing the motion of this star.”

Though the paper focuses on traversable wormholes, the technique it outlines could indicate the presence of either a traversable or non-traversable wormhole, Stojkovic says. He explains that because gravity is the curvature of spacetime, the effects of gravity are felt on both sides of a wormhole, whether objects can pass through or not.

Unique properties of quantum material explained for first time

This new understanding of the topological material will make it easier for engineers to use it in new applications.

Credit: CC0 Public Domain

OCTOBER 23, 2019

by Steve Tally, Purdue University

The characteristics of a new, iron-containing type of material that is thought to have future applications in nanotechnology and spintronics have been determined at Purdue University.

The native material, a topological insulator, is an unusual type of three-dimensional (3-D) system that has the interesting property of not significantly changing its crystal structure when it changes electronic phases—unlike water, for example, which goes from ice to liquid to steam. More important, the material has an electrically conductive surface but a non-conducting (insulating) core.

However, once iron is introduced into the native material through a process called doping, certain structural rearrangements appear and new magnetic properties emerge, which the team characterized using high-performance computational methods.

“These new materials, these topological insulators, have attracted quite a bit of attention because they display new states of matter,” said Jorge Rodriguez, associate professor of physics and astronomy.

“The addition of iron ions introduces new magnetic properties, giving topological insulators new potential technological applications,” Rodriguez said. “With the addition of magnetic dopants, such as iron ions, to topological insulators, new physical phenomena are expected as a result of the combination of topological and magnetic properties.”

In 2016, three scientists received the Nobel Prize in physics for their work on related materials.

But for all of the fascination and promise of iron-containing topological insulators, using these materials in nanotechnology requires a better understanding of how their structural, electronic, and magnetic properties work together.

Rodriguez said his work uses supercomputers to model Mössbauer spectroscopy, a technique that detects very small structural and electronic changes, in order to understand what other scientists have been observing experimentally in iron-containing systems.

“By using the laws of quantum mechanics in a computational setting, we were able to use a modeling technique called density functional theory, which solves the basic equations of quantum mechanics for this material, and we were able to fully explain the experimental results,” Rodriguez said. “For the first time we were able to establish a relationship between the experimental data produced by Mössbauer spectroscopy, and the 3-D structure of this material. This new understanding of the topological material will make it easier for engineers to use it in new applications.”

The work was published in Physical Review B.

Scientists confirm a new ‘magic number’ for neutrons


Credit: CC0 Public Domain

OCTOBER 25, 2019


An international collaboration led by scientists from the University of Hong Kong, RIKEN (Japan), and CEA (France) has used the RI Beam Factory (RIBF) at the RIKEN Nishina Center for Accelerator-based Science to show that 34 is a “magic number” for neutrons, meaning that atomic nuclei with 34 neutrons are more stable than would normally be expected. Earlier experiments had suggested, but not clearly demonstrated, that this would be the case.

The experiments, published in Physical Review Letters, were performed using calcium-54, an unstable nucleus with 20 protons and 34 neutrons. Through the experiments, the researchers showed that it exhibits strong shell closure: its filled neutron shells make it stable in much the same way that atoms with closed electron shells, such as helium and neon, are chemically inert.

While it was once believed that protons and neutrons were lumped together like a soup within the nucleus, it is now known that they are organized in shells. With the complete filling of a nuclear shell, a configuration often referred to as a “magic number,” nuclei exhibit distinctive attributes that can be probed in the laboratory. For example, a large energy for the first excited state of a nucleus is indicative of a magic number.

Recent studies on neutron-rich nuclei have hinted that new numbers need to be added to the known, canonical set of 2, 8, 20, 28, 50, 82, and 126.

Initial tests on calcium-54, also carried out at the RIBF in 2013, had already indicated that the magic number should exist. In the new experiment, the focus shifted to determining its actual strength: the team led by Sidong Chen directly measured the number of neutrons occupying the individual shells in calcium-54 by painstakingly knocking out the neutrons one at a time.

To do this, the group used a beam of calcium-54 traveling at around 60% of the speed of light, selected and identified by the BigRIPS isotope separator, and collided it with a thick liquid-hydrogen target (in effect, a target of protons) cooled to just 20 K. The detailed shell structure of the isotope was inferred from the cross-sections for neutrons knocked out in collisions with the protons, allowing the researchers to associate each neutron with a particular shell.

According to Pieter Doornenbal of the Nishina Center, “For the first time, we were able to demonstrate quantitatively that all the neutron shells are completely filled in 54Ca, and that 34 neutrons is indeed a good magic number.” The finding demonstrates that 34 belongs to the set of magic numbers, though its appearance is restricted to a very limited region of the nuclear chart. Sidong Chen continues: “Major efforts in the future will focus on delineating this region. Moreover, for more neutron-rich systems, like 60Ca, further magic numbers are predicted. These ‘exotic’ systems are currently beyond the reach of the RIBF for detailed studies, but we believe that thanks to its increasing capabilities, they will become accessible in the foreseeable future.”

New process could make hydrogen peroxide available in remote places

MIT-developed method may lead to portable devices for making the disinfectant on-site where it’s needed.

David L. Chandler | MIT News Office
October 23, 2019

In a new method to produce hydrogen peroxide portably, an electrolyzer (left) splits water into hydrogen and oxygen. The hydrogen atoms initially form in an electrolyte material (green), which transfers them to a mediator material (red), which then carries them to a separate unit where the mediator comes in contact with oxygen-rich water (blue), where the hydrogen combines with it to form hydrogen peroxide. The mediator then returns to begin the cycle again.
Image courtesy of the researchers.

Hydrogen peroxide, a useful all-purpose disinfectant, is found in most medicine cabinets in the developed world. But in remote villages in developing countries, where it could play an important role in health and sanitation, it can be hard to come by.

Now, a process developed at MIT could lead to a simple, inexpensive, portable device that could produce hydrogen peroxide continuously from just air, water, and electricity, providing a way to sterilize wounds, food-preparation surfaces, and even water supplies.

The new method is described this week in the journal Joule in a paper by MIT students Alexander Murray, Sahag Voskian, and Marcel Schreier and MIT professors T. Alan Hatton and Yogesh Surendranath.

Even at low concentrations, hydrogen peroxide is an effective antibacterial agent, and after carrying out its sterilizing function it breaks down into plain water and oxygen, in contrast to agents such as chlorine, which can leave unwanted byproducts from their production and use.

Hydrogen peroxide is just water with an extra oxygen atom tacked on — it’s H2O2, instead of H2O. That extra oxygen is relatively loosely bound, making it a highly reactive chemical eager to oxidize any other molecules around it. It’s so reactive that in high concentrations it can be used as rocket fuel, and even concentrations of 35 percent require very special handling and shipping procedures. The kind used as a household disinfectant is typically only 3 percent hydrogen peroxide and 97 percent water.
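A quick unit check on those numbers (my own arithmetic; the solution density is an assumption, chosen close to that of water): a 3 percent by weight solution works out to a bit under 1 mole of H2O2 per liter.

```python
# Molarity of 3% (w/w) hydrogen peroxide, assuming density ~1.01 g/mL.
M_H = 1.008
M_O = 15.999
molar_mass_h2o2 = 2 * M_H + 2 * M_O       # ~34.0 g/mol

mass_solution_g = 1000.0                  # take 1 kg of solution
mass_h2o2_g = 0.03 * mass_solution_g      # 30 g of H2O2
density_g_per_ml = 1.01                   # assumed; close to pure water

volume_l = mass_solution_g / density_g_per_ml / 1000.0
molarity = (mass_h2o2_g / molar_mass_h2o2) / volume_l   # ~0.89 mol/L
```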

Because high concentrations are hard to transport, and low concentrations, being mostly water, are uneconomical to ship, the material is often hard to get in places where it could be especially useful, such as remote communities with untreated water. (Bacteria in water supplies can be effectively controlled by adding hydrogen peroxide.) As a result, many research groups around the world have been pursuing approaches to developing some form of portable hydrogen peroxide production equipment.

Most of the hydrogen peroxide produced in the industrialized world is made in large chemical plants, where methane, or natural gas, is used to provide a source of hydrogen, which is then reacted with oxygen in a catalytic process under high heat. This process is energy-intensive and not easily scalable, requiring large equipment and a steady supply of methane, so it does not lend itself to smaller units or remote locations.

“There’s a growing community interested in portable hydrogen peroxide,” Surendranath says, “because of the appreciation that it would really meet a lot of needs, both on the industrial side as well as in terms of human health and sanitation.”

Other processes developed so far for potentially portable systems have key limitations. For example, most catalysts that promote the formation of hydrogen peroxide from hydrogen and oxygen also make a lot of water, leading to low concentrations of the desired product. Also, processes that involve electrolysis, as this new process does, often have a hard time separating the produced hydrogen peroxide from the electrolyte material used in the process, again leading to low efficiency.

Surendranath and the rest of the team solved the problem by breaking the process down into two separate steps. First, electricity (ideally from solar cells or windmills) is used to break down water into hydrogen and oxygen, and the hydrogen then reacts with a “carrier” molecule. This molecule — a compound called anthraquinone, in these initial experiments — is then introduced into a separate reaction chamber where it meets with oxygen taken from the outside air, and a pair of hydrogen atoms binds to an oxygen molecule (O2) to form the hydrogen peroxide. In the process, the carrier molecule is restored to its original state and returns to carry out the cycle all over again, so none of this material is consumed.

The process could address numerous challenges, Surendranath says, by making clean water, first-aid care for wounds, and sterile food preparation surfaces more available in places where they are presently scarce or unavailable.

“Even at fairly low concentrations, you can use it to disinfect water of microbial contaminants and other pathogens,” Surendranath says. And, he adds, “at higher concentrations, it can be used even to do what’s called advanced oxidation,” where in combination with UV light it can be used to decontaminate water of even strong industrial wastes, for example from mining operations or hydraulic fracking.

So, for example, a portable hydrogen peroxide plant might be set up adjacent to a fracking or mining site and used to clean up its effluent, then moved to another location once operations cease at the original site.

In this initial proof-of-concept unit, the concentration of hydrogen peroxide produced is still low, but further engineering of the system should lead to being able to produce more concentrated output, Surendranath says. “One of the ways to do that is to just increase the concentration of the mediator, and fortunately, our mediator has already been used in flow batteries at really high concentrations, so we think there’s a route toward being able to increase those concentrations,” he says.

“It’s kind of an amazing process,” he says, “because you take abundant things, water, air and electricity, that you can source locally, and you use it to make this important chemical that you can use to actually clean up the environment and for sanitation and water quality.”

“The ability to create a hydrogen peroxide solution in water without electrolytes, salt, base, etc., all of which are intrinsic to other electrochemical processes, is noteworthy,” says Shannon Stahl, a professor of chemistry at the University of Wisconsin, who was not involved in this work. Stahl adds that “Access to salt-free aqueous solutions of H2O2 has broad implications for practical applications.”

Stahl says that “This work represents an innovative application of ‘mediated electrolysis.’ Mediated electrochemistry provides a means to merge conventional chemical processes with electrochemistry, and this is a particularly compelling demonstration of this concept. … There are many potential applications of this concept.”

Biologists build proteins that avoid crosstalk with existing molecules

Engineered signaling pathways could offer a new way to build synthetic biology circuits.

MIT researchers have found a way to generate many pairs of signaling proteins that don’t cross-talk with each other or with naturally occurring signaling proteins found in bacterial cells.
Image courtesy of the researchers

Anne Trafton | MIT News Office
October 23, 2019

Inside a living cell, many important messages are communicated via interactions between proteins. For these signals to be accurately relayed, each protein must interact only with its specific partner, avoiding unwanted crosstalk with any similar proteins.

A new MIT study sheds light on how cells are able to prevent crosstalk between these proteins, and also shows that there remains a huge number of possible protein interactions that cells have not used for signaling. This means that synthetic biologists could generate new pairs of proteins that can act as artificial circuits for applications such as diagnosing disease, without interfering with cells’ existing signaling pathways.

“Using our high-throughput approach, you can generate many orthogonal versions of a particular interaction, allowing you to see how many different insulated versions of that protein complex can be built,” says Conor McClune, an MIT graduate student and the lead author of the study.

In the new paper, which appears today in Nature, the researchers produced novel pairs of signaling proteins and demonstrated how they can be used to link new signals to new outputs by engineering E. coli cells that produce yellow fluorescence after encountering a specific plant hormone.

Michael Laub, an MIT professor of biology, is the senior author of the study. Other authors are recent MIT graduate Aurora Alvarez-Buylla and Christopher Voigt, the Daniel I.C. Wang Professor of Advanced Biotechnology.

New combinations

In this study, the researchers focused on a type of signaling pathway called two-component signaling, which is found in bacteria and some other organisms. A wide variety of two-component pathways has evolved through a process in which cells duplicate genes for signaling proteins they already have, and then mutate them, creating families of similar proteins.

“It’s intrinsically advantageous for organisms to be able to expand this small number of signaling families quite dramatically, but it runs the risk that you’re going to have crosstalk between these systems that are all very similar,” Laub says. “It then becomes an interesting challenge for cells: How do you maintain the fidelity of information flow, and how do you couple specific inputs to specific outputs?”

Most of these signaling pairs consist of an enzyme called a kinase and its substrate, which is activated by the kinase. Bacteria can have dozens or even hundreds of these protein pairs relaying different signals.

About 10 years ago, Laub showed that the specificity between bacterial kinases and their substrates is determined by only five amino acids in each of the partner proteins. This raised the question of whether cells have already used up, or are coming close to using up, all of the possible unique combinations that won’t interfere with existing pathways.
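Back-of-envelope arithmetic suggests why that question is worth asking: if specificity is set by just five amino acid positions, each drawn from the 20 standard amino acids, the sequence space per protein is already in the millions, far larger than the dozens to hundreds of pairs a typical bacterium carries. (This is an illustrative calculation, not a figure from the study.)

```python
# Rough size of the specificity-sequence space when partner recognition
# is determined by five amino acid positions.
AMINO_ACIDS = 20           # the standard amino acids
SPECIFICITY_POSITIONS = 5  # residues identified in Laub's earlier work

sequence_space = AMINO_ACIDS ** SPECIFICITY_POSITIONS
print(sequence_space)  # 3200000 possible five-residue interfaces
```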

Some previous studies from other labs had suggested that the possible number of interactions that would not interfere with each other might be running out, but the evidence was not definitive. The MIT researchers decided to take a systematic approach in which they began with one pair of existing E. coli signaling proteins, known as PhoQ and PhoP, and then introduced mutations in the regions that determine their specificity.

This yielded more than 10,000 pairs of proteins. The researchers tested each kinase against the substrates to see which it would activate, and identified about 200 pairs that interact with each other but not with the parent proteins, the other novel pairs, or any other kinase-substrate family found in E. coli.
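The screening logic amounts to a filter over a kinase-by-substrate activity matrix: keep a pair only if the kinase activates its own substrate and neither partner cross-reacts with anything else. The sketch below illustrates that criterion on hypothetical data (the names, threshold, and measurements are invented; the study measured roughly 10,000 real variants):

```python
def orthogonal_pairs(activity, threshold=0.5):
    """activity[(kinase, substrate)] -> measured activation level (hypothetical units)."""
    kinases = {k for k, _ in activity}
    substrates = {s for _, s in activity}
    keep = []
    for k in sorted(kinases):
        s = k.replace("kin", "sub")  # naming convention here: pair i is (kin_i, sub_i)
        if activity.get((k, s), 0) < threshold:
            continue  # a pair must activate its own substrate
        crosstalk = any(activity.get((k, s2), 0) >= threshold
                        for s2 in substrates if s2 != s) or \
                    any(activity.get((k2, s), 0) >= threshold
                        for k2 in kinases if k2 != k)
        if not crosstalk:
            keep.append((k, s))
    return keep

measurements = {
    ("kin1", "sub1"): 0.9, ("kin1", "sub2"): 0.1,
    ("kin2", "sub2"): 0.3,  # weak self-activation: rejected
    ("kin2", "sub1"): 0.2,
    ("kin3", "sub3"): 0.95,
}
print(orthogonal_pairs(measurements))  # [('kin1', 'sub1'), ('kin3', 'sub3')]
```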

“What we found is that it’s pretty easy to find combinations that will work, where two proteins interact to transduce a signal and they don’t talk to anything else inside the cell,” Laub says.

He now plans to try to reconstruct the evolutionary history that has led to certain protein pairs being used by cells while many other possible combinations have not naturally evolved.

Synthetic circuits

This study also offers a new strategy for creating new synthetic biology circuits based on protein pairs that don’t crosstalk with other cellular proteins, the researchers say. To demonstrate that possibility, they took one of their new protein pairs and modified the kinase so that it would be activated by a plant hormone called trans-zeatin, and engineered the substrate so that it would glow yellow when the kinase activated it.

“This shows that we can overcome one of the challenges of putting a synthetic circuit in a cell, which is that the cell is already filled with signaling proteins,” Voigt says. “When we try to move a sensor or circuit between species, one of the biggest problems is that it interferes with the pathways already there.”

One possible application for this new approach is designing circuits that detect the presence of other microbes. Such circuits could be useful for creating probiotic bacteria that could help diagnose infectious diseases.

“Bacteria can be engineered to sense and respond to their environment, with widespread applications such as ‘smart’ gut bacteria that could diagnose and treat inflammation, diabetes, or cancer, or soil microbes that maintain proper nitrogen levels and eliminate the need for fertilizer. To build such bacteria, synthetic biologists require genetically encoded ‘sensors,’” says Jeffrey Tabor, an associate professor of bioengineering and biosciences at Rice University.

“One of the major limitations of synthetic biology has been our genetic parts failing in new organisms for reasons that we don’t understand (like cross-talk). What this paper shows is that there is a lot of space available to re-engineer circuits so that this doesn’t happen,” says Tabor, who was not involved in the research.

If adapted for use in human cells, this approach could also help researchers design new ways to program human T cells to destroy cancer cells. This type of therapy, known as CAR-T cell therapy, has been approved to treat some blood cancers and is being developed for other cancers as well.

Although the signaling proteins involved would be different from those in this study, “the same principle applies in that the therapeutic relies on our ability to take sets of engineered proteins and put them into a novel genomic context, and hope that they don’t interfere with pathways already in the cells,” McClune says.

The research was funded by the Howard Hughes Medical Institute, the Office of Naval Research, and the National Institutes of Health Pre-Doctoral Training Grant.

Putting the “bang” in the Big Bang

Physicists simulate critical “reheating” period that kickstarted the Big Bang in the universe’s first fractions of a second.

Image: Christine Daniloff, MIT, ESA/Hubble and NASA

Jennifer Chu | MIT News Office
October 24, 2019

As the Big Bang theory goes, somewhere around 13.8 billion years ago the universe exploded into being, as an infinitely small, compact fireball of matter that cooled as it expanded, triggering reactions that cooked up the first stars and galaxies, and all the forms of matter that we see (and are) today.

Just before the Big Bang launched the universe onto its ever-expanding course, physicists believe, there was another, more explosive phase of the early universe at play: cosmic inflation, which lasted less than a trillionth of a second. During this period, matter — a cold, homogeneous goop — inflated exponentially quickly before processes of the Big Bang took over to more slowly expand and diversify the infant universe.

Recent observations have independently supported theories for both the Big Bang and cosmic inflation. But the two processes are so radically different from each other that scientists have struggled to conceive of how one followed the other.

Now physicists at MIT, Kenyon College, and elsewhere have simulated in detail an intermediary phase of the early universe that may have bridged cosmic inflation with the Big Bang. This phase, known as “reheating,” occurred at the end of cosmic inflation and involved processes that wrestled inflation’s cold, uniform matter into the ultrahot, complex soup that was in place at the start of the Big Bang.

“The postinflation reheating period sets up the conditions for the Big Bang, and in some sense puts the ‘bang’ in the Big Bang,” says David Kaiser, the Germeshausen Professor of the History of Science and professor of physics at MIT. “It’s this bridge period where all hell breaks loose and matter behaves in anything but a simple way.”

Kaiser and his colleagues simulated in detail how multiple forms of matter would have interacted during this chaotic period at the end of inflation. Their simulations show that the extreme energy that drove inflation could have been redistributed just as quickly, within an even smaller fraction of a second, and in a way that produced conditions that would have been required for the start of the Big Bang.

The team found this extreme transformation would have been even faster and more efficient if quantum effects modified the way that matter responded to gravity at very high energies, deviating from the way Einstein’s theory of general relativity predicts matter and gravity should interact.

“This enables us to tell an unbroken story, from inflation to the postinflation period, to the Big Bang and beyond,” Kaiser says. “We can trace a continuous set of processes, all with known physics, to say this is one plausible way in which the universe came to look the way we see it today.”

The team’s results appear today in Physical Review Letters. Kaiser’s co-authors are lead author Rachel Nguyen and John T. Giblin, both of Kenyon College, and former MIT graduate student Evangelos Sfakianakis and Jorinde van de Vis, both of Leiden University in the Netherlands.

“In sync with itself”

The theory of cosmic inflation, first proposed in the 1980s by MIT’s Alan Guth, the V.F. Weisskopf Professor of Physics, predicts that the universe began as an extremely small speck of matter, possibly about a hundred-billionth the size of a proton. This speck was filled with ultra-high-energy matter, so energetic that the pressures within generated a repulsive gravitational force — the driving force behind inflation. Like a spark to a fuse, this gravitational force exploded the infant universe outward, at an ever-faster rate, inflating it to nearly an octillion times its original size (that’s the number 1 followed by 26 zeroes), in less than a trillionth of a second.

Kaiser and his colleagues attempted to work out what the earliest phases of reheating — that bridge interval at the end of cosmic inflation and just before the Big Bang — might have looked like.

“The earliest phases of reheating should be marked by resonances. One form of high-energy matter dominates, and it’s shaking back and forth in sync with itself across large expanses of space, leading to explosive production of new particles,” Kaiser says. “That behavior won’t last forever, and once it starts transferring energy to a second form of matter, its own swings will get more choppy and uneven across space. We wanted to measure how long it would take for that resonant effect to break up, and for the produced particles to scatter off each other and come to some sort of thermal equilibrium, reminiscent of Big Bang conditions.”

The team’s computer simulations represent a large lattice onto which they mapped multiple forms of matter and tracked how their energy and distribution changed in space and over time as the scientists varied certain conditions. The simulation’s initial conditions were based on a particular inflationary model — a set of predictions for how the early universe’s distribution of matter may have behaved during cosmic inflation.
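The study's multi-field simulations in an expanding spacetime are far beyond a short snippet, but their basic ingredient — discretizing a field on a grid and stepping its equation of motion in time — can be sketched for a single free scalar field in one dimension. Everything here (grid size, mass, time step, initial bump) is an arbitrary toy choice for illustration:

```python
import math

# Toy 1D lattice evolution of a free scalar field phi obeying
#   d^2 phi / dt^2 = laplacian(phi) - m^2 * phi
# on a periodic grid, integrated with a symplectic Euler scheme.
N, dx, dt, m = 256, 0.1, 0.02, 1.0
phi = [math.exp(-((i * dx - N * dx / 2) ** 2)) for i in range(N)]  # Gaussian bump
pi = [0.0] * N  # field momentum, d phi / dt

def laplacian(f, i):
    # second difference with periodic boundary conditions
    return (f[(i - 1) % N] + f[(i + 1) % N] - 2 * f[i]) / dx**2

for _ in range(1000):
    acc = [laplacian(phi, i) - m**2 * phi[i] for i in range(N)]
    pi = [p + dt * a for p, a in zip(pi, acc)]
    phi = [f + dt * p for f, p in zip(phi, pi)]

print(max(abs(f) for f in phi))  # the integration stays bounded
```

The real simulations track several interacting fields on a three-dimensional lattice and include the expansion of space, but the time-stepping idea is the same.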

The scientists chose this particular model of inflation over others because its predictions closely match high-precision measurements of the cosmic microwave background — a remnant glow of radiation emitted just 380,000 years after the Big Bang, which is thought to contain traces of the inflationary period.

A universal tweak

The simulation tracked the behavior of two types of matter that may have been dominant during inflation, both very similar to the Higgs boson, the particle recently observed in accelerator experiments.

Before running their simulations, the team added a slight “tweak” to the model’s description of gravity. While ordinary matter that we see today responds to gravity just as Einstein predicted in his theory of general relativity, matter at much higher energies, such as what’s thought to have existed during cosmic inflation, should behave slightly differently, interacting with gravity in ways that are modified by quantum mechanics, the physics that governs interactions at the smallest scales.

In Einstein’s theory of general relativity, the strength of gravity is represented as a constant, with what physicists refer to as a minimal coupling, meaning that, no matter the energy of a particular particle, it will respond to gravitational effects with a strength set by a universal constant.

However, at the very high energies that are predicted in cosmic inflation, matter interacts with gravity in a slightly more complicated way. Quantum-mechanical effects predict that the strength of gravity can vary in space and time when interacting with ultra-high-energy matter — a phenomenon known as nonminimal coupling.

Kaiser and his colleagues incorporated a nonminimal coupling term into their inflationary model and observed how the distribution of matter and energy changed as they turned this quantum effect up or down.

In the end they found that the stronger the quantum-modified gravitational effect was in affecting matter, the faster the universe transitioned from the cold, homogeneous matter in inflation to the much hotter, diverse forms of matter that are characteristic of the Big Bang.

By tuning this quantum effect, they could make this crucial transition take place over two to three “e-folds,” where one e-fold is the time it takes the universe to grow by a factor of e, roughly tripling in size. By comparison, inflation itself took place over about 60 e-folds.
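The e-fold bookkeeping is easy to check: after n e-folds the universe has grown by a factor of e raised to the n, so the simulated reheating window of 2 to 3 e-folds corresponds to growth by a factor of roughly 7 to 20, while inflation's ~60 e-folds give a factor of about 10^26, consistent with the expansion quoted earlier in the article:

```python
import math

# One e-fold multiplies the universe's linear size by e (~2.718, "roughly triple").
def growth_factor(e_folds):
    return math.e ** e_folds

print(round(growth_factor(2), 1))  # ~7.4x: fastest simulated reheating
print(round(growth_factor(3), 1))  # ~20.1x: slower simulated reheating
print(f"{growth_factor(60):.2e}")  # ~1.14e+26: inflation's ~60 e-folds
```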

“Reheating was an insane time, when everything went haywire,” Kaiser says. “We show that matter was interacting so strongly at that time that it could relax correspondingly quickly as well, beautifully setting the stage for the Big Bang. We didn’t know that to be the case, but that’s what’s emerging from these simulations, all with known physics. That’s what’s exciting for us.”

“There are hundreds of proposals for producing the inflationary phase, but the transition between the inflationary phase and the so-called ‘hot big bang’ is the least understood part of the story,” says Richard Easther, professor of physics at the University of Auckland, who was not involved in the research. “This paper breaks new ground by accurately simulating the postinflationary phase in models with many individual fields and complex kinetic terms. These are extremely challenging numerical simulations, and extend the state of the art for studies of nonlinear dynamics in the very early universe.”

This research was supported, in part, by the U.S. Department of Energy and the National Science Foundation.