Physics experiment with ultrafast laser pulses produces a previously unseen phase of matter.
David L. Chandler | MIT News Office November 11, 2019
Adding energy to any material, such as by heating it, almost always makes its structure less orderly. Ice, for example, with its crystalline structure, melts to become liquid water, with no order at all.
But in new experiments by physicists at MIT and elsewhere, the opposite happens: When a pattern called a charge density wave in a certain material is hit with a fast laser pulse, a whole new charge density wave is created — a highly ordered state, instead of the expected disorder. The surprising finding could help to reveal unseen properties in materials of all kinds.
The discovery is being reported today in the journal Nature Physics, in a paper by MIT professors Nuh Gedik and Pablo Jarillo-Herrero, postdoc Anshul Kogar, graduate student Alfred Zong, and 17 others at MIT, Harvard University, SLAC National Accelerator Laboratory, Stanford University, and Argonne National Laboratory.
The experiments made use of a material called lanthanum tritelluride, which naturally forms itself into a layered structure. In this material, a wavelike pattern of electrons in high- and low-density regions forms spontaneously but is confined to a single direction within the material. But when hit with an ultrafast burst of laser light — less than a picosecond long, or under one trillionth of a second — that pattern, called a charge density wave or CDW, is obliterated, and a new CDW, at right angles to the original, pops into existence.
This new, perpendicular CDW is something that has never been observed before in this material. It exists for only a flash, disappearing within a few more picoseconds. As it disappears, the original one comes back into view, suggesting that its presence had been somehow suppressed by the new one.
Gedik explains that in ordinary materials, the density of electrons within the material is constant throughout their volume, but in certain materials, when they are cooled below some specific temperature, the electrons organize themselves into a CDW with alternating regions of high and low electron density. In lanthanum tritelluride, or LaTe3, the CDW is along one fixed direction within the material. In the other two dimensions, the electron density remains constant, as in ordinary materials.
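The pattern Gedik describes can be pictured as a sinusoidal modulation of electron density along one axis, with the density staying uniform along the other two. A minimal numerical sketch, with all values purely illustrative:

```python
import numpy as np

# Illustrative 1-D charge density wave: a uniform electron density rho0
# modulated sinusoidally along one direction with wavevector q.
rho0 = 1.0           # uniform background density (arbitrary units)
amplitude = 0.2      # CDW modulation depth (illustrative)
q = 2 * np.pi / 5.0  # CDW wavevector: one period every 5 length units

x = np.linspace(0, 20, 401)
rho = rho0 + amplitude * np.cos(q * x)

# Density alternates between high (1.2) and low (0.8) regions along x,
# while along the other two directions it would remain constant.
print(rho.max(), rho.min())
```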
The perpendicular version of the CDW that appears after the burst of laser light has never before been observed in this material, Gedik says. It “just briefly flashes, and then it’s gone,” Kogar says, to be replaced by the original CDW pattern which immediately pops back into view.
Gedik points out that “this is quite unusual. In most cases, when you add energy to a material, you reduce order.”
“It’s as if these two [kinds of CDW] are competing — when one shows up, the other goes away,” Kogar says. “I think the really important concept here is phase competition.”
The idea that two possible states of matter might be in competition and that the dominant mode is suppressing one or more alternative modes is fairly common in quantum materials, the researchers say. This suggests that there may be latent states lurking unseen in many kinds of matter that could be unveiled if a way can be found to suppress the dominant state. That is what seems to be happening in the case of these competing CDW states, which are considered to be analogous to crystal structures because of the predictable, orderly patterns of their subatomic constituents.
Normally, all stable materials are found in their minimum energy states — that is, of all possible configurations of their atoms and molecules, the material settles into the state that requires the least energy to maintain itself. But for a given chemical structure, there may be other possible configurations the material could potentially have, except that they are suppressed by the dominant, lowest-energy state.
“By knocking out that dominant state with light, maybe those other states can be realized,” Gedik says. And because the new states appear and disappear so quickly, “you can turn them on and off,” which may prove useful for some information processing applications.
The possibility that suppressing other phases might reveal entirely new material properties opens up many new areas of research, Kogar says. “The goal is to find phases of material that can only exist out of equilibrium,” he says — in other words, states that would never be attainable without a method, such as this system of fast laser pulses, for suppressing the dominant phase.
Gedik adds that “normally, to change the phase of a material you try chemical changes, or pressure, or magnetic fields. In this work, we are using light to make these changes.”
The new findings may help to better understand the role of phase competition in other systems. This in turn could help answer questions such as why superconductivity occurs in some materials at relatively high temperatures, and may aid the quest to discover even higher-temperature superconductors. "What if all you need to do is shine light on a material, and this new state comes into being?" Gedik says.
The work was supported by the U.S. Department of Energy, SLAC National Accelerator Laboratory, the Skoltech-MIT NGP Program, the Center for Excitonics, and the Gordon and Betty Moore Foundation.
Its extendable appendage can meander through tight spaces and then lift heavy loads.
Jennifer Chu | MIT News Office November 7, 2019
In today’s factories and warehouses, it’s not uncommon to see robots whizzing about, shuttling items or tools from one station to another. For the most part, robots navigate pretty easily across open layouts. But they have a much harder time winding through narrow spaces to carry out tasks such as reaching for a product at the back of a cluttered shelf, or snaking around a car’s engine parts to unscrew an oil cap.
Now MIT engineers have developed a robot designed to extend a chain-like appendage flexible enough to twist and turn in any necessary configuration, yet rigid enough to support heavy loads or apply torque to assemble parts in tight spaces. When the task is complete, the robot can retract the appendage and extend it again, at a different length and shape, to suit the next task.
The appendage design is inspired by the way plants grow, which involves the transport of nutrients, in a fluidized form, up to the plant’s tip. There, they are converted into solid material to produce, bit by bit, a supportive stem.
Likewise, the robot consists of a “growing point,” or gearbox, that pulls a loose chain of interlocking blocks into the box. Gears in the box then lock the chain units together and feed the chain out, unit by unit, as a rigid appendage.
The researchers presented the plant-inspired “growing robot” this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. They envision that grippers, cameras, and other sensors could be mounted onto the robot’s gearbox, enabling it to meander through an aircraft’s propulsion system and tighten a loose screw, or to reach into a shelf and grab a product without disturbing the organization of surrounding inventory, among other tasks.
“Think about changing the oil in your car,” says Harry Asada, professor of mechanical engineering at MIT. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”
“Now we have a robot that can potentially accomplish such tasks,” says Tongxi Yan, a former graduate student in Asada’s lab, who led the work. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”
The team also includes MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara, who presented the results at the conference.
The last foot
The design of the new robot is an offshoot of Asada’s work in addressing the “last one-foot problem” — an engineering term referring to the last step, or foot, of a robot’s task or exploratory mission. While a robot may spend most of its time traversing open space, the last foot of its mission may involve more nimble navigation through tighter, more complex spaces to complete a task.
Engineers have devised various concepts and prototypes to address the last one-foot problem, including robots made from soft, balloon-like materials that grow like vines to squeeze through narrow crevices. But Asada says such soft extendable robots aren’t sturdy enough to support “end effectors,” or add-ons such as grippers, cameras, and other sensors that would be necessary in carrying out a task, once the robot has wormed its way to its destination.
“Our solution is not actually soft, but a clever use of rigid materials,” says Asada, who is the Ford Foundation Professor of Engineering.
Once the team defined the general functional elements of plant growth, they looked to mimic them, at an abstract level, in an extendable robot.
“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada says.
The researchers designed a gearbox to represent the robot’s “growing tip,” akin to the bud of a plant, where, as more nutrients flow up to the site, the tip feeds out more rigid stem. Within the box, they fit a system of gears and motors, which works to pull up a fluidized material — in this case, a bendy sequence of 3-D-printed plastic units interlocked with each other, similar to a bicycle chain.
As the chain is fed into the box, it turns around a winch, which feeds it through a second set of motors programmed to lock certain units in the chain to their neighboring units, creating a rigid appendage as it is fed out of the box.
The researchers can program the robot to lock certain units together while leaving others unlocked, to form specific shapes, or to “grow” in certain directions. In experiments, they were able to program the robot to turn around an obstacle as it extended or grew out from its base.
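The lock-or-bend behavior can be modeled as a chain of unit-length links, each either locked straight or free to bend at its joint. A toy 2-D sketch of this idea (the function name, angles, and unit counts are illustrative, not the team's actual control software):

```python
import math

def chain_tip(units):
    """Compute the 2-D tip position of a chain of unit-length links.

    Each unit is (locked, bend_angle_deg): a locked unit continues
    straight; an unlocked unit bends the heading by its joint angle.
    """
    x = y = 0.0
    heading = 0.0  # radians, 0 = pointing along +x
    for locked, bend_deg in units:
        if not locked:
            heading += math.radians(bend_deg)
        x += math.cos(heading)
        y += math.sin(heading)
    return x, y

# Four locked (straight) units, then one unit bending 90 degrees, then
# two more locked units: the appendage extends four units along x, then
# three units (including the bending unit) along y.
tip = chain_tip([(True, 0)] * 4 + [(False, 90)] + [(True, 0)] * 2)
print(tip)
```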
“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan says.
When the chain is locked and rigid, it is strong enough to support a heavy, one-pound weight. If a gripper were attached to the robot’s growing tip, or gearbox, the researchers say the robot could potentially grow long enough to meander through a narrow space, then apply enough torque to loosen a bolt or unscrew a cap.
Auto maintenance is a good example of tasks the robot could assist with, according to Kamienski. “The space under the hood is relatively open, but it’s that last bit where you have to navigate around an engine block or something to get to the oil filter, that a fixed arm wouldn’t be able to navigate around. This robot could do something like that.”
Wielding state-of-the-art technologies and techniques, a team of Clemson University astrophysicists has added a novel approach to quantifying one of the most fundamental laws of the universe.
In a paper published Friday, Nov. 8, in The Astrophysical Journal, Clemson scientists Marco Ajello, Abhishek Desai, Lea Marcotulli and Dieter Hartmann have collaborated with six other scientists around the world to devise a new measurement of the Hubble Constant, the unit of measure used to describe the rate of expansion of the universe.
“Cosmology is about understanding the evolution of our universe—how it evolved in the past, what it is doing now and what will happen in the future,” said Ajello, an associate professor in the College of Science’s department of physics and astronomy. “Our knowledge rests on a number of parameters—including the Hubble Constant—that we strive to measure as precisely as possible. In this paper, our team analyzed data obtained from both orbiting and ground-based telescopes to come up with one of the newest measurements yet of how quickly the universe is expanding.”
The concept of an expanding universe was advanced by the American astronomer Edwin Hubble (1889-1953), who is the namesake for the Hubble Space Telescope. In the early 20th century, Hubble became one of the first astronomers to deduce that the universe was composed of multiple galaxies. His subsequent research led to his most renowned discovery: that galaxies were moving away from each other at a speed in proportion to their distance.
Hubble originally estimated the expansion rate to be 500 kilometers per second per megaparsec, with a megaparsec being equivalent to about 3.26 million light years. Hubble concluded that a galaxy two megaparsecs away from our galaxy was receding twice as fast as a galaxy only one megaparsec away. This estimate became known as the Hubble Constant, which proved for the first time that the universe was expanding. Astronomers have been recalibrating it—with mixed results—ever since.
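Hubble's proportionality can be written as velocity equals the constant times distance, which makes the doubling described above direct to verify (using Hubble's original value purely for illustration):

```python
# Hubble's law: recession velocity v = H0 * d.
H0 = 500.0  # Hubble's original estimate, km/s per megaparsec

def recession_velocity(distance_mpc, h0=H0):
    """Recession velocity in km/s for a galaxy at the given distance (Mpc)."""
    return h0 * distance_mpc

v1 = recession_velocity(1.0)  # galaxy one megaparsec away
v2 = recession_velocity(2.0)  # galaxy two megaparsecs away
print(v1, v2)  # the galaxy twice as far away recedes twice as fast
```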
With the help of rapidly advancing technology, astronomers arrived at measurements that differed significantly from Hubble's original calculation, revising the expansion rate downward to between 50 and 100 kilometers per second per megaparsec. And in the past decade, ultra-sophisticated instruments, such as the Planck satellite, have dramatically improved the precision of these measurements.
In a paper titled “A New Measurement of the Hubble Constant and Matter Content of the Universe using Extragalactic Background Light-Gamma Ray Attenuation,” the collaborative team compared the latest gamma-ray attenuation data from the Fermi Gamma-ray Space Telescope and Imaging Atmospheric Cherenkov Telescopes to devise their estimates from extragalactic background light models. This novel strategy led to a measurement of approximately 67.5 kilometers per second per megaparsec.
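One way to build intuition for such a value: inverting the constant gives the "Hubble time," a rough characteristic timescale for the expansion. This conversion is standard textbook arithmetic, not a result from the paper:

```python
H0 = 67.5                 # km/s per megaparsec (the paper's value)
KM_PER_MPC = 3.0857e19    # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

h0_per_second = H0 / KM_PER_MPC                   # convert H0 to units of 1/s
hubble_time_years = 1.0 / h0_per_second / SECONDS_PER_YEAR
print(round(hubble_time_years / 1e9, 1))          # ~14.5 billion years
```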
Gamma rays are the most energetic form of light. Extragalactic background light (EBL) is a cosmic fog composed of all the ultraviolet, visible and infrared light emitted by stars or from dust in their vicinity. When gamma rays and EBL interact, they leave an observable imprint — a gradual loss of flux — that the scientists were able to analyze in making their measurement.
“The astronomical community is investing a very large amount of money and resources in doing precision cosmology with all the different parameters, including the Hubble Constant,” said Dieter Hartmann, a professor in physics and astronomy. “Our understanding of these fundamental constants has defined the universe as we now know it. When our understanding of laws becomes more precise, our definition of the universe also becomes more precise, which leads to new insights and discoveries.”
A common analogy of the expansion of the universe is a balloon dotted with spots, with each spot representing a galaxy. When the balloon is blown up, the spots spread farther and farther apart.
“Some theorize that the balloon will expand to a particular point in time and then re-collapse,” said Desai, a graduate research assistant in the department of physics and astronomy. “But the most common belief is that the universe will continue to expand until everything is so far apart there will be no more observable light. At this point, the universe will suffer a cold death. But this is nothing for us to worry about. If this happens, it will be trillions of years from now.”
But if the balloon analogy is accurate, what is it, exactly, that is blowing up the balloon?
“Matter – the stars, the planets, even us—is just a small fraction of the universe’s overall composition,” Ajello explained. “The large majority of the universe is made up of dark energy and dark matter. And we believe it is dark energy that is ‘blowing up the balloon.’ Dark energy is pushing things away from each other. Gravity, which attracts objects toward each other, is the stronger force at the local level, which is why some galaxies continue to collide. But at cosmic distances, dark energy is the dominant force.”
The other contributing authors are lead author Alberto Dominguez of the Complutense University of Madrid; Radek Wojtak of the University of Copenhagen; Justin Finke of the Naval Research Laboratory in Washington, D.C.; Kari Helgason of the University of Iceland; Francisco Prada of the Instituto de Astrofisica de Andalucia; and Vaidehi Paliya, a former postdoctoral researcher in Ajello’s group at Clemson who is now at Deutsches Elektronen-Synchrotron in Zeuthen, Germany.
“It is remarkable that we are using gamma rays to study cosmology. Our technique allows us to use an independent strategy—a new methodology independent of existing ones—to measure crucial properties of the universe,” said Dominguez, who is also a former postdoctoral researcher in Ajello’s group. “Our results show the maturity reached in the last decade by the relatively recent field of high-energy astrophysics. The analysis that we have developed paves the way for better measurements in the future using the Cherenkov Telescope Array, which is still in development and will be the most ambitious array of ground-based high-energy telescopes ever.”
Many of the same techniques used in the current paper correlate to previous work conducted by Ajello and his counterparts. In an earlier project, which appeared in the journal Science, Ajello and his team were able to measure all of the starlight ever emitted in the history of the universe.
“What we know is that gamma-ray photons from extragalactic sources travel in the universe toward Earth, where they can be absorbed by interacting with the photons from starlight,” Ajello said. “The rate of interaction depends on the length that they travel in the universe. And the length that they travel depends on expansion. If the expansion is low, they travel a small distance. If the expansion is large, they travel a very large distance. So the amount of absorption that we measured depended very strongly on the value of the Hubble Constant. What we did was turn this around and use it to constrain the expansion rate of the universe.”
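The logic of the quote can be caricatured in a toy model: absorption grows with path length, and path length scales inversely with the Hubble Constant, so a measured absorption constrains the expansion rate. All constants below are illustrative stand-ins, not the paper's actual EBL model:

```python
import math

def observed_fraction(redshift, h0, kappa=0.1):
    """Toy gamma-ray survival fraction exp(-tau).

    tau is taken proportional to the light-travel distance, crudely
    approximated as c*z/H0; kappa lumps together the EBL photon
    density. All numbers are illustrative, not the paper's model.
    """
    c = 3.0e5                         # speed of light, km/s
    distance_mpc = c * redshift / h0  # crude low-redshift distance estimate
    tau = kappa * distance_mpc / 1000.0
    return math.exp(-tau)

# A smaller H0 means larger distances, hence more absorption, so the
# measured survival fraction carries information about H0:
print(observed_fraction(0.1, 67.5), observed_fraction(0.1, 500.0))
```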
Identifying a material’s magnetic structure is a key to unlocking new features and higher performance in electronic devices. However, solving increasingly complex magnetic structures requires increasingly sophisticated approaches.
Researchers from the Center for Materials Crystallography at Aarhus University, Denmark, are pioneering a novel technique to solve highly elaborate magnetic structures using neutrons at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL). Their aim is to develop the technique—based on mathematical analysis of large three-dimensional diffraction data—to establish a baseline approach that can be adapted to a broad class of magnetic materials with different structures.
“In magnetic materials, many of the atoms have a magnetic moment, or a spin, that acts like a very small magnet. In typical magnets, like refrigerator magnets, each one of them is aligned in the same direction and they combine to form a larger magnetic moment—that allows us to stick stuff to our fridge. That’s an example of an ordered magnetic structure, where a specific pattern is repeated over and over,” said Aarhus researcher Nikolaj Roth. “But we’re more interested in disordered systems, or frustrated magnetism, where there is no long-range magnetic order. Where there is no fixed pattern of spins, which repeats itself. This is where all sorts of neat things happen.”
Although “frustrated” or disordered magnetism may seem random or even chaotic, “it’s not,” explained Roth. There are correlations between the spins, if only for a short distance—known as short-range magnetic order. If the dynamic properties of frustrated magnetism can be harnessed, these materials could be used to develop new electronics with tremendously advanced capabilities. That, of course, hinges on the ability to identify short-range correlations in magnetic materials faster, more efficiently, and on a much broader scale.
“A few years ago, we developed a new technique for analyzing the data which made it possible to see these short-range correlations very easily,” said Roth.
In the early experiments, the team successfully calculated the magnetic correlations in a bixbyite sample—a manganese-iron oxide material found in Utah. In this follow-up experiment, they used bixbyite from South Africa that has a different ratio of manganese to iron and therefore has a slightly different magnetic structure.
“We’re getting help from Mother Nature in that we don’t have to synthesize these materials, they are simply found in the ground,” said researcher Kristoffer Holm. “The sample from Utah is about 50:50 iron to manganese, whereas the one from South Africa is more like 70:30. They’re very closely related samples, and we’re hoping they can tell us how the differences in composition will affect their short-range correlations.”
Neutrons are well suited to studying magnetic behavior because the particles themselves act as tiny magnets. Neutrons can penetrate many materials more deeply than complementary methods, and because they have no charge, they interact with samples without compromising or damaging the material, revealing critical information about energy and matter at the atomic scale.
By themselves, pure iron and pure manganese compositions have ordered structures at low temperatures, with their spins aligned in a specific repeating pattern. But when the two are combined, they become disordered and form a "spin glass" state below 30 kelvin (about minus 400 degrees Fahrenheit), in which a complex pattern of spin alignments becomes fixed.
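The kelvin-to-Fahrenheit conversion quoted above is simple to check with the standard formula:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from kelvin to degrees Fahrenheit."""
    return k * 9.0 / 5.0 - 459.67

# 30 K works out to roughly -406 degrees Fahrenheit.
print(round(kelvin_to_fahrenheit(30.0), 2))
```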
Short-range magnetic order has a weak signal and is difficult to detect with conventional neutron scattering instruments. However, the CORELLI beamline at ORNL’s Spallation Neutron Source (SNS) provides a high flux, or large number of neutrons, with a detector array that can capture large volumes of data quickly and in unprecedented detail. Using CORELLI, the team was able to quantify the South African bixbyite sample’s magnetic structure to make comparisons between it and the material’s atomic structure.
“CORELLI is the only instrument in the world that could do this experiment the way we need it to be done. It allows us to measure in all directions, even at high angles, and it does it very fast, which is exactly what we need for the technique we’re developing,” said researcher Emil Klahn. “Even if we could do it at another facility, it would take weeks to do what we’ve been able to do in only a few days.”
The team says that with a fully developed technique, they will be able to study similar materials that exhibit bizarre and unusual behaviors or states of matter; candidate materials include quantum spin liquids, spin ices, and unconventional superconductors. In turn, those insights could lead to a wide range of radically advanced electronic applications.
The ability to transform sunlight into energy is one of Nature’s more remarkable feats. Scientists understand the basic process of photosynthesis, but many crucial details remain elusive, occurring at dimensions and fleeting time scales long deemed too minuscule to probe.
Now, that is changing.
In a new study, led by Petra Fromme and Nadia Zatsepin at the Biodesign Center for Applied Structural Discovery, the School of Molecular Sciences and the Department of Physics at ASU, researchers investigated the structure of Photosystem I (PSI) with ultrashort X-ray pulses at the European X-ray Free Electron Laser (EuXFEL), located in Hamburg, Germany.
PSI is a large biomolecular system that acts as a solar energy converter transforming solar energy into chemical energy. Photosynthesis provides energy for all complex life on Earth and supplies the oxygen we breathe. Advances in unraveling the secrets of photosynthesis promise to improve agriculture and aid in the development of next-generation solar energy storage systems that combine the efficiency of Nature with the stability of human engineered systems.
“This work is so important, as it shows the first proof of concept of megahertz serial crystallography with one of the largest and most complex membrane proteins in photosynthesis: Photosystem I” says Fromme. “The work paves the way towards time-resolved studies at the EuXFEL to determine molecular movies of the light-driven path of the electrons in photosynthesis or visualize how cancer drugs attack malfunctioning proteins.”
The EuXFEL, which recently began operation, is the first XFEL to employ a superconducting linear accelerator, which yields exciting new capabilities including very fast megahertz repetition rates — X-ray pulses over 9,000 times more frequent than at any other XFEL, separated by less than a millionth of a second. With these incredibly brief bursts of X-ray light, researchers will be able to record molecular movies of fundamental biological processes much more quickly, a capability likely to impact diverse fields including medicine and pharmacology, chemistry, physics, materials science, energy research, environmental studies, electronics, nanotechnology, and photonics. Petra Fromme and Nadia Zatsepin are co-corresponding authors of the paper, published in the current issue of the journal Nature Communications.
Strength in numbers
Fromme is the director of the Biodesign Center for Applied Structural Discovery (CASD) and leads the experimental team efforts of the project, while Zatsepin led the XFEL data analysis team.
“This is a significant milestone in the development of serial femtosecond crystallography, building on the well-coordinated effort of a large, cross-disciplinary, international team and years of developments in disparate fields” emphasizes Zatsepin, former Research Assistant Professor in the ASU Department of Physics and Biodesign CASD, and now Senior Research Fellow at La Trobe University in Australia.
Christopher Gisriel, the paper’s co-first author, worked on the project while a Postdoctoral Researcher in the Fromme laboratory and is excited about the project. “Fast data collection in serial femtosecond crystallography experiments makes this revolutionary technique more accessible to those interested in the structure-function relationship for enzymes. This is exemplified by our new publication in Nature Communications showing that even the most difficult and complex protein structures can be solved by serial femtosecond crystallography while collecting data at megahertz repetition rate.”
“It is very exciting to see the hard work from the many folks that drove this project to materialize,” says Jesse Coe, co-first author who graduated last year with a Ph.D. in Biochemistry from ASU. “This is a huge step in the right direction toward better understanding Nature’s process of electron transfer that has been refined over billions of years. “
An XFEL (X-ray free-electron laser) delivers X-ray light that is a billion times brighter than conventional X-ray sources. The brilliant, laser-like X-ray pulses are produced by electrons accelerated to near light speed and fed through the gap between a series of alternating magnets, a device known as an undulator. The undulator forces the electrons to jiggle and bunch up into discrete packets. Each of the perfectly synchronized, wiggling electron bunches emits a powerful, brief X-ray pulse along the electron flight path.
In serial femtosecond crystallography, a jet of protein crystals is injected into the path of the pulsed XFEL beam at room temperature, yielding structural information in the form of diffraction patterns. From these patterns, scientists can determine atomic scale images of proteins in close-to-native conditions, paving the way toward accurate molecular movies of molecules at work.
X-rays damage biomolecules, a problem that has plagued structure-determination efforts for decades and requires the biomolecules to be frozen to limit the damage. But the X-ray bursts produced by an XFEL are so short — lasting mere femtoseconds — that X-ray scattering from a molecule can be recorded before destruction takes place, akin to using a fast camera shutter. As a point of reference, a femtosecond is a millionth of a billionth of a second: a femtosecond is to a second as a second is to 32 million years.
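That ratio is easy to verify with a couple of lines of arithmetic (the "32 million years" in the text is a rounded figure):

```python
femtosecond = 1e-15                # one femtosecond, in seconds
ratio = 1.0 / femtosecond          # a second contains 1e15 femtoseconds
seconds_per_year = 3.156e7
years = ratio / seconds_per_year   # 1e15 seconds expressed in years
print(round(years / 1e6, 1))       # ~31.7 million years
```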
Due to the sophistication, size and cost of XFEL facilities, only five are currently available for such experiments worldwide—a severe bottleneck for researchers since each XFEL can typically only host one experiment at a time. Most XFELs generate X-ray pulses between 30 and 120 times per second and it can take several hours to days to collect the data required to determine a single structure, let alone a series of frames in a molecular movie. The EuXFEL is the first to employ a superconducting linear accelerator in its design, enabling the fastest succession of X-ray pulses of any XFEL, which can significantly reduce the time it takes to determine each structure or frame of the movie.
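The practical gain from faster pulses can be sketched with back-of-envelope numbers; the pattern count and crystal hit rate below are hypothetical, chosen only to show the scaling:

```python
def collection_time_hours(patterns_needed, pulses_per_second, hit_rate=0.1):
    """Hours to collect a dataset, assuming a fixed crystal hit rate."""
    pulses = patterns_needed / hit_rate   # pulses needed to get enough hits
    return pulses / pulses_per_second / 3600.0

needed = 50_000  # hypothetical diffraction-pattern count for one structure
t_conventional = collection_time_hours(needed, 120)    # ~120 Hz XFEL
t_euxfel = collection_time_hours(needed, 1_000_000)    # megahertz-class rate
print(round(t_conventional, 1), round(t_euxfel, 4))
```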
High risk, high reward
Because the sample is obliterated by the intense X-ray pulses, it must be replenished before the next pulse arrives, which required PSI crystals to be delivered 9,000 times faster at the EuXFEL than at earlier XFELs — at a jet speed of about 50 meters per second (160 feet per second), like a microfluidic fire hose. This was challenging, as reaching such high jet speeds without clogging the sample delivery system demands large amounts of the precious protein in uniform crystals. Large membrane proteins are so difficult to isolate, crystallize, and deliver to the beam that it wasn't known whether this important class of proteins could be studied at the EuXFEL.
The team developed new methods that allowed the structure of PSI to be determined at room temperature to a remarkable 2.9-angstrom resolution, a significant milestone. PSI is a large complex consisting of 36 proteins and 381 cofactors, including 288 chlorophylls (the green pigments that absorb the light); with over 150,000 atoms, it is more than 20 times larger than any protein previously studied at the EuXFEL.
Billions of microcrystals of the PSI membrane protein, derived from cyanobacteria, had to be grown for the new study. Rapid crystal growth from nanocrystal seeds was required to guarantee the essential uniformity of crystal size and shape. PSI is a membrane protein, which is a class of proteins of high importance that have been notoriously tricky to characterize. Their elaborate structures are embedded in the cell membrane’s lipid bilayer. Typically, they must be carefully isolated in fully active form from their native environment and transformed into a crystalline state, where the molecules pack into crystals but maintain all their native function.
In the case of PSI, this is achieved by extracting it with very mild detergents that replace the membrane and surround the protein like a pool inner tube, mimicking the native membrane environment and keeping PSI fully functional once it is packed within the crystals. So when researchers shine light on the green pigments (chlorophylls) of PSI's light-harvesting antenna system, the energy is used to shoot an electron across the membrane.
To keep PSI fully functional, the crystals are only weakly packed and contain 78 percent water, which makes them soft like a piece of butter in the sun and makes these fragile crystals difficult to handle. "To isolate, characterize and crystallize one gram of PSI, or one billion billion PSI molecules, for the experiments in their fully active form was a huge effort of the students and researchers in my team," says Fromme. "In the future, with even higher repetition rates and novel sample delivery systems, the sample consumption will be dramatically reduced."
The recording and analysis of the diffraction data was another challenge. A unique X-ray detector, the adaptive-gain integrating pixel detector (AGIPD), was developed by the EuXFEL and DESY to handle the demands of structural biology studies at the EuXFEL. Each of the AGIPD's 1 million pixels is less than a hundredth of an inch across and contains 352 analog memory cells, enabling the AGIPD to collect data at megahertz rates over a large dynamic range. However, collecting accurate crystallographic data from microcrystals of large membrane proteins required a compromise between spatial resolution and sampling of the data.
“Pushing for higher-resolution data collection with the current detector size could preclude useful processing of the crystallographic data, because the diffraction spots are insufficiently resolved by the X-ray detector pixels,” warns Zatsepin, “yet in terms of data rates and dynamic range, what the AGIPD is capable of is incredible.”
The novel data reduction and crystallographic analysis software, designed specifically to deal with the challenges unique to the massive datasets of XFEL crystallography and developed under the lead of collaborators at CFEL, DESY, and ASU, has come a long way since the first high-resolution XFEL experiment in 2011.
“Our software and DESY’s high-performance computing capabilities are really being put to the test with the unprecedented data volumes generated at the EuXFEL. It is always exciting to push the limits of state-of-the-art technology,” adds Zatsepin.
Membrane proteins: floppy, yet formidable
Membrane proteins like PSI—so named because they are embedded in cell membranes—are vital to all life processes, including respiration, nerve function, nutrient uptake, and cell-cell signaling. Because they sit at the surface of each cell, they are also the most important pharmaceutical drug targets: more than 60% of all current drugs act on membrane proteins. Designing more effective drugs with fewer side effects therefore depends on understanding how particular drugs bind to their target proteins, which in turn requires detailed knowledge of those proteins’ structural conformations and dynamic activities.
Despite their enormous importance in biology, membrane protein structures make up less than 1% of all protein structures solved to date because they are notoriously tricky to isolate, characterize and crystallize. This is why major advances in crystallographic methods, such as the advent of membrane protein megahertz serial femtosecond crystallography, are undoubtedly going to have a significant impact on the scientific community.
It takes a village
These recent achievements would not have been possible without the tireless efforts of a dedicated team of nearly 80 researchers from 15 institutions, including ASU, the European XFEL, DESY, the Center for Ultrafast X-ray Science, the Hauptman-Woodward Institute, SUNY Buffalo, SLAC, the University of Hamburg, the University of Goettingen, the Hungarian Academy of Sciences, the University of Tennessee, Lawrence Livermore National Laboratory, the University of Southampton, Hamburg University of Technology, and the University of Wisconsin. The research group included US collaborators in the NSF BioXFEL Science and Technology Center and international collaborators, including Adrian P. Mancuso and Romain Letrun, lead scientists at the EuXFEL beamline, and Oleksandr Yefanov and Anton Barty from CFEL/DESY, who worked closely with the ASU team on the complex data analysis.
Later this month a Texus rocket will launch from Esrange, Sweden, traveling about 260 km upward before falling back to Earth and offering researchers six minutes of weightlessness. Their experiment? Burning metal powder to understand a new type of fire.
So-called discrete burning occurs when a piece of fuel ignites and burns completely due to the heat generated by the fuel elements around it. Unlike traditional fires, which burn through their fuel continuously, discrete fires spread by jumping from one fuel source to another. There are very few examples of discrete fires on Earth, but the sparklers commonly lit on New Year’s Eve are one.
Another example is forest fires, where trees burn individually and the next tree burns only when the heat from burning trees around it reaches the temperature necessary for combustion.
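The jump-style spread of a discrete fire can be illustrated with a toy model (a simplified sketch with made-up parameters, not the Perwaves researchers' actual combustion model): fuel particles sit at a fixed spacing, and an unburned particle ignites only when a burning neighbor lies within its ignition radius.

```python
def discrete_burn(n=100, spacing=1.0, ignition_radius=1.5, burn_time=2):
    """Toy 1-D model of discrete flame propagation.

    Fuel particles sit at regular positions. A particle ignites when a
    currently burning particle lies within `ignition_radius`, and each
    particle burns for `burn_time` steps before going out. Returns the
    number of particles consumed by the fire.
    """
    positions = [i * spacing for i in range(n)]
    state = [-1] * n          # -1: unburned, >=0: steps left, None: burned out
    state[0] = burn_time      # ignite the first particle
    while any(isinstance(s, int) and s >= 0 for s in state):
        burning = [positions[i] for i, s in enumerate(state)
                   if isinstance(s, int) and s >= 0]
        for i, s in enumerate(state):
            if s == -1 and any(abs(positions[i] - b) <= ignition_radius
                               for b in burning):
                state[i] = burn_time          # heat from a neighbor ignites it
            elif isinstance(s, int) and s >= 0:
                state[i] = s - 1 if s > 0 else None
    return sum(s is None for s in state)
```

With the particles packed closer together than the ignition radius, the flame jumps through all of the fuel; space them slightly farther apart and it dies at the first particle. That sensitivity to powder spacing and concentration is exactly what the experiment probes.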
Discrete metal powder power
Most transport currently relies on gasoline and oil because they have a particularly high energy density. “Despite all the progress with electric cars, the energy efficiency compared to a traditional petrol-based car is less by a factor of a hundred,” says ESA’s Antonio Verga, who is leading the sounding rocket experiments. “If we want to keep the range and power of road transport then we need to look for alternatives.”
Metals have high energy density but they do not ignite easily unless in powder form, when they burn in discrete flames. “We need to find the ideal blend of oxygen and metal powder as well as the ideal size of the metal dust to create the best conditions for combustion,” explains Antonio. “This is where the Perwaves experiment launching this month comes in.”
By setting the metal powder alight during its flight beyond the edges of our atmosphere, researchers can study how it burns in a chamber with evenly spaced metal powder suspended in weightlessness. This is not possible on Earth as the powder clumps together into a pile due to gravity.
The results from the burning will be analysed to create models of discrete burning to extrapolate the ideal conditions.
“Once we know what the ideal mixture is, we can work towards creating it on Earth in a power station, or, possibly, in a car’s engine,” says Antonio. “By injecting the iron powder into a chamber for a brief moment, it could be engineered to have the perfect conditions for combustion.”
“The beauty of metal combustion is that it is carbon-free, if one burns iron powder for example, the only ‘waste’ product is rust,” says Antonio, “which can easily be recycled back into the original metal powder. Thanks to the experiments we are doing now, future cars might give a whole new meaning to driving a rust bucket.”
The Perwaves experiment will fly on the Texus-56 rocket and was conceived and designed by McGill University in Montreal and the Airbus sounding rockets team in Bremen, Germany.
As anyone who has ever tried to serve a tennis ball, flip a pancake, or even play a video game knows, it is hard to perform the same motion over and over again. But don’t beat yourself up—errors resulting from variability in motor function are a feature, not a bug, of our nervous system and play a critical role in learning, research suggests.
Variability in a tennis serve, for example, allows a player to see the effects of changing the toss of the ball, the swing of the racket, or the angle of the serve—all of which may lead to a better performance. But what if you’re serving ace after ace after ace? Variability in this case would not be very helpful.
If variability is good for learning but bad when you want to repeat a successful action, the brain should be able to regulate variability based on recent performance. But how?
That question was at the center of a recent study from Harvard researchers published in the journal Current Biology.
Using an enormous amount of data from about 3 million rat trials, the researchers found that rats regulate their motor variability based on the outcomes of their most recent 10 to 15 attempts at a task. If the previous trials had gone poorly, the rats increased their variability—employing a try-anything approach. If the previous trials had gone well, however, the rats limited their variability, proving that rats too subscribe to the old adage “if it ain’t broke, don’t fix it.”
But is this the best strategy?
“By using simulations to determine what the optimal variability regulation strategy should be, we found that it was very similar to the one used by rats,” said Ashesh Dhawale, a postdoctoral fellow in the Department of Organismic and Evolutionary Biology and first author of the study. “We also found that the degree to which individual rats regulated variability could predict how well they learned and performed on the motor task. This means that regulating variability based on performance is important for doing well both in the short and the long term.”
To study performance-dependent variability, the researchers developed a new motor learning task for rats. The researchers, led by Bence Ölveczky, professor of organismic and evolutionary biology, trained rats to press a 2-D joystick towards a target angle. When the rats were successful, they got a sip of water. To keep the task from getting too easy, the researchers changed the target angle whenever the rat learned its location.
The researchers found that when the rats were regularly getting rewarded, they had low variability. If, a few trials later, they did less well, their variability increased. If they continued doing poorly, the variability would increase even more.
“We noticed this was happening on a pretty fast time scale,” said Maurice Smith, the Gordon McKay Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and co-author of the paper. “It was as if the rats were computing their batting average in real time.”
But, what about longer-term tasks with less uncertainty? If you grew up playing tennis with your sister, for example, you may know that she has a consistently weak forehand.
The researchers simulated this scenario by keeping the target angle of the joystick fixed over many sessions instead of constantly changing it.
“We found that rats stopped regulating variability in response to recent performance, which matches what we found in our simulations,” said Dhawale. “Variability regulation in this case had a timescale of several thousand trials, which was much slower than the reward-dependent regulation of variability that we had uncovered earlier.”
“Our results demonstrate that the brain flexibly adapts components of its trial-and-error learning algorithm, such as the regulation of variability, to the statistics of the task at hand,” said Ölveczky. “We have shown that the brain uses a sophisticated algorithm to regulate motor variability in a way that improves task performance.”
The research was co-authored by Yohsuke R. Miyamoto.
Stem cells all share the potential to develop into any specialized cell in the body. Many researchers are therefore trying to answer the fundamental questions of what determines a cell’s developmental fate and when and why cells lose the potential to develop into any cell type.
Now, researchers from the Novo Nordisk Foundation Center for Stem Cell Biology (DanStem) at University of Copenhagen have discovered how stem cells can lose this potential and thus can be said to “forget their past.” It turns out that the proteins called transcription factors play a different role than the scientists thought. For 30 years, the dogma has been that transcription factors are the engines of gene expression, triggering these changes by switching the genes on and off. However, new research results published in Nature reveal something quite different.
“We previously thought that transcription factors drive the process that determines whether a gene is expressed and subsequently translated into the corresponding protein. Our new results show that transcription factors may be more analogous to being the memory of the cell. As long as the transcription factors are connected to a gene, the gene can be read (turned on), but the external signals received by the cells seem to determine whether the gene is turned on or off. As soon as the transcription factors are gone, the cells can no longer return to their point of origin,” explains Josh Brickman, Professor and Group Leader, DanStem, University of Copenhagen.
The question of how a cell slowly develops from one state to another is key to understanding cell behavior in multicellular organisms. Stem cell researchers consider this vital, which is why they are constantly trying to refine techniques to develop the human body’s most basic cells into various specific types of cells that can be used, for example, to regenerate damaged tissue. So far, however, investigating the signals required to make cells switch identity has been extremely difficult, since making all the cells in a dish do the same thing at the same time is very difficult.
A protein-centered viewpoint
The researchers developed a stem cell model to mimic a cell’s response to signaling and used it, for the first time, to precisely determine the sequence of events involved in a gene being turned on and off in response to a signal in stem cells. They were able to describe how genes are turned on and off, and under what circumstances a cell can develop in a certain direction and then elect to return to its starting point.
Part of this work involved measuring how proteins in a cell are modified by phosphorylation using advanced mass spectrometry available through an important collaboration with Jesper Olsen’s Group at the Novo Nordisk Foundation Center for Protein Research.
“Combining forces with the Olsen group in the CPR enabled us to provide a unique deep description of how individual proteins in a cell react to signals from the outside,” continues Josh Brickman.
New answers to old scientific questions
These results are surprising. Although the sequence of transcription events could not previously be measured as accurately as in this study, the dogma was that transcription factors constitute the on-off switch essential to initiate transcription of an individual gene. This turns out not to be so for embryonic stem cells, and potentially for other cell types.
“Transcription factors are still a key signal, but they do not drive the process, as previously thought. Once they are there, the gene can be read, and they remain in place for a while after the gene is read. And when they are gone, the window in which the gene can be read can be closed again. You can compare it with the vapour trails you see in the sky when an airplane has passed. They linger for a while but slowly dissipate again,” explains first author, William Hamilton, Assistant Professor at DanStem.
This discovery is first and foremost basic knowledge, which changes fundamental assumptions in molecular biology. The new results are especially important for researchers working on stem cells and cancer biology. They provide new insight into how cells develop, how the pathways involved in development determine when cells change, and when the point of no return is reached. These pathways are also frequently mutated in cancer, and the findings of this study will be valuable to the study of malignant development.
“In the project, we focused on the fibroblast growth factor (FGF)-extracellular signal-regulated kinase (ERK) signalling pathway, which is a signalling pathway from a receptor on the surface of a cell to DNA inside the cell nucleus. This pathway is dysregulated in many types of cancer, and we therefore hope that many of the data in this study will help to inform aspects of cancer biology by indicating new ways to specifically target this signalling pathway in cancer cells,” concludes Josh Brickman.
AMOLF researchers and their collaborators from the Advanced Science Research Center (ASRC/CUNY) in New York have created a nanostructured surface capable of performing on-the-fly mathematical operations on an input image. This discovery could boost the speed of existing imaging processing techniques and lower energy usage. The work enables ultrafast object detection and augmented reality applications. The researchers publish their results today in the journal Nano Letters.
Image processing is at the core of several rapidly growing technologies, such as augmented reality, autonomous driving and more general object recognition. But how does a computer find and recognize an object? The initial step is to determine where its boundaries are, which is why edge detection in an image is the starting point for image recognition. Edge detection is typically performed either digitally, using integrated electronic circuits that impose fundamental speed limitations and high energy consumption, or in an analog fashion, which requires bulky optics.
In a completely new approach, AMOLF Ph.D. student Andrea Cordaro and his co-workers created a special “metasurface,” a transparent substrate with a specially designed array of silicon nanobars. When an image is projected onto the metasurface, the transmitted light forms a new image that shows the edges of the original. Effectively, the metasurface performs a mathematical derivative operation on the image, which provides a direct probe of edges in the image. In a first experiment, an image of the AMOLF logo was projected onto the metasurface. At a specially designed wavelength (726 nm), a clear image of the edges was observed. The mathematical transformation results from the fact that each spatial frequency that composes the image has a tailored transmission coefficient through the metasurface. This tailored transmission is the result of a complex interference of light as it propagates through the metasurface.
To demonstrate edge detection experimentally on an image the researchers created a miniature version of the painting Meisje met de parel (Girl with a Pearl Earring, J. Vermeer) by printing tiny chromium dots onto a transparent substrate. If the image is projected onto the metasurface using off-resonant illumination (λ= 750 nm) the original image is clearly recognized. In contrast, if the illumination has the right color (λ= 726 nm) the edges are clearly resolved in the transformed image.
This new optical computing and imaging technique operates at the speed of light and the mathematical operation itself consumes no energy as it involves only passive optical components. The metasurface can be readily implemented by placing it directly onto a standard CCD or CMOS detector chip, opening new opportunities in hybrid optical and electronic computing that operates at low cost, low power, and small dimensions.
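The mathematical operation the metasurface performs can be mimicked numerically (a sketch of the math only; the silicon nanobar array and the 726 nm resonance are the actual physics, and the particular transfer function below is an illustrative choice): give each spatial frequency of the image its own transmission coefficient, chosen so that the output is a spatial derivative. Uniform regions then transmit nothing, while edges come through brightly.

```python
import numpy as np

def edge_filter(image):
    """Numerical analogue of the edge-detecting metasurface: multiply each
    spatial frequency of the image by a tailored 'transmission coefficient'.
    The coefficients exp(2*pi*i*k) - 1 implement a discrete derivative
    along x, so constant regions map to zero and edges to peaks.
    """
    ny, nx = image.shape
    kx = np.fft.fftfreq(nx)                        # spatial frequencies along x
    transmission = np.exp(2j * np.pi * kx) - 1.0   # per-frequency coefficient
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * transmission[np.newaxis, :]))

# A bright square on a dark background stands in for the projected logo
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = edge_filter(img)   # nonzero only where the brightness changes along x
```

The real device applies such coefficients in two dimensions and in the transmission of actual light, which is why the "computation" happens at the speed of light and consumes no energy.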
Imaging technology has vastly improved over the past 30 years. Yet it has been about that long since the flow coming off the base of projectiles, such as ballistic missiles, was last measured. Researchers in the Department of Aerospace Engineering at the University of Illinois at Urbana-Champaign used a modern measurement technique called stereoscopic particle image velocimetry to take high-resolution measurements of the complicated flow field downstream of a blunt-based cylinder moving at supersonic speed, which is representative of a projectile or an unpowered rocket.
The experiment was done in a Mach 2.5 wind tunnel in the Gas Dynamics Laboratory in The Grainger College of Engineering at Illinois. Researchers mounted a large cylinder model and forced a high-pressure air supply mixed with a large amount of smoke particles across it.
“We shine a laser at the smoke particles to illuminate a desired region and then we can take a picture of those particles from multiple angles. Imaging the same region from different perspectives simultaneously allows us to measure all three components of velocity,” said doctoral student Branden Kirchner. “The images are taken 600 nanoseconds apart at high resolution.
“This technique allows us to simultaneously measure velocity at a lot of points very close together, instead of measuring one point and then moving on to the next. We now have a map of velocity throughout the flow field as a snapshot in time.”
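The core of particle image velocimetry can be sketched in a few lines (an illustrative toy, not the authors' processing pipeline: the pixel size is an invented assumption, and real stereoscopic PIV calibrates two camera views to recover the out-of-plane component). Two particle images taken a known time apart are cross-correlated; the location of the correlation peak gives the displacement, and dividing by the time separation gives the velocity.

```python
import numpy as np

def piv_velocity(frame_a, frame_b, dt=600e-9, pixel_size=20e-6):
    """Estimate the in-plane velocity of a particle pattern between two
    frames taken `dt` seconds apart by locating the peak of their circular
    cross-correlation (computed via FFT). `pixel_size` maps pixels to
    meters in the measurement plane.
    """
    corr = np.real(np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = frame_a.shape
    dy = dy - ny if dy > ny // 2 else dy   # wrap shifts into [-N/2, N/2)
    dx = dx - nx if dx > nx // 2 else dx
    return dx * pixel_size / dt, dy * pixel_size / dt
```

Doing this for many small interrogation windows at once is what turns a pair of snapshots into the full velocity map described above.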
Kirchner said the 3,000 snapshots imaged by four cameras aimed at the flow provide much higher spatial resolution measurements than any previous studies. He said computationalists who study this flow will benefit from having these new data to compare with their simulations.
Illinois aerospace engineering Professor J. Craig Dutton, co-author on the study, has been working on this complicated flow for decades, using the same wind tunnel while working on his Ph.D. Kirchner said, “I remember the first time we took data using this technique, I showed Professor Dutton and he said ‘in 90 seconds you took more data than we used to take in six months.'”
When the flow separates off the cylinder, it creates a wake, like the one trailing a boat or an airplane. That’s where the important flow features begin: downstream of the cylinder, which represents the body of a rocket or projectile.
“There’s a thin layer just downstream of separation, called the shear layer, where friction between slow-moving and fast-moving air is really dominant,” he said. “This shear layer extracts fluid particles from the region immediately behind the cylinder base, in a process called entrainment. This process causes really low pressures on the base of the cylinder, and it is something that we don’t currently understand well.”
Kirchner said the example he likes to use to explain the physics of what’s happening in the flow is the drafting technique some people use to get better gas mileage on a highway. They drive their car at a certain distance behind a semi-truck to get better fuel economy.
“The pressure right behind the semi-truck is really low, so if you can get the front end of your car in the low-pressure zone and the back end in a high-pressure zone, you actually get thrust out of it, but the aerodynamic drag on the semi-truck is very high because of this low-pressure zone,” Kirchner said.
Having a better understanding of how the flow actually creates this low-pressure region could give other researchers the knowledge they need to come up with a way to change the pressure.
“We’re not changing anything along the cylinder body or the front of the cylinder in this study,” he said. “But if we know what mechanisms could cause a change in the pressure distribution on the base and develop a method to raise that pressure, we can decrease the drag or have better vehicle directional control.”
The study, “Three-Component Turbulence Measurements and Analysis of a Supersonic, Axisymmetric Base Flow,” was written by Branden M. Kirchner, James V. Favale, Gregory S. Elliott, and J. Craig Dutton. It is published in the AIAA Journal.