Latest Updates

New technique allows rapid screening for new types of solar cells

9:00:00 PM
Approach could bypass the time-consuming steps currently needed to test new photovoltaic materials.
This experimental setup was used by the team to measure the electrical output of a sample of solar cell material under controlled conditions of varying temperature and illumination. The data from those tests were then fed into a statistical computer model to predict the material's overall performance under real-world operating conditions.
Image: Riley Brandt

The worldwide quest by researchers to find better, more efficient materials for tomorrow’s solar panels is usually slow and painstaking. Researchers typically must produce lab samples — which are often composed of multiple layers of different materials bonded together — for extensive testing.

Now, a team at MIT and other institutions has come up with a way to bypass such expensive and time-consuming fabrication and testing, allowing for a rapid screening of far more variations than would be practical through the traditional approach.

The new process could not only speed up the search for new formulations, but also do a more accurate job of predicting their performance, explains Rachel Kurchin, an MIT graduate student and co-author of a paper describing the new process that appears this week in the journal Joule. Traditional methods “often require you to make a specialized sample, but that differs from an actual cell and may not be fully representative” of a real solar cell’s performance, she says.

For example, typical testing methods show the behavior of the “majority carriers,” the predominant particles or vacancies whose movement produces an electric current through a material. But in the case of photovoltaic (PV) materials, Kurchin explains, it is actually the minority carriers — those that are far less abundant in the material — that are the limiting factor in a device’s overall efficiency, and those are much more difficult to measure. In addition, typical procedures only measure the flow of current in one set of directions — within the plane of a thin-film material — whereas it’s up-down flow that is actually harnessed in a working solar cell. In many materials, that flow can be “drastically different,” making it critical to understand in order to properly characterize the material, she says.

“Historically, the rate of new materials development is slow — typically 10 to 25 years,” says Tonio Buonassisi, an associate professor of mechanical engineering at MIT and senior author of the paper. “One of the things that makes the process slow is the long time it takes to troubleshoot early-stage prototype devices,” he says. “Performing characterization takes time — sometimes weeks or months — and the measurements do not always have the necessary sensitivity to determine the root cause of any problems.”

So, Buonassisi says, “the bottom line is, if we want to accelerate the pace of new materials development, it is imperative that we figure out faster and more accurate ways to troubleshoot our early-stage materials and prototype devices.” And that’s what the team has now accomplished. They have developed a set of tools that can be used to make accurate, rapid assessments of proposed materials, using a series of relatively simple lab tests combined with computer modeling of the physical properties of the material itself, as well as additional modeling based on a statistical method known as Bayesian inference.

The system involves making a simple test device, then measuring its current output under different levels of illumination and different voltages, to quantify exactly how the performance varies under these changing conditions. These values are then used to refine the statistical model.

“After we acquire many current-voltage measurements [of the sample] at different temperatures and illumination intensities, we need to figure out what combination of materials and interface variables make the best fit with our set of measurements,” Buonassisi explains. “Representing each parameter as a probability distribution allows us to account for experimental uncertainty, and it also allows us to suss out which parameters are covarying.”

The Bayesian inference process allows the estimates of each parameter to be updated based on each new measurement, gradually refining the estimates and homing in ever closer to the precise answer, he says.
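As a concrete sketch of that loop, consider a toy version in Python: a simple diode equation stands in for the device physics, and a grid-based Bayesian update refines probability distributions over two hypothetical parameters (saturation current J0 and ideality factor n) as current-voltage measurements arrive. The model, parameter ranges, and noise level are our own illustrative assumptions, not the team's actual device model:

```python
import numpy as np

# Toy diode model for a solar cell's current-voltage (JV) curve, with an
# assumed fixed photocurrent J_L. J0 (saturation current) and n (ideality
# factor) are the unknown parameters we want to infer.
def jv_model(V, J0, n, J_L=20.0):
    return J0 * (np.exp(V / (n * 0.0259)) - 1.0) - J_L

# Synthetic "measurements": the true parameters plus Gaussian noise,
# standing in for the lab's current-voltage sweeps.
rng = np.random.default_rng(0)
V = np.linspace(0.0, 0.9, 30)
sigma = 0.2
J_meas = jv_model(V, J0=1e-8, n=1.5) + rng.normal(0.0, sigma, V.size)

# Represent the parameters as a probability distribution over a grid,
# starting from a flat prior.
J0_grid = np.logspace(-10, -6, 80)
n_grid = np.linspace(1.0, 2.0, 80)
posterior = np.ones((J0_grid.size, n_grid.size))

# Bayesian update: fold in the Gaussian likelihood of each measurement,
# renormalizing as every new data point arrives.
for v, j in zip(V, J_meas):
    pred = jv_model(v, J0_grid[:, None], n_grid[None, :])
    posterior *= np.exp(-0.5 * ((j - pred) / sigma) ** 2)
    posterior /= posterior.sum()

# Maximum a posteriori estimate of the two parameters.
i, k = np.unravel_index(posterior.argmax(), posterior.shape)
J0_est, n_est = J0_grid[i], n_grid[k]
print(f"J0 ~ {J0_est:.1e}, n ~ {n_est:.2f}")
```

The full posterior also exposes the covariance Buonassisi mentions: many (J0, n) pairs fit the early data almost equally well, and only the accumulation of measurements at different conditions narrows that ridge.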

In seeking a combination of materials for a particular kind of application, Kurchin says, “we put in all these materials properties and interface properties, and it will tell you what the output will look like.”

The system is simple enough that, even for materials that have been less well-characterized in the lab, “we’re still able to run this without tremendous computer overhead.” And, Kurchin says, making use of the computational tools to screen possible materials will be increasingly useful because “lab equipment has gotten more expensive, and computers have gotten cheaper. This method allows you to minimize your use of complicated lab equipment.”

The basic methodology, Buonassisi says, could be applied to a wide variety of different materials evaluations, not just solar cells — in fact, it may apply to any system that involves a computer model for the output of an experimental measurement. “For example, this approach excels in figuring out which material or interface property might be limiting performance, even for complex stacks of materials like batteries, thermoelectric devices, or composites used in tennis shoes or airplane wings.” And, he adds, “It is especially useful for early-stage research, where many things might be going wrong at once.”

Going forward, he says, “our vision is to link up this fast characterization method with the faster materials and device synthesis methods we’ve developed in our lab.” Ultimately, he says, “I’m very hopeful the combination of high-throughput computing, automation, and machine learning will help us accelerate the rate of novel materials development by more than a factor of five. This could be transformative, bringing the timelines for new materials-science discoveries down from 20 years to about three to five years.”

The research team also included Riley Brandt '11, SM '13, PhD '16; former postdoc Vera Steinmann; MIT graduate student Daniil Kitchaev; visiting professor Gerbrand Ceder; Chris Roat at Google Inc.; and Sergiu Levcenco and Thomas Unold at Helmholtz-Zentrum Berlin. The work was supported by a Google Faculty Research Award, the U.S. Department of Energy, and a Total research grant through the MIT Energy Initiative.

Facebook to introduce live streaming, video chats to Messenger games

12:33:00 PM

More than a year after launching "Instant Games" -- a new platform for gaming with friends on the Messenger chat app -- Facebook has announced support for live streaming via Facebook Live and video chatting with fellow gamers.

"First, we're launching live streaming, which will start to roll out today, to gamers who love to share their playthroughs and engage in a little smack talk," Facebook wrote in a blog post late Thursday.
Users will be able to record these live streams and post them to their profiles afterwards.
"Over 245 million people video chat every month on Messenger. We're excited to begin a test soon that will enable people to play games with each other while video chatting," the company added.
Meanwhile, the social media giant also announced additions to "Instant Games" with a handful of big-name mobile titles that will be "re-imagined" for the platform.

"Launching globally in early 2018 is none other than Angry Birds, a new game built for Messenger that will feature classic gameplay with an exciting new way to challenge friends," Facebook said.
The immensely popular game will join the recently launched Tetris, which includes beloved features like marathon mode and the ability to play with friends in Messenger group chats.

High-speed encryption to secure future internet

5:01:00 PM

In a bid to fight future cyber threats, scientists have developed a new high-speed quantum encryption system that could help secure the internet with theoretically hack-proof data encryption. The novel system creates and distributes encryption codes at megabit-per-second rates, five to 10 times faster than existing methods and on par with current internet speeds when several systems run in parallel. The technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.

"We are now likely to have a functioning quantum computer that might be able to start breaking the existing cryptographic codes in the near future," said Daniel Gauthier, Professor at The Ohio State University. "We really need to be thinking hard now of different techniques that we could use for trying to secure the internet," Gauthier added, in the paper appearing in the journal Science Advances. For the new system to work, both the sender and the receiver must have access to the same key, and it must be kept secret from everyone else. The novel system uses a weakened laser to encode information and transmit keys on individual photons of light, while packing more information onto each photon, making the technique faster. By adjusting the time at which the photon is released, and a property of the photon called the phase, the new system can encode two bits of information per photon instead of one.

This trick, paired with high-speed detectors, allows the system to transmit keys five to 10 times faster than other methods. "It was changing these additional properties of the photon that allowed us to almost double the secure key rate that we were able to obtain if we hadn't done that," Gauthier said.
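The key-rate advantage of two bits per photon comes down to simple bookkeeping. The toy sketch below illustrates only that arithmetic, not the actual quantum time-bin and phase protocol; the bit-to-symbol mapping is our own invention:

```python
import numpy as np

rng = np.random.default_rng(1)
key_bits = rng.integers(0, 2, size=1000)   # raw key material to transmit

# One bit per photon: one photon per key bit.
photons_one_bit = key_bits.size

# Two bits per photon: pair up bits into 4-level (time-bin, phase) symbols.
symbols = key_bits[0::2] * 2 + key_bits[1::2]   # values 0..3
photons_two_bit = symbols.size

# Receiver decodes each symbol back into two bits; the key survives intact.
decoded = np.column_stack(((symbols >> 1) & 1, symbols & 1)).ravel()
assert np.array_equal(decoded, key_bits)

print(photons_one_bit / photons_two_bit)   # -> 2.0: same key, half the photons
```

Halving the number of photons needed per key bit is what, combined with faster detectors, multiplies the overall key rate.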

How Filmmakers Manipulate Our Emotions With Color

6:45:00 PM

Most of us don’t think about the color schemes of the films we watch. But for a long time now, movie studios have followed a special formula for each genre. Red tones for romance, blue for horror, and so on. 

The Verge explains how filmmakers manipulate our emotions using color in this trending video.

Manipulating memory with light: Scientists erase specific memories in mice

8:13:00 PM
Just look into the light: not quite, but researchers at the UC Davis Center for Neuroscience and Department of Psychology have used light to erase specific memories in mice, and proved a basic theory of how different parts of the brain work together to retrieve episodic memories.
During memory retrieval, cells in the hippocampus connect to cells in the brain cortex.
Credit: Photo illustration by Kazumasa Tanaka and Brian Wiltgen/UC Davis
Optogenetics, pioneered by Karl Deisseroth at Stanford University, is a new technique for manipulating and studying nerve cells using light. It is rapidly becoming a standard method for investigating brain function.

Kazumasa Tanaka, Brian Wiltgen and colleagues at UC Davis applied the technique to test a long-standing idea about memory retrieval. For about 40 years, Wiltgen said, neuroscientists have theorized that retrieving episodic memories -- memories about specific places and events -- involves coordinated activity between the cerebral cortex and the hippocampus, a small structure deep in the brain.

"The theory is that learning involves processing in the cortex, and the hippocampus reproduces this pattern of activity during retrieval, allowing you to re-experience the event," Wiltgen said. If the hippocampus is damaged, patients can lose decades of memories.

But this model has been difficult to test directly, until the arrival of optogenetics.

Wiltgen and Tanaka used mice genetically modified so that when nerve cells are activated, they both fluoresce green and express a protein that allows the cells to be switched off by light. They were therefore able both to follow exactly which nerve cells in the cortex and hippocampus were activated in learning and memory retrieval, and switch them off with light directed through a fiber-optic cable.

They trained the mice by placing them in a cage where they got a mild electric shock. Normally, mice placed in a new environment will nose around and explore. But when placed in a cage where they have previously received a shock, they freeze in place in a "fear response."

Tanaka and Wiltgen first showed that they could label the cells involved in learning and demonstrate that they were reactivated during memory recall. Then they were able to switch off the specific nerve cells in the hippocampus, and show that the mice lost their memories of the unpleasant event. They were also able to show that turning off other cells in the hippocampus did not affect retrieval of that memory, and to follow fibers from the hippocampus to specific cells in the cortex.

"The cortex can't do it alone, it needs input from the hippocampus," Wiltgen said. "This has been a fundamental assumption in our field for a long time and Kazu’s data provides the first direct evidence that it is true."

They could also see how the specific cells in the cortex were connected to the amygdala, a structure in the brain that is involved in emotion and in generating the freezing response.

Co-authors are Aleksandr Pevzner, Anahita B. Hamidi, Yuki Nakazawa and Jalina Graham, all at the Center for Neuroscience. The work was funded by grants from the Whitehall Foundation, McKnight Foundation, Nakajima Foundation and the National Science Foundation.

Story Source:
The above story is based on materials provided by University of California - Davis. Note: Materials may be edited for content and length.

Journal Reference:
Kazumasa Z. Tanaka, Aleksandr Pevzner, Anahita B. Hamidi, Yuki Nakazawa, Jalina Graham, Brian J. Wiltgen. Cortical Representations Are Reinstated by the Hippocampus during Memory Retrieval. Neuron, 2014 DOI: 10.1016/j.neuron.2014.09.037

Incredibly light, strong materials recover original shape after being smashed

4:10:00 PM
Materials scientists have developed a method for creating new structural materials by taking advantage of the unusual properties that solids can have at the nanometer scale. They have used the method to produce a ceramic (e.g., a piece of chalk or a brick) that contains about 99.9 percent air yet is incredibly strong and can recover its original shape after being smashed by more than 50 percent.

This sequence shows how the Greer Lab's three-dimensional ceramic nanolattices can recover after being compressed by more than 50 percent. Clockwise, from left to right: an alumina nanolattice before compression, during compression, fully compressed, and recovered following compression.
Credit: Lucas Meza/Caltech

Imagine a balloon that could float without using any lighter-than-air gas. Instead, it could simply have all of its air sucked out while maintaining its filled shape. Such a vacuum balloon, which could help ease the world's current shortage of helium, could only be made if a material existed that was strong enough to sustain the pressure generated by forcing out all that air while still being lightweight and flexible.

Caltech materials scientist Julia Greer and her colleagues are on the path to developing such a material and many others that possess unheard-of combinations of properties. For example, they might create a material that is thermally insulating but also extremely lightweight, or one that is simultaneously strong, lightweight, and nonbreakable -- properties that are generally thought to be mutually exclusive.

Greer's team has developed a method for constructing new structural materials by taking advantage of the unusual properties that solids can have at the nanometer scale, where features are measured in billionths of meters. In a paper published in the September 12 issue of the journal Science, the Caltech researchers explain how they used the method to produce a ceramic (e.g., a piece of chalk or a brick) that contains about 99.9 percent air yet is incredibly strong, and that can recover its original shape after being smashed by more than 50 percent.

"Ceramics have always been thought to be heavy and brittle," says Greer, a professor of materials science and mechanics in the Division of Engineering and Applied Science at Caltech. "We're showing that in fact, they don't have to be either. This very clearly demonstrates that if you use the concept of the nanoscale to create structures and then use those nanostructures like LEGO to construct larger materials, you can obtain nearly any set of properties you want. You can create materials by design."

The researchers use a direct laser writing method called two-photon lithography to "write" a three-dimensional pattern in a polymer by allowing a laser beam to crosslink and harden the polymer wherever it is focused. The parts of the polymer that were exposed to the laser remain intact while the rest is dissolved away, revealing a three-dimensional scaffold. That structure can then be coated with a thin layer of just about any kind of material -- a metal, an alloy, a glass, a semiconductor, etc. Then the researchers use another method to etch out the polymer from within the structure, leaving a hollow architecture.

The applications of this technique are practically limitless, Greer says. Since pretty much any material can be deposited on the scaffolds, the method could be particularly useful for applications in optics, energy efficiency, and biomedicine. For example, it could be used to reproduce complex structures such as bone, producing a scaffold out of biocompatible materials on which cells could proliferate.

In the latest work, Greer and her students used the technique to produce what they call three-dimensional nanolattices that are formed by a repeating nanoscale pattern. After the patterning step, they coated the polymer scaffold with a ceramic called alumina (i.e., aluminum oxide), producing hollow-tube alumina structures with walls ranging in thickness from 5 to 60 nanometers and tubes from 450 to 1,380 nanometers in diameter.

Greer's team next wanted to test the mechanical properties of the various nanolattices they created. Using two different devices for poking and prodding materials on the nanoscale, they squished, stretched, and otherwise tried to deform the samples to see how they held up.

They found that the alumina structures with a wall thickness of 50 nanometers and a tube diameter of about 1 micron shattered when compressed. That was not surprising given that ceramics, especially those that are porous, are brittle. However, compressing lattices with a lower ratio of wall thickness to tube diameter -- where the wall thickness was only 10 nanometers -- produced a very different result.

"You deform it, and all of a sudden, it springs back," Greer says. "In some cases, we were able to deform these samples by as much as 85 percent, and they could still recover."

To understand why, consider that most brittle materials such as ceramics, silicon, and glass shatter because they are filled with flaws -- imperfections such as small voids and inclusions. The more perfect the material, the less likely you are to find a weak spot where it will fail. Therefore, the researchers hypothesize, when you reduce these structures down to the point where individual walls are only 10 nanometers thick, both the number of flaws and the size of any flaws are kept to a minimum, making the whole structure much less likely to fail.
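The flaw argument can be made concrete with a standard weakest-link model. The flaw probability and site counts below are invented round numbers for illustration; only the scaling logic comes from the passage:

```python
# Weakest-link toy model (an illustration of the flaw argument, not the
# paper's analysis): a wall fails if any of its N potential flaw sites
# contains a critical flaw, each independently with probability p.
def survival_probability(n_sites, p_flaw=1e-6):
    return (1.0 - p_flaw) ** n_sites

# Illustrative numbers: the count of flaw sites scales with wall volume,
# so shrinking walls from 50 nm to 10 nm cuts N by orders of magnitude.
thick_wall = survival_probability(n_sites=5_000_000)   # 50 nm walls
thin_wall = survival_probability(n_sites=40_000)       # 10 nm walls

print(f"50 nm walls survive with probability {thick_wall:.3f}")  # ~0.007
print(f"10 nm walls survive with probability {thin_wall:.3f}")   # ~0.961
```

Because failure probability compounds over every flaw site, even a modest reduction in volume produces a dramatic jump in the odds that a strut contains no critical flaw at all.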

"One of the benefits of using nanolattices is that you significantly improve the quality of the material because you're using such small dimensions," Greer says. "It's basically as close to an ideal material as you can get, and you get the added benefit of needing only a very small amount of material in making them."

The Greer lab is now aggressively pursuing various ways of scaling up the production of these so-called meta-materials.

20 Things You Didn't Know About... Time

8:02:00 PM

The beginning, the end, and the funny habits of our favorite ticking force.

1  “Time is an illusion. Lunchtime doubly so,” joked Douglas Adams in The Hitchhiker’s Guide to the Galaxy. Scientists aren’t laughing, though. Some speculative new physics theories suggest that time emerges from a more fundamental—and timeless—reality.
2  Try explaining that when you get to work late. The average U.S. city commuter loses 38 hours a year to traffic delays.
3  Wonder why you have to set your clock ahead in March? Daylight Saving Time began as a joke by Benjamin Franklin, who proposed waking people earlier on bright summer mornings so they might work more during the day and thus save candles. It was introduced in the U.K. in 1917 and then spread around the world.
4  Green days. The Department of Energy estimates that electricity demand drops by 0.5 percent during Daylight Saving Time, saving the equivalent of nearly 3 million barrels of oil.
5  By observing how quickly bank tellers made change, pedestrians walked, and postal clerks spoke, psychologists determined that the three fastest-paced U.S. cities are Boston, Buffalo, and New York.
6  The three slowest? Shreveport, Sacramento, and L.A.
7  One second used to be defined as 1/86,400 the length of a day. However, Earth’s rotation isn’t perfectly reliable. Tidal friction from the sun and moon slows our planet and increases the length of a day by 3 milliseconds per century.
8  This means that in the time of the dinosaurs, the day was just 23 hours long.
9  Weather also changes the day. During El Niño events, strong winds can slow Earth’s rotation by a fraction of a millisecond every 24 hours.
10  Modern technology can do better. In 1972 a network of atomic clocks in more than 50 countries was made the final authority on time, so accurate that it takes 31.7 million years to lose about one second.
11  To keep this time in sync with Earth’s slowing rotation, a “leap second” must be added every few years, most recently this past New Year’s Eve.
12  The world’s most accurate clock, at the National Institute of Standards and Technology in Colorado, measures vibrations of a single atom of mercury. In a billion years it will not lose one second.
13  Until the 1800s, every village lived in its own little time zone, with clocks synchronized to the local solar noon.
14  This caused havoc with the advent of trains and timetables. For a while watches were made that could tell both local time and “railway time.”
15  On November 18, 1883, American railway companies forced the national adoption of standardized time zones.
16  Thinking about how railway time required clocks in different places to be synchronized may have inspired Einstein to develop his theory of relativity, which unifies space and time.
17  Einstein showed that gravity makes time run more slowly. Thus airplane passengers, flying where Earth’s pull is weaker, age a few extra nanoseconds each flight.
18  According to quantum theory, the shortest moment of time that can exist is known as Planck time, or 0.0000000000000000000000000000000000000000001 second.
19  Time has not been around forever. Most scientists believe it was created along with the rest of the universe in the Big Bang, 13.7 billion years ago.
20  There may be an end of time. Three Spanish scientists posit that the observed acceleration of the expanding cosmos is an illusion caused by the slowing of time. According to their math, time may eventually stop, at which point everything will come to a standstill.
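The day-length and clock-accuracy claims above hang together arithmetically, which is easy to check with round numbers (the dinosaur-era date is our own assumed midpoint):

```python
# Back-of-envelope checks of the tidal-slowing and atomic-clock claims.

day_lengthening = 3e-3            # seconds added to the day per century
centuries_ago = 120e6 / 100       # ~120 million years ago, mid-dinosaur era
shorter_by = day_lengthening * centuries_ago          # seconds
print(f"Dinosaur-era day: ~{24 - shorter_by / 3600:.0f} hours")  # -> ~23 hours

seconds_per_year = 365.25 * 24 * 3600
fractional_error = 1 / (31.7e6 * seconds_per_year)    # 1 s per 31.7 Myr
print(f"Clock accuracy: ~{fractional_error:.1e}")     # about one part in 10^15
```

Losing one second in 31.7 million years works out to a fractional stability of roughly one part in a quadrillion, the benchmark figure usually quoted for atomic time.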

Engineers build world's smallest, fastest nanomotor: Can fit inside a single cell

7:47:00 PM
Researchers at the Cockrell School of Engineering at The University of Texas at Austin have built the smallest, fastest and longest-running tiny synthetic motor to date. The team's nanomotor is an important step toward developing miniature machines that could one day move through the body to administer insulin for diabetics when needed, or target and treat cancer cells without harming good cells.
Simple nanomotor. Credit: Image courtesy of University of Texas at Austin

With the goal of powering these yet-to-be invented devices, UT Austin engineers focused on building a reliable, ultra-high-speed nanomotor that can convert electrical energy into mechanical motion on a scale 500 times smaller than a grain of salt.

Mechanical engineering assistant professor Donglei "Emma" Fan led a team of researchers in the successful design, assembly and testing of a high-performing nanomotor in a nonbiological setting. The team's three-part nanomotor can rapidly mix and pump biochemicals and move through liquids, which is important for future applications. The team's study was published in a recent issue of Nature Communications.

Fan and her team are the first to achieve the extremely difficult goal of designing a nanomotor with large driving power.

With all its dimensions under 1 micrometer in size, the nanomotor could fit inside a human cell and is capable of rotating for 15 continuous hours at a speed of 18,000 RPM, the speed of a motor in a jet airplane engine. Comparable nanomotors run significantly more slowly, from 14 RPM to 500 RPM, and have only rotated for a few seconds up to a few minutes.
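The scale and endurance figures quoted here are easy to verify with quick arithmetic (the grain-of-salt width is our own assumed round number):

```python
# A typical grain of table salt is roughly 500 micrometers across (our
# assumed round number for the comparison).
grain_of_salt_um = 500.0
motor_scale_um = grain_of_salt_um / 500   # "500 times smaller than a grain of salt"
print(motor_scale_um)                     # 1.0 -> about 1 micrometer, as stated

# Total revolutions over a 15-hour run at 18,000 revolutions per minute.
total_revolutions = 18_000 * 60 * 15
print(f"{total_revolutions:,}")           # 16,200,000
```

So the sub-micrometer size claim is consistent with the salt-grain comparison, and a 15-hour run at that speed amounts to over 16 million revolutions, which puts the few-second lifetimes of earlier nanomotors in perspective.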

Looking forward, nanomotors could advance the field of nanoelectromechanical systems (NEMS), an area focused on developing miniature machines that are more energy efficient and less expensive to produce. In the near future, the Cockrell School researchers believe their nanomotors could provide a new approach to controlled biochemical drug delivery to live cells.

To test its ability to release drugs, the researchers coated the nanomotor's surface with biochemicals and initiated spinning. They found that the faster the nanomotor rotated, the faster it released the drugs.

"We were able to establish and control the molecule release rate by mechanical rotation, which means our nanomotor is the first of its kind for controlling the release of drugs from the surface of nanoparticles," Fan said. "We believe it will help advance the study of drug delivery and cell-to-cell communications."

The researchers' design addresses two major issues for nanomotors to date: assembly and control. The team built and operated the nanomotor using a patent-pending technique that Fan invented while studying at Johns Hopkins University. The technique relies on AC and DC electric fields to assemble the nanomotor's parts one by one.

In experiments, the researchers used the technique to turn the nanomotors on and off and propel the rotation either clockwise or counterclockwise. The researchers found that they could position the nanomotors in a pattern and move them in a synchronized fashion, which makes them more powerful and gives them more flexibility.

Fan and her team plan to develop new mechanical controls and chemical sensing that can be integrated into nanoelectromechanical devices. But first they plan to test their nanomotors near a live cell, which will allow Fan to measure how they deliver molecules in a controlled fashion.


Sugar-powered biobattery has 10 times the energy storage of lithium: Your smartphone might soon run on enzymes

7:33:00 PM
As you probably know from sucking down cans of Coke and masticating on candy, sugar — glucose, fructose, sucrose, dextrose — is an excellent source of energy. Biologically speaking, sugar molecules are energy-dense, easy to transport, and cheap to digest. There is a reason why almost every living cell on Earth generates its energy (ATP) from glucose. Now, researchers at Virginia Tech have successfully created a sugar-powered fuel cell that has an energy storage density of 596 amp-hours per kilogram — “one order of magnitude” higher than lithium-ion batteries. This fuel cell is refillable with a solution of maltodextrin, and its only byproducts are electricity and water. The chief researcher, Y.H. Percival Zhang, says the tech could be commercialized in as soon as three years.

Now, it’s not exactly news that sugar is an excellent energy source. As a culture we’ve probably known about it since before we were Homo sapiens. The problem is, unless you’re a living organism or some kind of incendiary device, extracting that energy is difficult. In nature, an enzymatic pathway is used — a production line of tailor-made enzymes that meddle with the glucose molecules until they become ATP. Because it’s easy enough to produce enzymes in large quantities, researchers have tried to create fuel cells that use artificial “metabolism” to break down glucose into electricity (biobatteries), but it has historically proven very hard to find the right pathway for maximum efficiency and to keep the enzymes in the right place over a long period of time.

A diagram of the enzymatic fuel cell. The little Pac-Man things are enzymes.
Now, however, Zhang and friends at Virginia Tech appear to have built a high-density fuel cell that uses an enzymatic pathway to create a lot of electricity from glucose. There doesn’t seem to be much information on how stable this biobattery is over multiple refills, but if Zhang thinks it could be commercialized in three years, that’s a very good sign. Curiously, the research paper says that the enzymes are non-immobilized — meaning Zhang found a certain battery chemistry that doesn’t require the enzymes to be kept in place… or, alternatively, that it will only work for a very short time.

The Virginia Tech biobattery uses 13 enzymes, plus air (it’s an air-breathing biobattery), to produce nearly 24 electrons from a single glucose unit. This equates to a power output of 0.8 mW/cm², a current density of 6 mA/cm², and an energy storage density of 596 Ah/kg. This last figure is impressive, at roughly 10 times the energy density of the lithium-ion batteries in your mobile devices. [Research paper: doi:10.1038/ncomms4026 - "A high-energy-density sugar biobattery based on a synthetic enzymatic pathway"]
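The 596 Ah/kg figure can be cross-checked with Faraday's law. Assuming 24 electrons per anhydroglucose unit of maltodextrin (about 162 g/mol) and the roughly 15-percent maltodextrin solution mentioned as the fuel, the reported number falls right out:

```python
# Back-of-envelope check of the 596 Ah/kg claim. The 162 g/mol glucose-unit
# mass and the 15% (w/w) fuel solution are our assumptions from context.

FARADAY = 96485.0        # coulombs per mole of electrons
electrons = 24           # electrons harvested per glucose unit
molar_mass = 0.162       # kg/mol per anhydroglucose unit of maltodextrin
solution_fraction = 0.15 # maltodextrin fraction of the fuel solution

charge_per_kg = electrons * FARADAY / molar_mass   # C per kg of maltodextrin
ah_per_kg_fuel = charge_per_kg / 3600              # ~3970 Ah/kg of pure fuel
ah_per_kg_solution = ah_per_kg_fuel * solution_fraction

print(f"{ah_per_kg_solution:.0f} Ah/kg")           # ~596, the reported figure
```

This suggests the headline number is quoted per kilogram of the dilute fuel solution; per kilogram of pure maltodextrin, the theoretical capacity is closer to 4,000 Ah/kg.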

If Zhang’s biobatteries pan out, you might soon be recharging your smartphone by pouring in a solution of 15% maltodextrin. That battery would not only be very safe (it produces water and electricity), but very cheap to run and very green. This seems to fit in perfectly with Zhang’s homepage, which talks about how his main goals in life are replacing crude oil with sugar, and feeding the world.

The other area in which biobatteries might be useful is powering implanted devices, such as pacemakers — or, in the future, subcutaneous sensors and computers. Such a biobattery could feed on the glucose in your bloodstream, providing an endless supply of safe electricity for the myriad implants that futuristic technocrats will surely have.

Lithium-sulfur batteries last longer with nanomaterial-packed cathode

8:49:00 PM
Electric vehicles could travel farther and more renewable energy could be stored with lithium-sulfur batteries that use a unique powdery nanomaterial.
Pacific Northwest National Laboratory developed a nickel-based metal organic framework, shown here in an illustration, to hold onto polysulfide molecules in the cathodes of lithium-sulfur batteries and extend the batteries' lifespans. The colored spheres in this image represent the 3D material's tiny pores, into which the polysulfides become trapped.
Credit: Pacific Northwest National Laboratory
Researchers added the powder, a kind of nanomaterial called a metal organic framework, to the battery's cathode to capture problematic polysulfides that usually cause lithium-sulfur batteries to fail after a few charges. A paper describing the material and its performance was published online April 4 in the American Chemical Society journal Nano Letters.

"Lithium-sulfur batteries have the potential to power tomorrow's electric vehicles, but they need to last longer after each charge and be able to be repeatedly recharged," said materials chemist Jie Xiao of the Department of Energy's Pacific Northwest National Laboratory. "Our metal organic framework may offer a new way to make that happen."

Today's electric vehicles are typically powered by lithium-ion batteries. But the chemistry of lithium-ion batteries limits how much energy they can store. As a result, electric vehicle drivers are often anxious about how far they can go before needing to charge. One promising solution is the lithium-sulfur battery, which can hold as much as four times more energy per mass than lithium-ion batteries. This would enable electric vehicles to drive farther on a single charge, as well as help store more renewable energy. The downside of lithium-sulfur batteries, however, is that they have a much shorter lifespan, because they can't currently be recharged as many times as lithium-ion batteries.
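The "four times more energy per mass" claim translates directly into driving range for a fixed pack mass. A quick back-of-the-envelope sketch, using illustrative numbers that are assumptions rather than figures from the article (the pack specific energy, pack mass, and vehicle consumption below are all hypothetical):

```python
# Illustrative comparison: same pack mass, 4x the specific energy.
# All numeric values here are assumed for the example, not from the study.
LI_ION_WH_PER_KG = 250        # assumed Li-ion pack specific energy
PACK_MASS_KG = 400            # assumed battery pack mass
CONSUMPTION_WH_PER_KM = 180   # assumed vehicle energy consumption

def range_km(specific_energy_wh_per_kg: float) -> float:
    """Driving range for a pack of PACK_MASS_KG at the given specific energy."""
    energy_wh = specific_energy_wh_per_kg * PACK_MASS_KG
    return energy_wh / CONSUMPTION_WH_PER_KM

li_ion_range = range_km(LI_ION_WH_PER_KG)
li_s_range = range_km(4 * LI_ION_WH_PER_KG)  # "four times more energy per mass"
```

Under these assumptions the range scales linearly with specific energy, so the hypothetical Li-S pack quadruples the range of the Li-ion pack of the same mass.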

Energy Storage 101

The reason can be found in how batteries work. Most batteries have two electrodes: one is positively charged and called a cathode, while the second is negative and called an anode. Electricity is generated when electrons flow through a wire that connects the two. To balance that flow of charge, positively charged ions shuffle from one electrode to the other through another path: the electrolyte solution in which the electrodes sit.

The lithium-sulfur battery's main obstacles are unwanted side reactions that cut the battery's life short. The undesirable action starts on the battery's sulfur-containing cathode, which slowly disintegrates and forms molecules called polysulfides that dissolve into the liquid electrolyte. Some of the sulfur—an essential part of the battery's chemical reactions—never returns to the cathode. As a result, the cathode has less material to keep the reactions going and the battery quickly dies.

New materials for better batteries

Researchers worldwide are trying to improve materials for each battery component to increase the lifespan and mainstream use of lithium-sulfur batteries. For this research, Xiao and her colleagues homed in on the cathode to stop polysulfides from moving through the electrolyte.

Many materials with tiny holes have been examined to physically trap polysulfides inside the cathode. Metal organic frameworks are porous, but the added strength of PNNL's material is its ability to strongly attract the polysulfide molecules.

The framework's positively charged nickel center tightly binds the polysulfide molecules to the cathodes. The result is a coordinate covalent bond that, when combined with the framework's porous structure, causes the polysulfides to stay put.

"The MOF's highly porous structure is a plus that further holds the polysulfide tight and makes it stay within the cathode," said PNNL electrochemist Jianming Zheng.

Nanomaterial is key

Metal organic frameworks—also called MOFs—are crystal-like compounds made of metal clusters connected to organic molecules, or linkers. Together, the clusters and linkers assemble into porous 3-D structures. MOFs can contain a number of different elements. PNNL researchers chose the transition metal nickel as the central element for this particular MOF because of its strong ability to interact with sulfur.

During lab tests, a lithium-sulfur battery with PNNL's MOF cathode maintained 89 percent of its initial power capacity after 100 charge-and-discharge cycles. Having shown the effectiveness of their MOF cathode, PNNL researchers now plan to further improve the cathode's mixture of materials so it can hold more energy. The team also needs to develop a larger prototype and test it for longer periods of time to evaluate the cathode's performance for real-world, large-scale applications.
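The 89-percent-after-100-cycles figure can be converted into an average per-cycle retention rate. The sketch below assumes a simple geometric (constant-fraction-per-cycle) fade model, which is an illustrative simplification and not the fade model used in the paper:

```python
# Hypothetical estimate: assume capacity fades by a constant fraction
# each cycle (geometric model), then the per-cycle retention is the
# 100th root of the retention after 100 cycles.
RETENTION_AFTER_100 = 0.89
CYCLES = 100

per_cycle_retention = RETENTION_AFTER_100 ** (1 / CYCLES)
per_cycle_fade_pct = (1 - per_cycle_retention) * 100
# Under this model, the average fade works out to roughly 0.12% per cycle.
```

Real cells rarely fade at a perfectly constant rate, so this is only a way to put the headline number on a per-cycle scale, not a prediction of long-term behavior.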

PNNL is also using MOFs in energy-efficient adsorption chillers and to develop new catalysts to speed up chemical reactions.

"MOFs are probably best known for capturing gases such as carbon dioxide," Xiao said. "This study opens up lithium-sulfur batteries as a new and promising field for the nanomaterial."

This research was funded by the Department of Energy's Office of Energy Efficiency and Renewable Energy. Researchers analyzed chemical interactions on the MOF cathode with instruments at EMSL, DOE's Environmental Molecular Sciences Laboratory at PNNL.

In January, a Nature Communications paper by Xiao and some of her PNNL colleagues described another possible solution for lithium-sulfur batteries: developing a hybrid anode that uses a graphite shield to block polysulfides.

Copyright © Science and Technology Updates.