10 New Technologies Of 2014

 

Top 10 emerging technologies for 2014

Technology has become perhaps the greatest agent of change in the modern world. While never without risk, positive technological breakthroughs promise innovative solutions to the most pressing global challenges of our time, from resource scarcity to global environmental change. However, a lack of appropriate investment, outdated regulatory frameworks and gaps in public understanding prevent many promising technologies from achieving their potential.
The World Economic Forum’s Global Agenda Council on Emerging Technologies identifies recent key trends in technological change in its annual list of Top 10 Emerging Technologies. By highlighting the most important technological breakthroughs, the Council aims to raise awareness of their potential and contribute to closing gaps in investment, regulation and public understanding. For 2014, the Council identified ten new technologies that could reshape our society in the future.
The 2014 list is:
  • Body-adapted Wearable Electronics
  • Nanostructured Carbon Composites
  • Mining Metals from Desalination Brine
  • Grid-scale Electricity Storage
  • Nanowire Lithium-ion Batteries
  • Screenless Display
  • Human Microbiome Therapeutics
  • RNA-based Therapeutics
  • Quantified Self (Predictive Analytics)
  • Brain-computer Interfaces

 

Body-adapted Wearable Electronics

 

From Google Glass to the Fitbit wristband, wearable technology has generated significant attention over the past year, with most existing devices helping people to better understand their personal health and fitness by monitoring exercise, heart rate, sleep patterns, and so on. The sector is shifting beyond external wearables like wristbands or clip-on devices to “body-adapted” electronics that further push the ever-shifting boundary between humans and technology.
The new generation of wearables is designed to adapt to the human body’s shape at the place of deployment. These wearables are typically tiny, packed with a wide range of sensors and a feedback system, and camouflaged to make their use less intrusive and more socially acceptable. These virtually invisible devices include earbuds that monitor heart rate, sensors worn under clothes to track posture, a temporary tattoo that tracks health vitals and haptic shoe soles that communicate GPS directions through vibration alerts felt by the feet. The applications are many and varied: haptic shoes are currently proposed for helping blind people navigate, while Google Glass has already been worn by oncologists to assist in surgery via medical records and other visual information accessed by voice commands.
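To make the haptic-shoe idea concrete, the sketch below shows one possible way such a feedback loop could turn a GPS bearing into a left-or-right vibration cue. The function name, angle threshold and cue strings are hypothetical illustrations of the principle, not any vendor’s actual firmware.

```python
def bearing_to_cue(current_heading_deg, bearing_to_waypoint_deg, tolerance_deg=15):
    """Map a heading error onto a simple left/right haptic cue.

    Hypothetical logic for a haptic shoe sole: vibrate the left or right
    insole depending on which way the wearer needs to turn.
    """
    # Signed heading error folded into the range [-180, 180)
    error = (bearing_to_waypoint_deg - current_heading_deg + 180) % 360 - 180
    if abs(error) <= tolerance_deg:
        return "keep walking straight"
    return "vibrate right insole" if error > 0 else "vibrate left insole"

# Example: wearer faces north (0 degrees), next waypoint lies due east (90 degrees)
print(bearing_to_cue(0, 90))   # -> vibrate right insole
```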
Technology analysts consider that success factors for wearable products include device size, non-invasiveness, and the ability to measure multiple parameters and provide real-time feedback that improves user behaviour. However, increased uptake also depends on social acceptability, particularly around privacy. For example, concerns have been raised about wearable devices that use cameras for facial recognition and memory assistance. Assuming these challenges can be managed, analysts project hundreds of millions of devices in use by 2016.

 

Nanostructured Carbon Composites

 

Emissions from the world’s rapidly growing fleet of vehicles are an environmental concern, and raising the operating efficiency of transport is a promising way to reduce its overall impact. New techniques for nanostructuring the carbon fibres used in novel composites are showing the potential to reduce the weight of cars by 10% or more. Lighter cars need less fuel to operate, increasing the efficiency of moving people and goods and reducing greenhouse gas emissions.
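A rough calculation illustrates why the weight saving matters. The sketch below assumes the commonly cited rule of thumb that fuel use falls by roughly 0.6–0.8% for every 1% of vehicle mass removed; the `sensitivity` value and the function itself are illustrative assumptions rather than figures from any particular composite design.

```python
def fuel_saving_estimate(mass_reduction_pct, sensitivity=0.7):
    """Estimate percentage fuel saved for a given percentage mass reduction.

    `sensitivity` is an assumed elasticity (fuel saved per unit of mass
    removed); values around 0.6-0.8 are often quoted for conventional cars,
    but treat this purely as an illustration.
    """
    return mass_reduction_pct * sensitivity

# A composite body that makes the car 10% lighter, under the assumed sensitivity
print(f"~{fuel_saving_estimate(10):.1f}% less fuel per kilometre (rough estimate)")
```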
However, efficiency is only one concern – another of equal importance is improving passenger safety. To increase the strength and toughness of new composites, the interface between carbon fibres and the surrounding polymer matrix is engineered at the nanoscale to improve anchoring – using carbon nanotubes, for example. In the event of an accident, these surfaces are designed to absorb impact without tearing, distributing the force and protecting passengers inside the vehicle.
A third challenge, which may now be closer to a solution, is that of recycling carbon fibre composites – something which has held back the widespread deployment of the technology. New techniques involve engineering cleavable “release points” into the material at the interface between the polymer and the fibre so that the bonds can be broken in a controlled fashion and the components that make up the composite can be recovered separately and reused. Taken together, these three elements could have a major impact by bringing forward the potential for manufacturing lightweight, super-safe and recyclable composite vehicles to a mass scale.

 

Mining Metals from Desalination Brine

 

As the global population continues to grow and developing countries emerge from poverty, freshwater is at risk of becoming one of the Earth’s most limited natural resources. In addition to water for drinking, sanitation and industry in human settlements, a significant proportion of the world’s agricultural production comes from irrigated crops grown in arid areas. With rivers like the Colorado, the Murray-Darling and the Yellow River no longer reaching the sea for long periods of time, the attraction of desalinating seawater as a new source of freshwater can only increase.
Desalination has serious drawbacks, however. In addition to high energy use (a topic covered in last year’s Top 10 Emerging Technologies), the process produces a concentrated reject brine, which can have a serious impact on marine life when returned to the sea. Perhaps the most promising approach to solving this problem is to see the brine from desalination not as waste, but as a resource to be harvested for valuable materials. These include lithium, magnesium and uranium, as well as more common elements such as sodium, calcium and potassium. Lithium and magnesium are valuable for use in high-performance batteries and lightweight alloys, for example, while rare earth elements used in electric motors and wind turbines – where potential shortages are already a strategic concern – may also be recovered.
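For a sense of scale, the sketch below estimates how much dissolved metal a given volume of reject brine might contain, assuming typical literature values for seawater concentrations and a brine roughly twice as concentrated as the feed. All of the numbers are approximate assumptions used for illustration only.

```python
# Approximate concentrations of some dissolved elements in seawater, in mg
# per litre (typical literature values; treat them as assumptions here).
SEAWATER_MG_PER_L = {
    "magnesium": 1290.0,
    "potassium": 380.0,
    "lithium": 0.17,
    "uranium": 0.003,
}

def mass_in_brine_kg(element, brine_m3, concentration_factor=2.0):
    """Rough mass of a dissolved element in a volume of reject brine.

    Assumes the brine is `concentration_factor` times more concentrated
    than seawater (about 2x at roughly 50% freshwater recovery).
    """
    mg_per_litre = SEAWATER_MG_PER_L[element] * concentration_factor
    litres = brine_m3 * 1000            # 1 cubic metre = 1,000 litres
    return mg_per_litre * litres / 1e6  # milligrams -> kilograms

# Illustrative volume: one million cubic metres of brine
for element in SEAWATER_MG_PER_L:
    print(f"{element}: ~{mass_in_brine_kg(element, 1_000_000):,.0f} kg")
```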
New processes using catalyst-assisted chemistry raise the possibility of extracting these metals from reject desalination brine at a cost that may eventually become competitive with land-based mining of ores or lake deposits. This economic benefit may offset the overall cost of desalination, making it more viable on a large scale, in turn reducing the human pressures on freshwater ecosystems.

 

Grid-scale Electricity Storage

 

Electricity cannot be directly stored, so electrical grid managers must constantly ensure that overall demand from consumers is exactly matched by an equal amount of power fed into the grid by generating stations. Because the chemical energy in coal and gas can be stored in relatively large quantities, conventional fossil-fuelled power stations offer dispatchable energy available on demand, making grid management a relatively simple task. However, fossil fuels also release greenhouse gases, causing climate change – and many countries now aim to replace carbon-based generators with a clean energy mix of renewable, nuclear or other non-fossil sources.
Clean energy sources, in particular wind and solar, can be highly intermittent; instead of producing electricity when consumers and grid managers want it, they generate uncontrollable quantities only when favourable weather conditions allow. A scaled-up nuclear sector might also present challenges due to its preferred operation as always-on baseload. Hence, the development of grid-scale electricity storage options has long been a “holy grail” for clean energy systems. To date, only pumped storage hydropower can claim a significant role, but it is expensive, environmentally challenging and totally dependent on favourable geography.
There are signs that a range of new technologies is getting closer to cracking this challenge. Some, such as flow batteries, may in future be able to store liquid chemical energy in large quantities, analogous to the way coal and gas are stored today. Various solid battery options are also competing to store electricity in sufficiently energy-dense and cheaply available materials. Newly invented graphene supercapacitors offer the possibility of extremely rapid charging and discharging over many tens of thousands of cycles. Other options store energy mechanically, in large flywheels or as compressed air held underground.
A more novel option being explored at medium scale in Germany is the methanation of carbon dioxide using hydrogen from electrolysis: surplus electricity is used to split water into hydrogen and oxygen, and the hydrogen is then reacted with waste carbon dioxide to form methane, which can be stored and later burned – if necessary, to generate electricity. While the round-trip efficiency of this and other options may be relatively low, storage capacity will clearly have high economic value in the future. It is too early to pick a winner, but in our assessment the pace of technological development in this field is faster than ever, making a fundamental breakthrough more likely in the near term.
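As a rough illustration of why round-trip efficiency matters, the sketch below multiplies assumed efficiencies for each step of a power-to-methane-to-power chain. The individual figures are plausible but purely illustrative values, not measurements from the German pilot plants.

```python
# Assumed, illustrative efficiencies for each step of a power-to-methane-
# to-power chain; real plant figures vary and are debated.
STEPS = {
    "electrolysis (electricity -> hydrogen)": 0.70,
    "methanation (hydrogen + CO2 -> methane)": 0.80,
    "gas power plant (methane -> electricity)": 0.55,
}

round_trip = 1.0
for step, efficiency in STEPS.items():
    round_trip *= efficiency
    print(f"{step}: {efficiency:.0%}")

print(f"Round-trip: ~{round_trip:.0%} of the original electricity recovered")
# With these assumptions only about a third of the stored energy comes back,
# so the approach pays off mainly where long-duration storage is highly valuable.
```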

 

Nanowire Lithium-ion Batteries

 

As stores of electrical charge, batteries are critically important in many aspects of modern life. Lithium-ion batteries, which offer good energy density (energy per unit weight or volume), are routinely packed into mobile phones, laptops and electric cars, to name just a few common uses. However, to increase the range of electric cars to match that of petrol-powered competitors – not to mention the battery lifetime between charges of mobile phones and laptops – battery energy density needs to be improved dramatically.
Batteries are typically composed of two electrodes, a positive terminal known as the cathode and a negative terminal known as the anode, with an electrolyte in between. The electrolyte allows ions to move between the electrodes to produce current. In lithium-ion batteries, the anode is composed of graphite, which is relatively cheap and durable. However, researchers have begun to experiment with silicon anodes, which would offer much greater storage capacity.
One engineering challenge is that silicon anodes tend to suffer structural failure from swelling and shrinking during charge-discharge cycles. Over the last year, researchers have developed possible solutions involving silicon nanowires or nanoparticles, which seem to solve the problems associated with silicon’s volume expansion when it reacts with lithium. The larger surface area of nanoparticles and nanowires further increases the battery’s power density, allowing for fast charging and current delivery.
Able to charge fully more quickly and to store 30%-40% more energy than today’s lithium-ion batteries, this next generation of batteries could help transform the electric car market and allow solar electricity to be stored at the household scale. Silicon-anode batteries are expected to ship first in smartphones within the next two years.
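The scale of the opportunity at the anode can be illustrated with widely quoted theoretical capacities for graphite and fully lithiated silicon. The sketch below uses those figures plus an assumed anode mass fraction to give a rough, illustrative sense of why cell-level gains of the size quoted above are considered plausible.

```python
# Widely quoted theoretical specific capacities, in mAh per gram of anode
# material (electrode-level figures; whole-cell gains are much smaller).
GRAPHITE_MAH_PER_G = 372     # LiC6
SILICON_MAH_PER_G = 3579     # fully lithiated Li15Si4

ratio = SILICON_MAH_PER_G / GRAPHITE_MAH_PER_G
print(f"Silicon holds roughly {ratio:.0f}x more lithium per gram of anode")

# Illustrative cell-level view, assuming the graphite anode is ~15% of cell
# mass: a silicon anode of equal capacity would need about a tenth of that
# mass, freeing mass and volume for more energy-storing material.
anode_fraction = 0.15                                   # assumed
freed_fraction = anode_fraction * (1 - GRAPHITE_MAH_PER_G / SILICON_MAH_PER_G)
print(f"~{freed_fraction:.0%} of cell mass potentially freed up (rough estimate)")
```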

 

Screenless Display

 

One of the more frustrating aspects of modern communications technology is that, as devices have miniaturized, they have become more difficult to interact with – no one would type out a novel on a smartphone, for example. The limited space of screen-based displays provides a clear opportunity for screenless displays to fill the gap. Full-sized keyboards can already be projected onto a surface for users to type on, with no concern over whether the device will fit into a pocket. Perhaps evoking memories of the early Star Wars films, holographic images can now be generated in three dimensions; in 2013, MIT’s Media Lab reported a prototype inexpensive holographic colour video display with the resolution of a standard TV.
Screenless display may also be achieved by projecting images directly onto a person’s retina, not only avoiding the need for weighty hardware, but also promising to safeguard privacy by allowing people to interact with computers without others sharing the same view. By January 2014, one start-up company had already raised a substantial sum via Kickstarter with the aim of commercializing a personal gaming and cinema device using retinal display. In the longer term, technology may allow synaptic interfaces that bypass the eye altogether, transmitting “visual” information directly to the brain.
This field saw rapid progress in 2013 and appears set for imminent breakthroughs in the scalable deployment of screenless displays. Various companies have made significant advances in the field, including virtual-reality headsets, bionic contact lenses, mobile phones designed for elderly and partially sighted users, and hologram-like videos that require no moving parts or glasses.

 

Human Microbiome Therapeutics

 

The human body is perhaps more properly described as an ecosystem than as a single organism: microbial cells typically outnumber human cells by 10 to one. This human microbiome has been the subject of intensifying research in the past few years, with the Human Microbiome Project in 2012 reporting results generated from 80 collaborating scientific institutions. They found that more than 10,000 microbial species occupy the human ecosystem, comprising trillions of cells and making up 1%-3% of the body’s mass.
Through advanced DNA sequencing, bioinformatics and culturing technologies, the diverse microbial species that cohabit with the human body are being identified and characterized, with differences in their abundance correlated with disease and health.
It is increasingly understood that this plethora of microbes plays an important role in our survival: bacteria in the gut, for example, allow humans to digest foods and absorb important nutrients that their bodies would otherwise not be able to access. On the other hand, pathogens that are ubiquitous in humans can sometimes turn virulent and cause sickness or even death.
Attention is being focused on the gut microbiome and its role in diseases ranging from infections to obesity, diabetes and inflammatory bowel disease. It is increasingly understood that antibiotic treatments that destroy gut flora can result in conditions such as Clostridium difficile infections, which in rare cases can be life-threatening. Meanwhile, a new generation of therapeutics, comprising subsets of microbes found in the healthy gut, is under clinical development with a view to improving medical treatments. Advances in human microbiome technologies represent an unprecedented opportunity to develop new treatments for serious diseases and to improve general healthcare outcomes.

 

RNA-based Therapeutics

 

RNA is an essential molecule in cellular biology, translating the genetic instructions encoded in DNA into the production of the proteins that enable cells to function. Because protein production is also a central factor in most human diseases and disorders, RNA-based therapeutics have long been thought to hold the potential to treat a range of problems where conventional drug-based treatments cannot offer much help. The field has been slow to develop, however, with initial high hopes dented by the sheer complexity of the effort and the need to better understand the variability of gene expression in cells.
Over the past year, there has been a resurgence of interest in this new field of biotech healthcare, with two RNA-based treatments approved as human therapeutics as of 2014. RNA-based drugs for a range of conditions including genetic disorders, cancer and infectious disease are being developed based on the mechanism of RNA interference, which is used to silence the expression of defective or overexpressed genes.
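The core mechanism of RNA interference is base-pairing: a short guide strand binds the complementary stretch of a target mRNA and marks it for silencing. The toy sketch below computes such a complementary guide sequence; the target sequence is invented, and real siRNA design involves many additional rules beyond simple pairing.

```python
# Toy illustration of the base-pairing behind RNA interference: a guide
# strand silences a gene by binding the complementary stretch of its mRNA.
# Real siRNA design adds many rules (length, overhangs, off-target checks).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def guide_strand(target_mrna: str) -> str:
    """Return the reverse complement of a target mRNA stretch."""
    return "".join(COMPLEMENT[base] for base in reversed(target_mrna))

target = "AUGGCUUCAGGAACUCUAGAA"   # invented 21-nucleotide target sequence
print("target mRNA :", target)
print("guide strand:", guide_strand(target))
```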
Extending the repertoire of RNA-based therapeutics, an even newer platform based on messenger RNA (mRNA) molecules is now emerging. Specific mRNA sequences injected intramuscularly or intravenously can act as therapeutic agents: the patient’s own cells translate them into the corresponding proteins, which deliver the therapeutic effect. Unlike treatments aimed at changing DNA directly, RNA-based therapeutics do not cause permanent changes to the cell’s genome, so treatment can be adjusted or discontinued as necessary.
Advances in basic RNA science, synthesis technology and in vivo delivery are combining to enable a new generation of RNA-based drugs that can attenuate the abundance of natural proteins, or allow for the in vivo production of optimized, therapeutic proteins. Working in collaboration with large pharmaceutical companies and academia, several private companies that aim to offer RNA-based treatments have been launched. We expect this field of healthcare to increasingly challenge conventional pharmaceuticals in forging new treatments for difficult diseases in the next few years.

 

Quantified Self (Predictive Analytics)

 

The quantified-self movement has existed for many years as a collaboration of people continually collecting data on their everyday activities in order to make better choices about their health and behaviour. But with today’s Internet of Things, the movement has begun to come into its own and have a wider impact.
Smartphones contain a rich record of people’s activities, including who they know (contact lists, social networking apps), who they talk to (call logs, text logs, e-mails), where they go (GPS, Wi-Fi and geotagged photos) and what they do (the apps they use, accelerometer data). Using these data and specialized machine-learning algorithms, detailed and predictive models of people and their behaviours can be built to help with urban planning, personalized medicine, sustainability and medical diagnosis.
For example, a team at Carnegie Mellon University has been looking at how to use smartphone data to predict the onset of depression by modelling changes in sleep behaviours and social relationships over time. In another example, the Livehoods project used large quantities of geotagged data created by people’s smartphones (through services such as Instagram and Foursquare) and crawled from the Web to help researchers understand patterns of movement through urban spaces.
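As a toy illustration of this kind of predictive analytics – and emphatically not the Carnegie Mellon team’s actual model – the sketch below trains a simple classifier on synthetic phone-derived features such as sleep duration and daily contacts. Every feature, label and threshold is invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "phone-derived" features and a made-up risk label; the point is
# only to show the general shape of quantified-self predictive analytics.
rng = np.random.default_rng(0)
n = 200
sleep_hours = rng.normal(7.0, 1.5, n)      # e.g. inferred from screen/motion logs
daily_contacts = rng.poisson(6, n)         # e.g. from call and text logs
# Invented ground truth: short sleep or low social contact raises "risk"
risk = ((sleep_hours < 6) | (daily_contacts < 4)).astype(int)

X = np.column_stack([sleep_hours, daily_contacts])
model = LogisticRegression().fit(X, risk)

# Score a hypothetical new week of data: 5.2 h average sleep, 2 contacts/day
print("predicted risk probability:",
      round(model.predict_proba([[5.2, 2]])[0, 1], 2))
```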
In recent years, sensors have become cheap and increasingly ubiquitous as more manufacturers include them in their products to understand consumer behaviour and avoid the need for expensive market research. For example, cars can record every aspect of a person’s driving habits, and this information can be shown in smartphone apps or used as big data in urban planning or traffic management. As the trend continues towards extensive data gathering to track every aspect of people’s lives, the challenge becomes how to use this information optimally, and how to reconcile it with privacy and other social concerns.

 

Brain-computer Interfaces

 

The ability to control a computer using only the power of the mind is closer than one might think. Brain-computer interfaces, in which computers read and interpret signals directly from the brain, have already achieved clinical success in allowing quadriplegics, people with “locked-in syndrome” and stroke survivors to move their own wheelchairs, or even to drink coffee from a cup by controlling the action of a robotic arm with their brain waves. In addition, direct brain implants have helped restore partial vision to people who have lost their sight.
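A minimal sketch of one common building block of such systems appears below: estimating the power of a brain signal in a frequency band of interest and mapping it to a crude command. The signal is synthetic and the thresholding rule is purely illustrative of the signal-to-command idea, not any clinical system.

```python
import numpy as np

# Synthetic "EEG" trace with a strong 10 Hz (mu/alpha band) component plus noise.
fs = 256                                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)               # two seconds of samples
rng = np.random.default_rng(1)
signal = 4 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

# Band power in 8-12 Hz relative to total power, a classic motor-imagery feature
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
alpha_fraction = spectrum[(freqs >= 8) & (freqs <= 12)].sum() / spectrum[freqs > 0].sum()

# Crude rule: a strong mu/alpha rhythm suggests rest; its suppression suggests
# imagined movement, which a real BCI would map to a wheelchair or arm command.
command = "rest" if alpha_fraction > 0.5 else "move"
print(f"alpha fraction: {alpha_fraction:.2f} -> command: {command}")
```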
Recent research has focused on the possibility of using brain-computer interfaces to connect different brains together directly. Researchers at Duke University last year reported successfully connecting the brains of two rats over the Internet (in what was termed a “brain net”), with animals in different countries cooperating to perform simple tasks and earn a reward. Also in 2013, scientists at Harvard University reported that they were able to establish a functional link between the brains of a rat and a human with a non-invasive, computer-to-brain interface.
Other research projects have focused on manipulating memories or directly implanting them from a computer into the brain. In mid-2013, MIT researchers reported having successfully implanted a false memory into the brain of a mouse. In humans, the ability to directly manipulate memories might have applications in the treatment of post-traumatic stress disorder, while in the longer term information might be uploaded into human brains in the manner of a computer file. This rapidly advancing field clearly raises numerous ethical issues.
