Emerging Technologies of 2015

Zero emission cars fuelled by hydrogen and computer chips that mimic the human brain are among the technological breakthroughs recognized as the Top 10 Emerging Technologies of 2015.

The World Economic Forum’s Meta-Council on Emerging Technologies compiles the list each year to draw attention to the technologies its members believe have the greatest potential to address chronic global challenges. The purpose is also to initiate a debate on any human, societal, economic or environmental risks the technologies pose, with the aim of addressing concerns before adoption becomes widespread.

This year’s list offers a glimpse of the power of innovation to improve lives, transform industries and safeguard the planet:

1. Fuel cell vehicles
Zero-emission cars that run on hydrogen

“Fuel cell” vehicles have long been promised, as they potentially offer several major advantages over electric and hydrocarbon-powered vehicles. However, the technology has only now begun to reach the stage where automotive companies are planning consumer launches. Initial prices are likely to be around $70,000, but should come down significantly as volumes increase over the next couple of years.
Unlike batteries, which must be charged from an external source, fuel cells generate electricity directly, using fuels such as hydrogen or natural gas. In practice, fuel cells and batteries are combined, with the fuel cell generating electricity and the batteries storing this energy until demanded by the motors that drive the vehicle. Fuel cell vehicles are therefore hybrids, and will likely also deploy regenerative braking – a key capability for maximizing efficiency and range.
Unlike battery-powered electric vehicles, fuel cell vehicles behave like any conventionally fuelled vehicle: they offer a long cruising range – up to 650 km per tank (the fuel is usually compressed hydrogen gas) – and a hydrogen refill takes only about three minutes. Hydrogen is clean-burning, producing only water vapour as waste, so fuel cell vehicles running on hydrogen will be zero-emission, an important factor given the need to reduce air pollution.
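
As a rough sanity check on those figures, the range follows from the tank’s energy content and the efficiency of the stack and drivetrain. The numbers in this sketch are illustrative assumptions, not manufacturer specifications:

```python
# Back-of-envelope range estimate for a fuel cell vehicle.
# All numbers are illustrative assumptions, not specifications.

TANK_H2_KG = 5.0               # typical compressed-hydrogen tank capacity (assumed)
H2_LHV_KWH_PER_KG = 33.3       # lower heating value of hydrogen
FUEL_CELL_EFFICIENCY = 0.55    # stack plus drivetrain efficiency (assumed)
CONSUMPTION_KWH_PER_KM = 0.15  # energy demand at the wheels (assumed)

usable_energy_kwh = TANK_H2_KG * H2_LHV_KWH_PER_KG * FUEL_CELL_EFFICIENCY
range_km = usable_energy_kwh / CONSUMPTION_KWH_PER_KM
print(f"Estimated range: {range_km:.0f} km")  # ~610 km, in line with the ~650 km claim
```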

There are a number of ways to produce hydrogen without generating carbon emissions. Most obviously, renewable sources of electricity from wind and solar sources can be used to electrolyse water – though the overall energy efficiency of this process is likely to be quite low. Hydrogen can also be split from water in high-temperature nuclear reactors or generated from fossil fuels such as coal or natural gas, with the resulting CO2 captured and sequestered rather than released into the atmosphere.
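
A minimal sketch of why that overall efficiency is low: the stages of the electricity-to-hydrogen-to-motion path multiply together. The stage values below are rough, commonly cited figures, not measurements from the article:

```python
# Chained stage efficiencies for the renewable-electricity -> hydrogen -> motion path.
# Stage values are rough, commonly cited figures (assumptions, not measurements).

stages = {
    "electrolysis": 0.70,  # electricity -> hydrogen
    "compression":  0.90,  # compressing the gas for storage and transport
    "fuel_cell":    0.55,  # hydrogen -> electricity on board
}

overall = 1.0
for name, efficiency in stages.items():
    overall *= efficiency

print(f"Overall efficiency: {overall:.0%}")  # ~35% of the original electricity
```
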
As well as the production of cheap hydrogen on a large scale, a significant challenge is the lack of a hydrogen distribution infrastructure that would be needed to parallel and eventually replace petrol and diesel filling stations. Long distance transport of hydrogen, even in a compressed state, is not considered economically feasible today. However, innovative hydrogen storage techniques, such as organic liquid carriers that do not require high-pressure storage, will soon lower the cost of long-distance transport and ease the risks associated with gas storage and inadvertent release.

Mass-market fuel cell vehicles are an attractive prospect, because they will offer the range and fuelling convenience of today’s diesel and petrol-powered vehicles while providing the benefits of sustainability in personal transportation. Achieving these benefits will, however, require the reliable and economical production of hydrogen from entirely low-carbon sources, and its distribution to a growing fleet of vehicles (expected to number in the many millions within a decade).

2. Next-generation robotics
Rolling away from the production line

The popular imagination has long foreseen a world where robots take over all manner of everyday tasks.
This robotic future has stubbornly refused to materialize, however, with robots still limited to factory assembly lines and other controlled tasks. Although heavily used (in the automotive industry, for instance), these robots are large and dangerous to human co-workers, and have to be separated from them by safety cages.
Advances in robotics technology are making human-machine collaboration an everyday reality. Better and cheaper sensors make a robot more able to understand and respond to its environment. Robot bodies are becoming more adaptive and flexible, with designers taking inspiration from the extraordinary flexibility and dexterity of complex biological structures, such as the human hand. And robots are becoming more connected, benefiting from the cloud-computing revolution by being able to access instructions and information remotely, rather than having to be programmed as a fully autonomous unit.

The new age of robotics takes these machines away from the big manufacturing assembly lines, and into a wide variety of tasks. Using GPS technology, just like smartphones, robots are beginning to be used in precision agriculture for weed control and harvesting. In Japan, robots are being trialled in nursing roles: they help patients out of bed and support stroke victims in regaining control of their limbs. Smaller and more dextrous robots, such as Dexter Bot, Baxter and LBR iiwa, are designed to be easily programmable and to handle manufacturing tasks that are laborious or uncomfortable for human workers.
Indeed, robots are ideal for tasks that are too repetitive or dangerous for humans to undertake, and can work 24 hours a day at a lower cost than human workers. In reality, new-generation robotic machines are likely to collaborate with humans rather than replace them. Even considering advances in design and artificial intelligence, human involvement and oversight will remain essential.

There remains the risk that robots may displace human workers from jobs, although previous generations of automation have tended to lead to higher productivity and growth, with benefits throughout the economy. Decades-old fears of networked robots running out of control may become more salient as next-generation robotics are linked into the web – but familiarisation, as people employ domestic robots for household chores, is more likely to reduce fears than to fan them. And new research into social robots – which know how to collaborate and build working alliances with humans – means that a future where robots and humans work together, each doing what it does best, is a strong likelihood. Nevertheless, the next generation of robotics poses novel questions for fields from philosophy to anthropology about the human relationship to machines.
 
3. Recyclable thermoset plastics
A new kind of plastic to cut landfill waste

Plastics are divided into thermoplastics and thermoset plastics. The former can be heated and shaped many times, and are ubiquitous in the modern world, comprising everything from children’s toys to lavatory seats. Because they can be melted down and reshaped, thermoplastics are generally recyclable. Thermoset plastics, however, can only be heated and shaped once: after that, molecular changes mean they are “cured”, retaining their shape and strength even when subject to intense heat and pressure.
Due to this durability, thermoset plastics are a vital part of our modern world, and are used in everything from mobile phones and circuit boards to the aerospace industry. But the same characteristics that have made them essential in modern manufacturing also make them impossible to recycle. As a result, most thermoset polymers end up as landfill. Given the ultimate objective of sustainability, there has long been a pressing need for recyclability in thermoset plastics.

In 2014 critical advances were made in this area, with the publication of a landmark paper in the journal Science announcing the discovery of new classes of thermosetting polymers that are recyclable. Called poly(hexahydrotriazine)s, or PHTs, these can be dissolved in strong acid, breaking apart the polymer chains into component monomers that can then be reassembled into new products. Like traditional unrecyclable thermosets, these new structures are rigid, resistant to heat and tough, with the same potential applications as their unrecyclable forerunners.
Although no recycling is 100% efficient, this innovation – if widely deployed – should speed up the move towards a circular economy with a big reduction in landfill waste from plastics. We expect recyclable thermoset polymers to replace unrecyclable thermosets within five years, and to be ubiquitous in newly manufactured goods by 2025.

4. Precise genetic-engineering techniques
A breakthrough offers better crops with less controversy

Conventional genetic engineering has long caused controversy. However, new techniques are emerging that allow us to directly “edit” the genetic code of plants to make them, for example, more nutritious or better able to cope with a changing climate.
Currently, the genetic engineering of crops relies on the bacterium Agrobacterium tumefaciens to transfer desired DNA into the target genome. The technique is proven and reliable, and despite widespread public fears, there is a consensus in the scientific community that genetically modifying organisms using this technique is no more risky than modifying them through conventional breeding. However, while Agrobacterium is useful, more precise and varied genome-editing techniques have been developed in recent years.

These include ZFNs, TALENs and, more recently, the CRISPR-Cas9 system, which evolved in bacteria as a defence mechanism against viruses. The CRISPR-Cas9 system uses an RNA molecule to target DNA, cutting at a known, user-selected sequence in the target genome. This can disable an unwanted gene or modify it in a way that is functionally indistinguishable from a natural mutation. Using “homologous recombination”, CRISPR can also be used to insert new DNA sequences, or even whole genes, into the genome in a precise way.
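
As a toy illustration of the targeting step (not a real genome-editing tool), the sketch below scans a DNA string for 20-nucleotide sites followed by the NGG “PAM” motif that Cas9 requires; the sample sequence is invented:

```python
import re

def find_cas9_targets(dna: str, guide_len: int = 20):
    """Return (position, protospacer) pairs where a guide RNA could direct
    Cas9: a guide_len site immediately followed by an NGG PAM motif."""
    dna = dna.upper()
    # Lookahead so overlapping candidate sites are all found.
    pattern = r"(?=([ACGT]{%d})[ACGT]GG)" % guide_len
    return [(m.start(), m.group(1)) for m in re.finditer(pattern, dna)]

# Illustrative sequence, not a real gene.
sample = "ATGCGTACCGGTTAGCTAGCATCGATCGGGTACCTTAGGCATCGATCGTAGCTAGG"
for pos, protospacer in find_cas9_targets(sample):
    print(pos, protospacer)
```
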
Another aspect of genetic engineering that appears poised for a major advance is the use of RNA interference (RNAi) in crops. RNAi is effective against viruses and fungal pathogens, and can also protect plants against insect pests, reducing the need for chemical pesticides. Viral genes have been used to protect papaya plants against the ringspot virus, for example, with no sign of resistance evolving in over a decade of use in Hawaii. RNAi may also benefit major staple-food crops, protecting wheat against stem rust, rice against blast, potato against blight and banana against fusarium wilt.

Many of these innovations will be particularly beneficial to smaller farmers in developing countries. As such, genetic engineering may become less controversial, as people recognize its effectiveness at boosting the incomes and improving the diets of millions of people. In addition, more precise genome editing may allay public fears, especially if the resulting plant or animal is not considered transgenic because no foreign genetic material is introduced.
Taken together, these techniques promise to advance agricultural sustainability by reducing input use in multiple areas, from water and land to fertilizer, while also helping crops to adapt to climate change.

5. Additive manufacturing
The future of making things, from printable organs to intelligent clothes

As the name suggests, additive manufacturing is the opposite of subtractive manufacturing. The latter is how manufacturing has traditionally been done: starting with a larger piece of material (wood, metal, stone, etc), layers are removed, or subtracted, to leave the desired shape. Additive manufacturing instead starts with loose material, either liquid or powder, and then builds it into a three-dimensional shape using a digital template.
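
A minimal sketch of that layer-by-layer idea, assuming an invented geometry: slicing a sphere into horizontal layers of filled cells, the way a slicer program turns a digital template into successive deposits of material:

```python
import math

def frange(start, stop, step):
    """Simple float range helper."""
    while start <= stop:
        yield start
        start += step

def slice_sphere(radius: float, layer_height: float, cell: float):
    """Approximate a sphere as a stack of layers, each a set of filled
    (x, y) cells -- the additive counterpart of carving from a block."""
    layers = []
    z = -radius
    while z <= radius:
        r_layer = math.sqrt(max(radius**2 - z**2, 0.0))  # radius of this slice
        cells = [(x, y)
                 for x in frange(-radius, radius, cell)
                 for y in frange(-radius, radius, cell)
                 if x * x + y * y <= r_layer**2]
        layers.append(cells)
        z += layer_height
    return layers

# Arbitrary dimensions, purely for illustration.
for i, cells in enumerate(slice_sphere(radius=10.0, layer_height=2.0, cell=2.0)):
    print(f"layer {i}: {len(cells)} cells of material deposited")
```
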
3D products can be highly customized to the end user, unlike mass-produced manufactured goods. An example is the company Invisalign, which uses computer imaging of customers’ teeth to make near-invisible braces tailored to their mouths. Other medical applications are taking 3D printing in a more biological direction: by directly printing human cells, it is now possible to create living tissues that may find potential application in drug safety screening and, ultimately, tissue repair and regeneration. An early example of this bioprinting is Organovo’s printed liver-cell layers, which are aimed at drug testing, and may eventually be used to create transplant organs. Bioprinting has already been used to generate skin and bone, as well as heart and vascular tissue, which offer huge potential in future personalized medicine.

An important next stage in additive manufacturing would be the 3D printing of integrated electronic components, such as circuit boards. Nano-scale computer parts, like processors, are difficult to manufacture this way because of the challenges of combining electronic components with others made from multiple different materials. 4D printing now promises to bring in a new generation of products that can alter themselves in response to environmental changes, such as heat and humidity. This could be useful in clothes or footwear, for example, as well as in healthcare products, such as implants designed to change in the human body.
Like distributed manufacturing, additive manufacturing is potentially highly disruptive to conventional processes and supply chains. But it remains a nascent technology today, with applications mainly in the automotive, aerospace and medical sectors. Rapid growth is expected over the next decade as more opportunities emerge and innovation in this technology brings it closer to the mass market.

6. Emergent artificial intelligence
What happens when a computer can learn on the job?

Artificial intelligence (AI) is, in simple terms, the science of doing by computer the things that people can do. Over recent years, AI has advanced significantly: most of us now use smartphones that can recognize human speech, or have travelled through an airport immigration queue using image-recognition technology. Self-driving cars and automated flying drones are now in the testing stage before anticipated widespread use, and for certain learning and memory tasks, machines now outperform humans. Watson, an artificially intelligent computer system, beat the best human contestants at the quiz show Jeopardy!
Artificial intelligence, in contrast to normal hardware and software, enables a machine to perceive and respond to its changing environment. Emergent AI takes this a step further, with progress arising from machines that learn automatically by assimilating large volumes of information. An example is NELL, the Never-Ending Language Learning project from Carnegie Mellon University, a computer system that not only reads facts by crawling through hundreds of millions of web pages, but attempts to improve its reading and understanding competence in the process in order to perform better in the future.
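
A highly simplified sketch of the “learn by reading” loop, for illustration only (it shows online learning in general, not NELL’s actual architecture): a learner extracts candidate facts with one naive pattern, keeps confidence counts, and promotes facts it has seen repeatedly:

```python
import re
from collections import Counter

class TinyReader:
    """Toy never-ending learner: pattern-extract candidate facts from text
    and promote those seen often enough. Not NELL's real architecture."""

    def __init__(self, promote_at: int = 2):
        self.candidates = Counter()
        self.promote_at = promote_at
        self.knowledge = set()

    def read(self, sentence: str):
        # One naive extraction pattern: "X is a Y".
        for subj, obj in re.findall(r"(\w+) is a (\w+)", sentence):
            fact = (subj.lower(), "is_a", obj.lower())
            self.candidates[fact] += 1
            if self.candidates[fact] >= self.promote_at:
                self.knowledge.add(fact)

reader = TinyReader()
for page in ["Paris is a city.", "A city is a settlement.", "Paris is a city in France."]:
    reader.read(page)
print(reader.knowledge)  # {('paris', 'is_a', 'city')} -- seen twice, so promoted
```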

Like next-generation robotics, improved AI will lead to significant productivity advances as machines take over – and even perform better – at certain tasks than humans. There is substantial evidence that self-driving cars will reduce collisions, and resulting deaths and injuries, from road transport, as machines avoid human errors, lapses in concentration and defects in sight, among other problems. Intelligent machines, having faster access to a much larger store of information, and able to respond without human emotional biases, might also perform better than medical professionals in diagnosing diseases. The Watson system is now being deployed in oncology to assist in diagnosis and personalized, evidence-based treatment options for cancer patients.

Long the stuff of dystopian sci-fi nightmares, AI clearly comes with risks – the most obvious being that super-intelligent machines might one day overcome and enslave humans. This risk, while still decades away, is taken increasingly seriously by experts, many of whom signed an open letter coordinated by the Future of Life Institute in January 2015 to direct the future of AI away from potential pitfalls. More prosaically, economic changes prompted by intelligent computers replacing human workers may exacerbate social inequalities and threaten existing jobs. For example, automated drones may replace most human delivery drivers, and self-driven short-hire vehicles could make taxis increasingly redundant.
On the other hand, emergent AI may make attributes that are still exclusively human – creativity, emotions, interpersonal relationships – more clearly valued. As machines grow in human intelligence, this technology will increasingly challenge our view of what it means to be human, as well as the risks and benefits posed by the rapidly closing gap between man and machine.

7. Distributed manufacturing
The factory of the future is online – and on your doorstep

Distributed manufacturing turns on its head the way we make and distribute products. In traditional manufacturing, raw materials are brought together, assembled and fabricated in large centralized factories into identical finished products that are then distributed to the customer. In distributed manufacturing, the raw materials and methods of fabrication are decentralized, and the final product is manufactured very close to the final customer.
In essence, the idea of distributed manufacturing is to replace as much of the material supply chain as possible with digital information. To manufacture a chair, for example, rather than sourcing wood and fabricating it into chairs in a central factory, digital plans for cutting the parts of a chair can be distributed to local manufacturing hubs using computerized cutting tools known as CNC routers. Parts can then be assembled by the consumer or by local fabrication workshops that can turn them into finished products. One company already using this model is the US furniture company AtFAB.
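
A minimal sketch of the “ship information, not products” idea: a chair design expressed as pure data that any local hub could turn into a cut list for its CNC router. All part names and dimensions here are invented for illustration:

```python
# A digital furniture plan: pure data that can be sent anywhere,
# then cut locally. All parts and dimensions are invented.
CHAIR_PLAN = {
    "seat":     {"w_mm": 420, "h_mm": 420, "count": 1},
    "backrest": {"w_mm": 420, "h_mm": 500, "count": 1},
    "leg":      {"w_mm":  60, "h_mm": 450, "count": 4},
}

def cut_list(plan: dict, sheet_w_mm: int, sheet_h_mm: int):
    """Check each part fits the local hub's stock sheet and emit a cut list."""
    jobs = []
    for name, part in plan.items():
        if part["w_mm"] > sheet_w_mm or part["h_mm"] > sheet_h_mm:
            raise ValueError(f"{name} does not fit the local stock sheet")
        jobs += [f"cut {name} {part['w_mm']}x{part['h_mm']} mm"] * part["count"]
    return jobs

# A standard plywood sheet size, locally sourced.
for job in cut_list(CHAIR_PLAN, sheet_w_mm=1220, sheet_h_mm=2440):
    print(job)
```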

Current uses of distributed manufacturing rely heavily on the DIY “maker movement”, in which enthusiasts use their own local 3D printers and make products out of local materials. There are elements of open-source thinking here, in that consumers can customize products to their own needs and preferences. Instead of being centrally driven, the creative design element can be more crowdsourced; products may take on an evolutionary character as more people get involved in visualizing and producing them.
Distributed manufacturing is expected to enable a more efficient use of resources, with less wasted capacity in centralized factories. It also lowers the barriers to market entry by reducing the amount of capital required to build the first prototypes and products. Importantly, it should reduce the overall environmental impact of manufacturing: digital information is shipped over the web rather than physical products over roads or rails, or on ships; and raw materials are sourced locally, further reducing the amount of energy required for transportation.
If it becomes more widespread, distributed manufacturing will disrupt traditional labour markets and the economics of traditional manufacturing. It does pose risks: it may be more difficult to regulate and control remotely manufactured medical devices, for example, while products such as weapons may be illegal or dangerous. Not everything can be made via distributed manufacturing, and traditional manufacturing and supply chains will still have to be maintained for many of the most important and complex consumer goods.

Distributed manufacturing may encourage broader diversity in objects that are today standardized, such as smartphones and automobiles. Scale is no object: one UK company, Facit Homes, uses personalized designs and 3D printing to create customized houses to suit the consumer. Product features will evolve to serve different markets and geographies, and there will be a rapid proliferation of goods and services to regions of the world not currently well served by traditional manufacturing.

8. ‘Sense and avoid’ drones
Flying robots to check power lines or deliver emergency aid

Unmanned aerial vehicles, or drones, have become an important and controversial part of military capacity in recent years. They are also used in agriculture, for filming and multiple other applications that require cheap and extensive aerial surveillance. But so far all these drones have had human pilots; the difference is that their pilots are on the ground and fly the aircraft remotely.
The next step with drone technology is to develop machines that fly themselves, opening them up to a wider range of applications. For this to happen, drones must be able to sense and respond to their local environment, altering their height and flying trajectory in order to avoid colliding with other objects in their path. In nature, birds, fish and insects can all congregate in swarms, each animal responding to its neighbour almost instantaneously to allow the swarm to fly or swim as a single unit. Drones can emulate this.
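
A toy fragment of the separation rule behind such swarming (a piece of the classic “boids” model, not any vendor’s actual avoidance system): each drone steers away from neighbours that come within a minimum distance:

```python
import math

def separation_vector(me, neighbours, min_dist=5.0):
    """Steer away from any neighbour closer than min_dist.
    Positions are (x, y, z) tuples; returns an avoidance vector."""
    ax = ay = az = 0.0
    for n in neighbours:
        dx, dy, dz = me[0] - n[0], me[1] - n[1], me[2] - n[2]
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if 0 < d < min_dist:
            # Push away, weighted more strongly the closer the neighbour.
            w = (min_dist - d) / d
            ax, ay, az = ax + dx * w, ay + dy * w, az + dz * w
    return (ax, ay, az)

# Only the neighbour 3 m away contributes; it pushes us away along -x.
print(separation_vector((0, 0, 10), [(3, 0, 10), (0, -20, 10)]))
```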

With reliable autonomy and collision avoidance, drones can begin to take on tasks too dangerous or remote for humans to carry out: checking electric power lines, for example, or delivering medical supplies in an emergency. Drone delivery machines will be able to find the best route to their destination, and take into account other flying vehicles and obstacles. In agriculture, autonomous drones can collect and process vast amounts of visual data from the air, allowing precise and efficient use of inputs such as fertilizer and irrigation.
In January 2015, Intel and Ascending Technologies showcased prototype multi-copter drones that could navigate an on-stage obstacle course and automatically avoid people who walked into their path. The machines use Intel’s RealSense camera module, which weighs just 8 g and is less than 4 mm thick. This level of collision avoidance will usher in a future of shared airspace, with many drones flying in proximity to humans and operating in and near the built environment to perform a multitude of tasks. Drones are essentially robots operating in three, rather than two, dimensions; advances in next-generation robotics technology will accelerate this trend.

Flying vehicles will never be risk-free, whether operated by humans or as intelligent machines. For widespread adoption, sense and avoid drones must be able to operate reliably in the most difficult conditions: at night, in blizzards or dust storms. Unlike our current digital mobile devices (which are actually immobile, since we have to carry them around), drones will be transformational as they are self-mobile and have the capacity of flying in the three-dimensional world that is beyond our direct human reach. Once ubiquitous, they will vastly expand our presence, productivity and human experience.

9. Neuromorphic technology
Computer chips that mimic the human brain

Even today’s best supercomputers cannot rival the sophistication of the human brain. Computers are linear, moving data back and forth between memory chips and a central processor over a high-speed backbone. The brain, on the other hand, is fully interconnected, with logic and memory intimately cross-linked at billions of times the density and diversity of that found in a modern computer. Neuromorphic chips aim to process information in a fundamentally different way from traditional hardware, mimicking the brain’s architecture to deliver a huge increase in a computer’s thinking and responding power.
Miniaturization has delivered massive increases in conventional computing power over the years, but the bottleneck of shifting data constantly between stored memory and central processors uses large amounts of energy and creates unwanted heat, limiting further improvements. In contrast, neuromorphic chips can be more energy efficient and powerful, combining data-storage and data-processing components into the same interconnected modules. In this sense, the system copies the networked neurons that, in their billions, make up the human brain.
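
A textbook abstraction of this co-located style of computation (not IBM’s TrueNorth design) is the leaky integrate-and-fire neuron, which keeps its own state and computes on incoming events in the same place:

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron: state (membrane potential) and
    computation live together, as in neuromorphic hardware."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, weighted_input: float) -> bool:
        # Old charge decays (leak), new input accumulates.
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # emit a spike
        return False

neuron = LIFNeuron()
for t, x in enumerate([0.3, 0.3, 0.3, 0.0, 0.9, 0.0]):
    if neuron.step(x):
        print(f"spike at t={t}")  # fires at t=4, once enough charge accumulates
```
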
Neuromorphic technology will be the next stage in powerful computing, enabling vastly more rapid processing of data and a better capacity for machine learning. IBM’s million-neuron TrueNorth chip, revealed in prototype in August 2014, has a power efficiency for certain tasks that is hundreds of times superior to a conventional CPU (Central Processing Unit), and more comparable for the first time to the human cortex. With vastly more compute power available for far less energy and volume, neuromorphic chips should allow more intelligent small-scale machines to drive the next stage in miniaturization and artificial intelligence.

Potential applications include: drones better able to process and respond to visual cues, much more powerful and intelligent cameras and smartphones, and data-crunching on a scale that may help unlock the secrets of financial markets or climate forecasting. Computers will be able to anticipate and learn, rather than merely respond in pre-programmed ways.

10. Digital genome
Healthcare for an age when your genetic code is on a USB stick

While the first sequencing of the 3.2 billion base pairs of DNA that make up the human genome took many years and cost tens of millions of dollars, today your genome can be sequenced and digitized in minutes and at the cost of only a few hundred dollars. The results can be delivered to your laptop on a USB stick and easily shared via the internet. This ability to rapidly and cheaply determine our individual unique genetic make-up promises a revolution in more personalized and effective healthcare.
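
The “genome on a USB stick” claim is easy to check: with four possible bases, each base pair needs only two bits, so a raw genome is well under a gigabyte (real sequencing formats carry quality scores and metadata, and are larger):

```python
BASE_PAIRS = 3_200_000_000  # ~3.2 billion base pairs in the human genome
BITS_PER_BASE = 2           # A, C, G, T -> 2 bits each

raw_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"Raw genome size: {raw_bytes / 1e9:.1f} GB")  # ~0.8 GB: fits on a USB stick
```
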
Many of our most intractable health challenges, from heart disease to cancer, have a genetic component. Indeed, cancer is best described as a disease of the genome. With digitization, doctors will be able to make decisions about a patient’s cancer treatment informed by a tumour’s genetic make-up. This new knowledge is also making precision medicine a reality by enabling the development of highly targeted therapies that offer the potential for improved treatment outcomes, especially for patients battling cancer.

Like all personal information, a person’s digital genome will need to be safeguarded for privacy reasons. Personal genomic profiling has already raised challenges, with regard to how people respond to a clearer understanding of their risk of genetic disease, and how others – such as employers or insurance companies – might want to access and use the information. However, the benefits are likely to outweigh the risks, because individualized treatments and targeted therapies can be developed with the potential to be applied across all the many diseases that are driven or assisted by changes in DNA.

The Artificial Intelligence Race: U.S., China and Russia

Ecatarina Garcia

Artificial intelligence (AI), of which machine learning is a subset, has the potential to drastically impact a nation’s national security in various ways. Dubbed the next space race, the race for AI dominance is both intense and necessary for nations that want to remain pre-eminent in an evolving global environment. As technology develops, so do the volume of digital information and the ability to operate at optimal levels by taking advantage of that data. Furthermore, the proper use and implementation of AI can help a nation achieve information, economic and military superiority – all ingredients of a prominent place on the global stage. According to Paul Scharre, “AI today is a very powerful technology. Many people compare it to a new industrial revolution in its capacity to change things. It is poised to change not only the way we think about productivity but also elements of national power.” AI is not only the future of economic and commercial power; it also has various military applications relevant to the national security of every aspiring global power.

While the U.S. is the birthplace of AI, other states have taken research and development seriously, given the potential global gains. Three of the world’s biggest players – the U.S., Russia and China – are locked in a non-kinetic battle to outpace one another in AI development and implementation. Moreover, because of the considerable advantages artificial intelligence can provide, it is now a race between these players to master AI and integrate the capability into military applications in order to assert power and influence globally. As AI becomes more ubiquitous, it is no longer next-generation science fiction, and its potential to provide strategic advantage is clear. To capitalize on that advantage, the U.S. is seeking a deliberate strategy to position itself permanently at the top tier of AI implementation.

Problem

The current reality is that near-peer competitors are leading in AI or closing the gap with the U.S. Allen and Husain note that the problem is exacerbated by the absence of AI from the national agenda, diminishing science and technology funding, and the public availability of AI research. The U.S. has enjoyed a technological edge that, at times, enabled military superiority over near-peers. However, there is an argument that the U.S. is losing its grasp on that advantage. As Flournoy and Lyons indicate, China and Russia are investing massively in research and development efforts to produce technologies and capabilities “specifically designed to blunt U.S. strengths and exploit U.S. vulnerabilities.”

The technological capabilities once unique to the U.S. have now proliferated across both nation-states and non-state actors. As Allen and Chan indicate, “initially, technological progress will deliver the greatest advantages to large, well-funded, and technologically sophisticated militaries. As prices fall, states with budget-constrained and less technologically-advanced militaries will adopt the technology, as will non-state actors.” As an example, the American use of unmanned aerial vehicles in Iraq and Afghanistan provided a technological advantage in the battle space, but as prices for the technology drop, non-state actors like the Islamic State are making noteworthy use of remotely controlled aerial drones in their military operations. While the above is part of the issue, more concerning is the fact that the Department of Defense (DoD) and the U.S. defense industry are no longer the epicenter of next-generation advancements. Rather, the most innovative development is occurring within private commercial companies. Unlike China and Russia, the U.S. government cannot completely direct the activities of industry for purely governmental or military purposes. This has certainly been a major factor in the closing of the gap in the AI race.

Furthermore, the U.S. is falling behind China in the quantity of studies produced on AI, deep learning and big data. For example, of the AI-related papers submitted to the International Joint Conferences on Artificial Intelligence (IJCAI) in 2017, China accounted for a leading 37 percent, whereas the U.S. took third position at only 18 percent. While quantity is not everything (U.S. researchers were awarded the most awards at IJCAI 2017, for example), China’s industry innovations were formally marked as “astonishing.” For these reasons, there are various strategic challenges the U.S. must overcome to maintain its lead in the AI race.

Perspectives

Each of the three nations has taken a divergent perspective on how to approach and define this problem. One common theme among them, however, is an understanding of AI’s importance as an instrument of international competitiveness as well as a matter of national security. Sadler writes, “failure to adapt and lead in this new reality risks the U.S. ability to effectively respond and control the future battlefield.” However, the U.S. can no longer “spend its way ahead of these challenges.” The U.S. has developed what is termed the third offset, which Louth and Taylor define as a radical policy shift to reform the way the U.S. delivers defense capabilities to meet the perceived challenges of a fundamentally changed threat environment. The continuous development and improvement of AI requires a comprehensive plan and partnership with industry and academia. To frame this issue, two DoD-directed studies, the Defense Science Board Summer Study on Autonomy and the Long-Range Research and Development Planning Program, highlighted five critical areas for improvement: (1) autonomous deep-learning systems, (2) human-machine collaboration, (3) assisted human operations, (4) advanced human-machine combat teaming, and (5) network-enabled semi-autonomous weapons.

Similar to the U.S., Russian leadership has stressed the importance of AI on the modern battlefield. Russian President Vladimir Putin commented, “Whoever becomes the leader in this sphere (AI) will become the ruler of the world.” This is not merely rhetoric: Russia’s Chief of the General Staff, General Valery Gerasimov, has likewise predicted “a future battlefield populated with learning machines.” Following the Russian-Georgian war, Russia developed a comprehensive military modernization plan, and a mainstay of the 2008 plan was the development of autonomous military technology and weapon systems. According to Renz, “The achievements of the 2008 modernization program have been well-documented and were demonstrated during the conflicts in Ukraine and Syria.”

China, understanding the global impact of this issue, has dedicated research, money and education to a comprehensive state-sponsored plan. In July 2017, China’s State Council published a document entitled “New Generation Artificial Intelligence Development Plan.” It takes a top-down approach, explicitly mapping out the nation’s development of AI, with goals reaching all the way to 2030. Chinese leadership also underlines this priority in stating the necessity for AI development:

AI has become a new focus of international competition. AI is a strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security; intensifying the introduction of plans and strategies for this core technology, top talent, standards and regulations, etc.; and trying to seize the initiative in the new round of international science and technology competition. (China’s State Council 2017).

The plan addresses everything from building basic AI theory to partnerships with industry to fostering educational programs and building an AI-savvy society.

Recommendations

Recommendations to foster the U.S.’s AI advancement include focusing efforts on further proliferating Science, Technology, Engineering and Math (STEM) programs to develop the next generation of developers. This mirrors China’s AI development plan, which calls to “accelerate the training and gathering of high-end AI talent.” That lofty goal entails sub-steps, one of which is to construct an AI academic discipline. While there are STEM programs in the U.S., according to the U.S. Department of Education, “The United States is falling behind internationally, ranking 29th in math and 22nd in science among industrialized nations.” To maintain the top position in AI, the U.S. must continue to develop and attract top engineers and scientists. This requires both a deliberate plan for academic programs and the funding and incentives to develop and maintain those programs across U.S. institutions. Perhaps most importantly, the United States needs a strategy to entice more top American students to invest their time and attention in this proposed new discipline. Chinese and Russian students easily outpace American students in this area, especially in sheer numbers.

Additionally, the U.S. must research and capitalize on the dual-use capabilities of AI. Leading companies such as Google and IBM have made enormous headway in the development of algorithms and machine learning, and the Department of Defense should leverage these commercial advances to determine relevant defense applications. However, this partnership with industry must also take account of the inherent national security risks that AI development can present, introducing a regulatory role for commercial AI development. The role of the U.S. government with the AI industry therefore cannot be merely that of a consumer, but also that of a regulatory agent. The danger, of course, is that this effort to honor the principles of ethical and transparent development will not be mirrored by the competitor nations of Russia and China.

China’s large population and lax data-protection laws present a challenge the U.S. must develop innovative ways to overcome in machine learning and artificial intelligence: that population creates a larger pool of people to develop as engineers, and generates a massive volume of data to glean from its internet users. Part of the solution is investment. A White House report on AI indicated that “the entire U.S. government spent roughly $1.1 billion on unclassified AI research and development in 2015, while annual U.S. government spending on mathematics and computer science R&D is $3 billion.” If the U.S. government considers AI an instrument of national security, then it requires financial backing comparable to other fifth-generation weapon systems. Furthermore, innovative programs such as the DoD’s Project Maven must become a mainstay.

Project Maven, a pilot program implemented in April 2017, was mandated to produce algorithms to cope with big data and apply machine learning to eliminate the manual human burden of watching full-motion video feeds. The project was expected to deliver algorithms to the battlefield by December 2017 and involved partnership with four unnamed startup companies. The U.S. must implement more programs like this that encourage partnership with industry to develop, or redesign, current technology for military applications. To maintain its technological advantage far into the future, the U.S. must facilitate expansive STEM programs, capitalize on the dual use of some AI technologies, provide fiscal support for AI research and development, and implement expansive, innovative partnership programs between industry and the defense sector. Unfortunately, at the moment, all of these measures are being pursued and funded only partially. Meanwhile, countries like Russia and China appear more successful at developing their own versions, unencumbered by ‘obstacles’ like democracy, the rule of law and unfettered free-market competition. The AI race is upon us, and the future seems to be a wild one indeed.
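
A schematic sketch of the kind of triage such algorithms perform, purely for illustration (the scoring model below is a stub, and nothing here reflects Project Maven’s actual design): score each frame of a feed and surface only those worth an analyst’s attention:

```python
from typing import Callable, Iterable

def triage_frames(frames: Iterable, score: Callable, threshold: float = 0.8):
    """Yield (index, score) for frames a model flags as worth human review,
    relieving analysts of watching the full feed."""
    for i, frame in enumerate(frames):
        s = score(frame)  # stand-in for a trained vision model
        if s >= threshold:
            yield i, s

# Stub "model": in practice this would be a trained object detector.
fake_scores = [0.1, 0.05, 0.92, 0.4, 0.88, 0.2]
flagged = list(triage_frames(fake_scores, score=lambda f: f))
print(flagged)  # [(2, 0.92), (4, 0.88)] -- only these frames reach an analyst
```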

References

Allen, Greg, and Taniel Chan. “Artificial Intelligence and National Security.” Publication. Belfer Center for Science and International Affairs, Harvard University. July 2017. Accessed April 9, 2018. https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf

Allen, John R., and Amir Husain. “The Next Space Race is Artificial Intelligence.” Foreign Policy. November 03, 2017. Accessed April 09, 2018. http://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/.

China. State Council. Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan. July 20, 2017. Translated by Rogier Creemers, Graham Webster, Paul Triolo and Elsa Kania.

Doubleday, Justin. 2017. “‘Project Maven’ Sending First FMV Algorithms to Warfighters in December.” Inside the Pentagon’s Inside the Army 29 (44). Accessed April 1, 2018. https://search-proquest-com.ezproxy2.apus.edu/docview/1960494552?accountid=8289.

Flournoy, Michèle A., and Robert P. Lyons. “Sustaining and Enhancing the US Military’s Technology Edge.” Strategic Studies Quarterly 10, no. 2 (2016): 3-13. Accessed April 12, 2018. http://www.jstor.org/stable/26271502.

Gams, Matjaz. 2017. “Editor-in-Chief’s Introduction to the Special Issue on ‘Superintelligence’, AI and an Overview of IJCAI 2017.” Informatica 41 (4): 383-386. Accessed April 14, 2018.

Louth, John, and Trevor Taylor. 2016. “The US Third Offset Strategy.” RUSI Journal 161 (3): 66-71. DOI: 10.1080/03071847.2016.1193360.

Sadler, Brent D. 2016. “Fast Followers, Learning Machines, and the Third Offset Strategy.” JFQ: Joint Force Quarterly no. 83: 13-18. Accessed April 13, 2018. Academic Search Premier, EBSCOhost.

Scharre, Paul, and SSQ. “Highlighting Artificial Intelligence: An Interview with Paul Scharre, Director, Technology and National Security Program, Center for a New American Security, Conducted 26 September 2017.” Strategic Studies Quarterly 11, no. 4 (2017): 15-22. Accessed April 10, 2018. http://www.jstor.org/stable/26271632.

“Science, Technology, Engineering and Math: Education for Global Leadership.” Science, Technology, Engineering and Math: Education for Global Leadership. U.S. Department of Education. Accessed April 15, 2018. https://www.ed.gov/stem.

Global anxiety deepens over online data and privacy protection

MD Staff

Internet users worldwide are becoming more worried about their privacy online and many question the protections offered by Internet and social media companies, a new United Nations survey has found.

This waning of confidence could imperil the spread of online shopping even as newcomers to the Internet may be especially vulnerable to abuses because they are unaware of the risks.

“Trust is essential for the successful expansion and use of e-commerce platforms and mobile payment systems in developing nations,” said Fen Osler Hampson, Director of Global Security and Politics at the Centre for International Governance Innovation (CIGI), a think tank that helped conduct the study.

The survey was carried out by CIGI and Ipsos, in collaboration with the UN Conference on Trade and Development (UNCTAD) and the Internet Society.

Users in large emerging economies expressed the most “trust” in Internet firms with nine in ten expressing such faith in China, India and Indonesia and more than eight in ten doing so in Pakistan and Mexico.

By contrast, fewer than 60 percent of consumers in Japan and Tunisia expressed such “trust.”

Privacy concerns

The evidence of mounting privacy concerns coincides with sharper public scrutiny of the protection policies of major Internet firms – over concerns fuelled by the revelation that a political data firm gained access to millions of Facebook users’ personal data without their consent.

“The survey underlines the importance of adopting and adapting policies to cope with the evolving digital economy,” said Shamika Sirimanne, Director of the Technology and Logistics Division at the UN agency, which deals with the economics of globalization.

“The challenge for policymakers is to deal holistically with a number of areas – from connectivity and payment solutions to skills and regulations,” she explained.

Is technology worth the cost? Yes and No

As e-commerce soars, there is also a general increase in the number of people using mobile payments and non-traditional means of paying for services, such as tapping a smartphone to board trains or scanning it to pay for a cup of coffee.

The use of smartphones to make cashless purchases is in fact far higher in many developing countries than it is in the United States and much of Europe, the study noted.

In addition, many people, especially in the developing world, expressed the view that new technology is “worth what it costs.”

At the same time, some users in developed countries expressed views to the contrary. Their main worry, the survey found, is that technology will result in the loss of employment.

The launch of the survey coincides with UNCTAD’s E-Commerce Week – the leading forum for governments, the private sector, development banks, academia and civil society to discuss the development opportunities and challenges of the evolving digital economy.

Embracing Technology is Key for the Jobs of Tomorrow in Latin America and the Caribbean

MD Staff

New technologies provide a pathway to poverty reduction and could usher in a wave of higher productivity and growth across Latin America and the Caribbean, according to a new World Bank report.

At a time of growing fears of a future where automation replaces employees, technological innovation could create more and better jobs in the coming years – for both skilled and unskilled workers in the region – the report, Jobs of Tomorrow: Technology, Productivity, and Prosperity in Latin America and the Caribbean, finds.

“We should adopt and promote technology and innovation to boost economic growth, poverty reduction and increase opportunities for all, rather than creating barriers,” said Jorge Familiar, World Bank Vice-President for Latin America and the Caribbean. “Better education and training will be key to ensure youth can take full advantage of the digital world and be prepared for the work of tomorrow.”

According to the report, Latin America and the Caribbean has lower rates of digital technology adoption than similar countries in the Organization for Economic Co-operation and Development (OECD), providing ample space to increase productivity. Barriers also often drive up the price of productivity-enhancing technology. For example, smartphones and tablets in some countries in the region are the most expensive in the world. Tariffs and taxes on technology may be holding back per capita GDP growth by more than 1 percentage point a year across the region.

“With more technology comes more productivity,” said report author Mark Dutz, World Bank Lead Economist of the Macroeconomics, Trade and Investment Global Practice. “Companies can lower variable costs, expand production, reach more markets, make more money and in the process create more and better jobs.”

Studies on Argentina, Brazil, Chile, Colombia and Mexico find that lower-skilled workers can, and often do, benefit from adopting digital technologies. In addition, technology can have a strong impact on worker mobility, making it easier for job seekers to find information about job opportunities. It works both ways, making for better employer-employee matches.

Online trading platforms also level the playing field between small and large firms seeking access to international markets. International transactions over the Internet disproportionately benefit smaller firms – the same firms that tend to hire relatively more lower-skilled workers.

The report recommends some key areas where policies can help harness the productive power of this digital revolution. They include:

  • Making technologies available to local firms at globally competitive prices. In Colombia, for example, manufacturing firms that adopted high-speed internet saw a direct increase in demand for laborers and lower-skilled production workers as well as higher-skilled professional workers.
  • Ensuring that firms have incentives to invest in technology upgrading and exports rather than seeking protection from competition. Policies and institutions that encourage firms to compete lead them to invest in improving their product quality and lowering costs and prices rather than investing in obtaining government privileges. Firms can also benefit from adopting better management practices to increase production and distribution – an area with huge potential in the region.
  • Educating workers to prepare them for the jobs of tomorrow that will demand new, more sophisticated skills. In Brazil, for instance, more technology-intensive industries increasingly rely on employees to do more cognitive and analytical tasks in which communication and interpersonal skills are in particularly high demand.

Turning away from technology because of fears about technological change would be a costly mistake. New technologies can and should be embraced to support shared prosperity across Latin America and the Caribbean, the report concludes.
