Artificial intelligence (AI), the field of which machine learning is a subset, has the potential to drastically impact national security in various ways. Dubbed the next space race, the race for AI dominance is both intense and necessary for nations seeking to remain preeminent in an evolving global environment. As technology develops, so does the volume of digital information and the ability to operate at optimal levels by taking advantage of this data. Furthermore, the proper use and implementation of AI can help a nation achieve information, economic, and military superiority – all ingredients of maintaining a prominent place on the global stage. According to Paul Scharre, “AI today is a very powerful technology. Many people compare it to a new industrial revolution in its capacity to change things. It is poised to change not only the way we think about productivity but also elements of national power.” AI is not only the future of economic and commercial power; it also has various military applications bearing on the national security of every aspiring global power.
While the U.S. is the birthplace of AI, other states have taken a serious approach to research and development given the potential global gains. Three of the world’s biggest players – the U.S., Russia, and China – are entrenched in a non-kinetic battle to outpace one another in AI development and implementation. Given the considerable advantages artificial intelligence can provide, the race among these players is to master AI and integrate the capability into military applications in order to assert power and influence globally. As AI becomes more ubiquitous, it is no longer a next-generation conceit of science fiction; its potential to provide strategic advantage is clear. Thus, to capitalize on that potential, the U.S. is seeking to develop a deliberate strategy to position itself permanently at the top tier of AI implementation.
The current reality is that near-peer competitors are leading or closing the gap with the U.S. Of note, Allen and Husain indicate the problem is exacerbated by the absence of AI from the national agenda, diminishing science and technology funding, and the public availability of AI research. The U.S. has enjoyed a technological edge that, at times, enabled military superiority over near-peers. However, there is an argument that the U.S. is losing its grasp on that advantage. As Flournoy and Lyons indicate, China and Russia are investing massively in research and development efforts to produce technologies and capabilities “specifically designed to blunt U.S. strengths and exploit U.S. vulnerabilities.”
The technological capabilities once unique to the U.S. have now proliferated across both nation-states and non-state actors. As Allen and Chan indicate, “initially, technological progress will deliver the greatest advantages to large, well-funded, and technologically sophisticated militaries. As prices fall, states with budget-constrained and less technologically-advanced militaries will adopt the technology, as will non-state actors.” As an example, the American use of unmanned aerial vehicles in Iraq and Afghanistan provided a technological advantage in the battle space. But as prices for this technology drop, non-state actors like the Islamic State are making noteworthy use of remotely controlled aerial drones in their military operations. While the foregoing is part of the issue, more concerning is the fact that the Department of Defense (DoD) and the U.S. defense industry are no longer the epicenter of next-generation advancements. Rather, the most innovative development is occurring within private commercial companies. Unlike China and Russia, the U.S. government cannot completely direct the activities of industry for purely governmental or military purposes. This has certainly been a major factor in closing the gap in the AI race.
Furthermore, the U.S. is falling behind China in the quantity of studies produced on AI, deep learning, and big data. For example, of the AI-related papers submitted to the International Joint Conferences on Artificial Intelligence (IJCAI) in 2017, China accounted for a leading 37 percent, whereas the U.S. took third position at only 18 percent. While quantity is not everything (U.S. researchers won the most awards at IJCAI 2017, for example), China’s industry innovations were formally marked as “astonishing.” For these reasons, there are various strategic challenges the U.S. must overcome to maintain its lead in the AI race.
Each of the three nations has taken a divergent perspective on how to approach and define this problem. However, one common theme among them is the understanding of AI’s importance as an instrument of international competitiveness as well as a matter of national security. Sadler writes, “failure to adapt and lead in this new reality risks the U.S. ability to effectively respond and control the future battlefield.” However, the U.S. can no longer “spend its way ahead of these challenges.” The U.S. has developed what is termed the third offset, which Louth and Taylor define as a policy shift – a radical strategy to reform the way the U.S. delivers defense capabilities to meet the perceived challenges of a fundamentally changed threat environment. The continuous development and improvement of AI requires a comprehensive plan and partnership with industry and academia. To frame this issue, two DoD-directed studies, the Defense Science Board Summer Study on Autonomy and the Long-Range Research and Development Planning Program, highlighted five critical areas for improvement: (1) autonomous deep-learning systems, (2) human-machine collaboration, (3) assisted human operations, (4) advanced human-machine combat teaming, and (5) network-enabled semi-autonomous weapons.
Similar to the U.S., Russian leadership has stated the importance of AI on the modern battlefield. Russian President Vladimir Putin commented, “Whoever becomes the leader in this sphere (AI) will become the ruler of the world.” Not mere rhetoric: Russia’s Chief of General Staff, General Valery Gerasimov, has likewise predicted “a future battlefield populated with learning machines.” Following the Russian-Georgian war, Russia developed a comprehensive military modernization plan. Of note, a mainstay of the 2008 modernization plan was the development of autonomous military technology and weapon systems. According to Renz, “The achievements of the 2008 modernization program have been well-documented and were demonstrated during the conflicts in Ukraine and Syria.”
China, understanding the global impact of this issue, has dedicated research, money, and education to a comprehensive state-sponsored plan. China’s State Council published a document in July 2017 entitled “New Generation Artificial Intelligence Development Plan.” It lays out a top-down approach that explicitly maps out the nation’s development of AI, including goals reaching all the way to 2030. Chinese leadership also highlights this priority as it indicates the necessity for AI development:
AI has become a new focus of international competition. AI is a strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security; intensifying the introduction of plans and strategies for this core technology, top talent, standards and regulations, etc.; and trying to seize the initiative in the new round of international science and technology competition. (China’s State Council 2017).
The plan addresses everything from building basic AI theory to partnerships with industry to fostering educational programs and building an AI-savvy society.
Recommendations to foster the U.S.’s AI advancement include focusing efforts on further proliferating Science, Technology, Engineering and Math (STEM) programs to develop the next generation of developers. This is similar to China’s AI development plan, which calls to “accelerate the training and gathering of high-end AI talent.” This lofty goal entails sub-steps, one of which is to construct an AI academic discipline. While there are STEM programs in the U.S., according to the U.S. Department of Education, “The United States is falling behind internationally, ranking 29th in math and 22nd in science among industrialized nations.” To maintain the top position in AI, the U.S. must continue to develop and attract the top engineers and scientists. This requires both a deliberate plan for academic programs as well as funding and incentives to develop and maintain these programs across U.S. institutions. Perhaps most importantly, the United States needs a strategy to entice more top American students to invest their time and attention in this proposed new discipline. Chinese and Russian students easily outpace American students in this area, especially in terms of pure numbers.
Additionally, the U.S. must research and capitalize on the dual-use capabilities of AI. Leading companies such as Google and IBM have made enormous headway in the development of algorithms and machine learning. The Department of Defense should leverage these commercial advances to determine relevant defense applications. However, this partnership with industry must also consider the inherent national security risks that AI development can present, thus introducing a regulatory role for commercial AI development. The role of the U.S. government with the AI industry, then, cannot be merely that of a consumer but also that of a regulatory agent. The dangerous risk, of course, is that this effort to honor the principles of ethical and transparent development will not be mirrored in the competitor nations of Russia and China.
China’s large population and lax data-protection laws present a challenge the U.S. must find innovative ways to overcome in machine learning and artificial intelligence: that population creates a deeper pool of prospective engineers and generates a massive volume of data to glean from internet users. Part of the solution is investment. A White House report on AI indicated, “the entire U.S. government spent roughly $1.1 billion on unclassified AI research and development in 2015, while annual U.S. government spending on mathematics and computer science R&D is $3 billion.” If the U.S. government considers AI an instrument of national security, then it requires financial backing comparable to other fifth-generation weapon systems. Furthermore, innovative programs such as the DoD’s Project Maven must become a mainstay.
Project Maven, a pilot program implemented in April 2017, was mandated to produce algorithms to manage big data and apply machine learning to relieve humans of the burden of manually watching full-motion video feeds. The project was expected to deliver algorithms to the battlefield by December 2018 and required partnership with four unnamed startup companies. The U.S. must implement more programs like this that incentivize partnership with industry to develop or redesign current technology for military applications. To maintain its technological advantage far into the future, the U.S. must facilitate expansive STEM programs, capitalize on the dual-use potential of AI technologies, provide fiscal support for AI research and development, and implement expansive, innovative partnership programs between industry and the defense sector. Unfortunately, at the moment, all of these efforts are being pursued and funded only partially. Meanwhile, countries like Russia and China seem to be more successful in developing their own versions, unencumbered by ‘obstacles’ like democracy, the rule of law, and unfettered free-market competition. The AI race is upon us, and the future promises to be a wild one indeed.
Allen, Greg, and Taniel Chan. “Artificial Intelligence and National Security.” Publication. Belfer Center for Science and International Affairs, Harvard University. July 2017. Accessed April 9, 2018. https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf
Allen, John R., and Amir Husain. “The Next Space Race is Artificial Intelligence.” Foreign Policy. November 03, 2017. Accessed April 09, 2018. http://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/.
China. State Council. Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan. July 20, 2017. Translated by Rogier Creemers, Graham Webster, Paul Triolo, and Elsa Kania.
Doubleday, Justin. 2017. “‘Project Maven’ Sending First FMV Algorithms to Warfighters in December.” Inside the Pentagon’s Inside the Army 29 (44). Accessed April 1, 2018. https://search-proquest-com.ezproxy2.apus.edu/docview/1960494552?accountid=8289.
Flournoy, Michèle A., and Robert P. Lyons. “Sustaining and Enhancing the US Military’s Technology Edge.” Strategic Studies Quarterly 10, no. 2 (2016): 3-13. Accessed April 12, 2018. http://www.jstor.org/stable/26271502.
Gams, Matjaz. 2017. “Editor-in-Chief’s Introduction to the Special Issue on ‘Superintelligence’, AI and an Overview of IJCAI 2017.” Informatica 41 (4): 383-386. Accessed April 14, 2018.
Louth, John, and Trevor Taylor. 2016. “The US Third Offset Strategy.” RUSI Journal 161 (3): 66-71. DOI: 10.1080/03071847.2016.1193360.
Sadler, Brent D. 2016. “Fast Followers, Learning Machines, and the Third Offset Strategy.” JFQ: Joint Force Quarterly no. 83: 13-18. Accessed April 13, 2018. Academic Search Premier, EBSCOhost.
Scharre, Paul, and SSQ. “Highlighting Artificial Intelligence: An Interview with Paul Scharre, Director, Technology and National Security Program, Center for a New American Security, Conducted 26 September 2017.” Strategic Studies Quarterly 11, no. 4 (2017): 15-22. Accessed April 10, 2018. http://www.jstor.org/stable/26271632.
“Science, Technology, Engineering and Math: Education for Global Leadership.” Science, Technology, Engineering and Math: Education for Global Leadership. U.S. Department of Education. Accessed April 15, 2018. https://www.ed.gov/stem.
Ten Ways the C-Suite Can Protect their Company against Cyberattack
Cyberattacks are among the top 10 global risks of highest concern for the next decade, with an estimated price tag of $90 trillion if cybersecurity efforts do not keep pace with technological change. While there is abundant guidance in the cybersecurity community, the application of prescribed actions continues to fall short of what is required to ensure effective defense against cyberattacks. The challenges created by accelerating technological innovation have reached new levels of complexity and scale; today, responsibility for cybersecurity in organizations is no longer one Chief Security Officer’s job – it involves everyone.
The Cybersecurity Guide for Leaders in Today’s Digital World was developed by the World Economic Forum Centre for Cybersecurity and several of its partners to assist the growing number of C-suite executives responsible for setting and implementing the strategy and governance of cybersecurity and resilience. The guide bridges the gap between leaders with and without technical backgrounds. Following almost one year of research, it outlines 10 tenets that describe how cyber resilience in the digital age can be formed through effective leadership and design.
“With effective cyber-risk management, business executives can achieve smarter, faster and more connected futures, driving business growth,” said Georges De Moura, Head of Industry Solutions, Centre for Cybersecurity, World Economic Forum. “From the steps necessary to think more like a business leader and develop better standards of cyber hygiene, through to the essential elements of crisis management, the report offers an excellent cybersecurity playbook for leaders in public and private sectors.”
“Practicing good cybersecurity is everyone’s responsibility, even if you don’t have the word “security” in your job title,” said Paige H. Adams, Global Chief Information Security Officer, Zurich Insurance Group. “This report provides a practical guide with ten basic tenets for business leaders to incorporate into their company’s day-to-day operations. Diligent application of these tenets and making them a part of your corporate culture will go a long way toward reducing risk and increasing cyber resilience.”
“The recommendation to foster internal and external partnerships is one of the most important, in my view,” said Sir Rob Wainwright, Senior Cyber Partner, Deloitte. “The dynamic nature of the threat, not least in terms of how it reflects the recent growth of an integrated criminal economy, calls on us to build a better global architecture of cyber cooperation. Such cooperation should include more effective platforms for information sharing within and across industries, releasing the benefits of data integration and analytics to build better levels of threat awareness and response capability for all.”
The Ten Tenets
1. Think Like a Business Leader – Cybersecurity leaders are business leaders first and foremost. They have to position themselves, their teams, and their operations as business enablers. Transforming cybersecurity from a support function into a business-enabling function requires a broader view and a stronger communication skill set than was required previously.
2. Foster Internal and External Partnerships – Cybersecurity is a team sport. Today, information security teams need to partner with many internal groups and develop a shared vision, objectives and KPIs to ensure that timelines are met while delivering a highly secure and usable product to customers.
3. Build and Practice Strong Cyber Hygiene – Five core security principles are crucial: a clear understanding of the data supply chain, a strong patching strategy, organization-wide authentication, a secure active directory of contacts, and encrypted critical business processes.
4. Protect Access to Mission-Critical Assets – Not all user access is created equal. It is essential to have strong processes and automated systems in place to ensure appropriate access rights and approval mechanisms.
5. Protect Your Email Domain Against Phishing – Email is the most common point of entry for cyber attackers, with the median company receiving over 90% of its detected malware via this channel. The guide highlights six ways to protect employees’ emails.
6. Apply a Zero-Trust Approach to Securing Your Supply Chain – The high velocity of new applications developed alongside the adoption of open source and cloud platforms is unprecedented. Security-by-design practices must be embedded in the full lifecycle of the project.
7. Prevent, Monitor and Respond to Cyber Threats – The question is not if, but when a significant breach will occur. How well a company manages this inevitability is ultimately critical. Threat intelligence teams should perform proactive hunts throughout the organization’s infrastructure and keep the detection teams up to date on the latest trends.
8. Develop and Practice a Comprehensive Crisis Management Plan – Many organizations focus primarily on how to prevent and defend while not focusing enough on institutionalizing the playbook of crisis management. The guide outlines 12 vital components any company’s crisis plan should incorporate.
9. Build a Robust Disaster Recovery Plan for Cyberattacks – A disaster recovery and continuity plan must be tailored to security incident scenarios to protect an organization from cyberattacks and to instruct on how to react in case of a data breach. Furthermore, it can reduce the amount of time it takes to identify breaches and restore critical services for the business.
10. Create a Culture of Cybersecurity – Keeping an organization secure is every employee’s job. Tailoring trainings, incentivizing employees, building elementary security knowledge and enforcing sanctions on repeat offenders could aid the development of a culture of cybersecurity.
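As a concrete illustration of tenet 5 above, many organizations start with domain-level email authentication. The DNS records below are a minimal sketch using placeholder domains and addresses; they are not drawn from the guide itself, whose six recommendations may differ:

```text
; Illustrative SPF and DMARC TXT records (placeholder domain and mailbox)
example.com.        IN TXT "v=spf1 include:_spf.mail-provider.example -all"
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The SPF record tells receiving servers which hosts may send mail for the domain, while the DMARC policy instructs them to quarantine failing messages and send aggregate reports to the listed mailbox.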
In the Fourth Industrial Revolution, all businesses are undergoing transformative digitalization of their industries that will open new markets. Cybersecurity leaders need to take a stronger and more strategic leadership role. Inherent to this new role is the imperative to move beyond the role of compliance monitors and enforcers.
Moving First on AI Has Competitive Advantages and Risks
Financial institutions that implement AI early have the most to gain from its use, but also face the largest risks. The often-opaque nature of AI decisions and related concerns of algorithmic bias, fiduciary duty, uncertainty, and more have left implementation of the most cutting-edge AI uses at a standstill. However, a newly released report from the World Economic Forum, Navigating Uncharted Waters, shows how financial services firms and regulators can overcome these risks.
Using AI responsibly is about more than mitigating risks; its use in financial services presents an opportunity to raise the ethical bar for the financial system as a whole. It also offers financial services firms a competitive edge over their peers and new market entrants.
“AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency,” says Matthew Blake, Head of Financial Services, World Economic Forum. “This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately.”
Across several dimensions, AI introduces new complexities to age-old challenges in the financial services industry, and the governance frameworks of the past will not adequately address these new concerns.
Explaining AI decisions
Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain. This uncertainty has left the implementation of cutting-edge AI tools at a standstill. The Forum offers a solution: evolve past “one-size-fits-all” governance ideas to specific transparency requirements that consider the AI use case in question.
For example, it is important to clearly and simply explain why a customer was rejected for a loan, which can significantly impact their life. It is less important to explain a back-office function whose only objective is to convert scans of various documents to text. For the latter, accuracy is more important than transparency, as the ability of this AI application to create harm is limited.
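The loan-rejection case above can be made concrete with a toy sketch. The model, features, and weights below are entirely hypothetical (not from the report); the point is that a simple linear credit score makes each feature's contribution inspectable, so a rejection can be explained in plain terms:

```python
# Hypothetical interpretable credit model: a linear score whose per-feature
# contributions double as "reason codes" for a rejected applicant.
WEIGHTS = {"income_k": 0.04, "debt_ratio": -2.5, "late_payments": -0.8}
BIAS = 0.5
THRESHOLD = 0.0  # scores below this threshold are rejected

def score(applicant):
    """Linear score: bias plus weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain_rejection(applicant):
    """Return the features that pulled the score down, worst first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    negatives = sorted((v, f) for f, v in contributions.items() if v < 0)
    return [f for v, f in negatives]

applicant = {"income_k": 40, "debt_ratio": 0.9, "late_payments": 3}
if score(applicant) < THRESHOLD:
    print("Declined; main factors:", explain_rejection(applicant))
```

An opaque deep model offers no such per-feature decomposition out of the box, which is why the report's use-case-specific transparency requirements matter most where decisions affect customers directly.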
Beyond “explainability”, the report explores new challenges surrounding bias and fairness, systemic risk, fiduciary duty, and collusion as they relate to the use of AI.
Bias and fairness
Algorithmic bias is another top concern for financial institutions, regulators, and customers surrounding the use of AI in financial services. AI’s unique ability to rapidly process new and different types of data raises the concern that AI systems may develop unintended biases over time; combined with their opaque nature, such biases could remain undetected. Despite these risks, AI also presents an opportunity to decrease unfair discrimination or exclusion, for example by analyzing alternative data that can be used to assess ‘thin-file’ customers whom traditional systems cannot evaluate due to a lack of information.
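One simple way to make "unintended bias" measurable is to compare outcomes across groups. The sketch below is a toy demographic-parity check; the decisions, group labels, and any tolerance are invented for illustration and are not drawn from the report:

```python
# Toy demographic-parity check: compare approval rates across two groups.
# The decision log is fabricated purely for illustration.
decisions = [  # (group, approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    """Fraction of applicants in the given group that were approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"parity gap: {gap:.2f}")  # flag for human review above a chosen tolerance
```

Running such checks continuously, rather than only at deployment, addresses the concern that biases accumulate silently as the model ingests new data.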
The widespread adoption of AI also has the potential to alter the dynamics of the interactions between human actors and machines in the financial system, creating new sources of systemic risk. As the volume and velocity of interactions grow through automated agents, emerging risks may become increasingly difficult to detect, spreading across various financial institutions, fintechs, large technology companies, and other market participants. These new dynamics will require supervisory authorities to reinvent themselves as hubs of system-wide intelligence, using AI themselves to supervise AI systems.
As AI systems take on an expanded set of tasks, they will increasingly interact with customers. As a result, fiduciary requirements to always act in the best interests of the customer may soon arise, raising the question of whether AI systems can be held “responsible” for their actions – and, if not, who should be held accountable.
Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication. This challenges the traditional regulatory constructs for detecting and prosecuting collusion and may require a revisiting of the existing legal frameworks.
“Using AI in financial services will require an openness to new ways of safeguarding the ecosystem, different from the tools of the past,” says Rob Galaski, Global Leader, Banking & Capital Markets, Deloitte Consulting. “To accelerate the pace of AI adoption in the industry, institutions need to take the lead in developing and proposing new frameworks that address new challenges, working with regulators along the way.”
For each of the concerns described above, the report outlines the key root causes of the issue, highlights the most pressing challenges, identifies how those challenges might be addressed through new tools and governance frameworks, and describes what opportunities might be unlocked by doing so.
The report was prepared in collaboration with Deloitte and follows five previous reports on financial innovation. The World Economic Forum will continue its work in Financial Services, with a particular focus on AI’s connections to other emerging technologies in its next phase of research through mid-2020.
US Blacklist of Chinese Surveillance Companies Creates Supply Chain Confusion
The United States Department of Commerce’s decision to blacklist 28 Chinese public safety organizations and commercial entities hit some of China’s most dominant vendors in the security industry. Of the eight commercial entities added to the blacklist, six are among China’s most successful digital forensics, facial recognition, and AI companies. However, the inclusion of two surveillance manufacturers – Dahua and Hikvision – could have a significant impact on the global market at large.
Putting geopolitics aside, Dahua’s and Hikvision’s positions within the overall global digital surveillance market make their blacklisting somewhat of a shock, with the immediate effects touching off significant questions among U.S. partners, end users, and supply chain partners.
Frost & Sullivan’s research finds that Hikvision and Dahua currently rank second and third in total sales in the $20.48 billion global surveillance market and are fast-tracking to become the top two vendors among IP surveillance camera manufacturers. Their rapid rise among IP surveillance camera providers came about due to both companies’ aggressive growth pipelines, significant product libraries of high-quality surveillance cameras and new imaging technologies, and low-cost pricing models that give customers greater affordability.
This is also not the first time that these two vendors have found themselves in the crosshairs of the U.S. government. In 2018, the U.S. initiated a ban on the sale and use of Hikvision and Dahua camera equipment within government-owned facilities, including the Department of Defense, military bases, and government-owned buildings. However, the vague language of the ban made it difficult for end users to determine whether they were just banned from new purchases of Dahua or Hikvision cameras or if they needed to completely rip-and-replace existing equipment with another brand. Systems integrators, distributors, and even technology partners themselves remained unsure of how they should handle the ban’s implications, only serving to sow confusion among U.S. customers.
In addition to confusion over how end users in the government space were to proceed regarding their Hikvision and Dahua equipment came the realization that both companies held significant customer share among commercial companies throughout the U.S. market—so where was the ban’s line being drawn for these entities? Were they to comply or not? If so, how? Again, these questions have remained unanswered since 2018.
Hikvision and Dahua each have built a strong presence within the U.S. market, despite the 2018 ban. Both companies are seen as regular participants in industry tradeshows and events, and remain active among industry partners throughout the surveillance ecosystem. Both companies have also attempted to work with the U.S. government to alleviate security concerns and draw clearer guidelines for their sales and distribution partners throughout the country. They even established regional operations centers and headquarters in the country.
While blacklisting does send a clearer message to end users, integrators, and distributors—for sales and usage of these companies’ technologies—remedies for future actions still remain unclear. When it comes to legacy Hikvision and Dahua cameras, the onus appears to be on end users and integrators to decide whether rip-and-replace strategies are the best way to comply with government rulings or to just leave the solutions in place and hope for the best.
As for the broader global impact of this action, it remains to be seen. While the 2018 ban prompted talk of similar bans in other regions, none ever materialized. Dahua and Hikvision maintained their strong market positioning, even achieving higher-than-average growth rates in the past year. Blacklisting does send a stronger message to global regulators, though, so market participants outside the U.S. will have to adopt a wait-and-see posture regarding how, if at all, they may need to prepare their own surveillance equipment supply chains for changes to come.