Science & Technology

A Brave New World without Work

Nikolay Markotkin

What’s the first thing that comes to mind when you think about the soon-to-come widespread introduction of robots and artificial intelligence (AI)? Endless queues of people waiting to get unemployment benefits? Skynet drones ploughing the sky over burnt-out slums? Or the opposite: idleness and equality provided by the labour of mechanical slaves? In all likelihood the reality will be less flashy, though that doesn’t mean we should ignore the social consequences of the technological changes taking place before our very eyes.

Revolution on the March

The Fourth Industrial Revolution, with its robotics, bio- and nanotechnologies, 3D printing, the Internet of Things, genetics, and artificial intelligence, is rapidly spreading across the world [1]. The coming technological changes will have direct consequences for a number of existing professions and promise at the very least to transform the labour market in developed countries.

The high speed of change (suffice it to say that 10 of the most popular professions of 2010 did not exist in 2004) makes it difficult to predict the impact on society. In this regard, the assessments of experts and international organizations range from optimistic to alarmist. However, even if we were to eliminate the most extreme case scenarios, we could still say with certainty that a fundamental restructuring of the global economy, comparable to the one that took place in the 18th–19th centuries during the First Industrial Revolution, awaits us in the foreseeable future.

According to the World Economic Forum (WEF) Future of Jobs report, 65% of today’s primary school students will have hitherto unheard-of professions. McKinsey came to the same conclusion, highlighting in their report that at the current level of technological development, 30% of the functions of 60% of professions can be automated. M. Osborne and C. Frey of Oxford University give an even more pessimistic forecast. According to their research, 47% of jobs in the US risk being automated within 20 years.

Who will robots replace?

What professions are at risk? First at risk is, of course, unskilled labour. The Osborne and Frey study found clerks, data entry workers, librarians, machine operators, plumbers, sales specialists, and equipment adjusters among others to be those most vulnerable.

According to WEF, from 2015 to 2020, job reductions will have the greatest effect on office professions (4.91%) and the manufacturing sector (1.63%). Employment in areas such as design, entertainment, construction, and sales should also decline by 1%. In turn, the most significant growth in jobs is predictably expected in the field of computer technology (3.21%), architectural and engineering specialties (2.71%), and management (just under 1%).

Predictably, professions related to transport risk automation in the medium term. The development of self-driving vehicles could radically change both the passenger and freight markets. In the US alone, 8.7 million people are employed in long-distance trucking. If you take into account all of the businesses connected to trucking (motels, roadside cafes, etc.), the number rises to 15 million, or about 10% of the country’s labour force. Reductions in passenger and public transport are likely to be even more significant. It is also probable that autonomous navigation will be introduced into sea freight in the near future. The development of artificial intelligence also promises hard times for lawyers, teachers, miners, middle managers, and journalists, among others.

On the whole, employment will gradually move from services to other sectors of the economy, many of which have yet to be created. Such a shift would confirm the revolutionary nature of the changes under way rather than mark anything unique. Before the First Industrial Revolution, over 70% of the population worked in agriculture, whereas nowadays the figure hovers around a few percent in developed countries. The share of those employed in manufacturing continued to grow until the mid-twentieth century, though as a result of the Digital Revolution it has now fallen to 24% in the EU and 19% in the US (27% in Russia). Meanwhile, although there are fewer workers, production volume continues to rise steadily. Now, it would appear, it is services’ turn to be automated.

The Golden Age of Engineers and Psychiatrists?

Professions associated with intellectual work or direct personal contact with clients are least likely to suffer in the short term. According to the study from Oxford University, professions least susceptible to automation include various jobs in medicine and psychology, as well as coaches, social workers, programmers, engineers, representatives of higher management and creative professionals.

In other words, those whose work requires a creative approach and is not limited to performing predictable routines will be best prepared for the new reality. If we were to speak of engineers in this regard, it would have to be clarified that design engineers are generally safe, while operating engineers, on the contrary, are at risk.

Three key factors are keeping automation away from the creative professions: to perform these tasks successfully, artificial intelligence would need intuition and the ability to manipulate physical objects (touch), as well as creative and social intelligence. Technology at its current level of development cannot solve these problems. However, as strong AI develops, the range of jobs available to it will invariably expand. It will push past the limits of automation achievable with existing technologies and make it possible for computers to take managerial decisions and even, perhaps, engage in creative activity. It therefore cannot be ruled out that in the medium or long term, machines will successfully replace writers and artists along with engineers and managers. Indeed, there are already precedents of AI successfully composing literary texts.

Thus, it is quite conceivable that the majority of the labour force will find itself back in school in the foreseeable future. The problem, however, is that no one really knows what to study. It has been estimated that as many as 85% of the professions that will be in demand in 2030 do not yet exist. Even in developed countries, education systems have yet to adapt to the new reality.

What will become of our country and of us?

Today, most researchers have little doubt that developed countries will, one way or another, successfully adapt to the coming changes (which does not rule out social tension and growing income inequality). New technologies could help create additional jobs to replace those lost, as happened not long ago with the rapid development of the Internet. It is assumed that the new professions will be more creative and better paid.

A new balance will gradually be established in the labour market. The nature of manufacturing will also change. The development of automation and 3D printing will make it possible to create efficient local production facilities focused on the specific needs of consumers. This will facilitate the return of part of production from developing countries to developed ones (so-called reshoring).

In turn, the consequences of automation could be much more negative for countries of the third world. The percentage of non-skilled jobs in developing countries decreased by 8% between 1995 and 2012. Reshoring could significantly accelerate this process in the short term. Since the proportion of people engaged in low-skilled work in low and middle-income countries is much higher, the growth of unemployment would threaten to become a major global problem. The situation would be further aggravated by the underdevelopment of labour protection institutions in these countries.

It must be noted that risks of this sort apply to Russia as well. Despite its citizens’ significantly higher level of education compared to developing countries, the Russian economy could hardly be called high-tech. A significant part of the working population is engaged in routine low-skilled labour, and productivity remains low: Russia lags significantly behind developed countries on this indicator (US productivity is more than double Russia’s) and, by some estimates, falls below the world average. What’s more, factory jobs are not the only ones at stake – an army of many millions of bureaucrats and clerks is also under threat of redundancy as a result of digitalization.

Another disaster waiting to happen to the Russian economy is related to outdated industry and the decline of domestic engineering. At present, institutions of higher education mainly produce operating engineers trained to maintain tools and machines. What’s more, even the limited innovative potential of Russian engineers is not needed by Russian industry.

Furthermore, it cannot be ruled out that in the near future Russia will launch a massive programme to introduce robotic automation and artificial intelligence, all the more so since this fits perfectly with the desire to modernize and digitalize the national economy repeatedly voiced by the Russian leadership. Because of the lack of a strong trade union movement and the prevalence of hybrid and grey forms of employment, labour automation could lead to much more severe social consequences in Russia than in Western countries. Finally, it is entirely possible that the catch-up nature of such modernization will result in Russia introducing more primitive technologies than those in more developed countries. Editor-in-Chief of Russia in Global Affairs magazine and RIAC Member Fyodor Lukyanov cleverly described a similar scenario in his article.

Saving the Rank and File

Ways to reduce the social consequences of labour automation have long been at the heart of discussions surrounding the Fourth Industrial Revolution and the development of AI. The robot tax is one measure being considered. Microsoft founder Bill Gates supports the idea and has proposed collecting income tax and social payments on robot labour to slow the pace of automation. “Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level,” he declared in an interview with the Internet publication Quartz. In his opinion, the funds received from such payments should be used by governments to create social security systems for those who have lost their jobs to automation.
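The arithmetic behind Gates’s proposal is straightforward: impute a wage to the robot’s output and levy the same charges on it that a human worker’s pay would attract. A minimal sketch in Python (both tax rates are invented placeholders, not figures from any actual tax code):

```python
# Sketch of the "robot tax" arithmetic described above.
# Both rates are hypothetical placeholders, not real policy figures.

INCOME_TAX_RATE = 0.20       # assumed income tax rate
SOCIAL_SECURITY_RATE = 0.08  # assumed social security rate

def levy_on_labour(value_of_work: float) -> float:
    """Tax collected on a given value of work, whoever performs it."""
    return round(value_of_work * (INCOME_TAX_RATE + SOCIAL_SECURITY_RATE), 2)

# A human doing $50,000 of factory work owes the same levy that,
# under the proposal, a robot doing identical work would owe.
human_levy = levy_on_labour(50_000)
robot_levy = levy_on_labour(50_000)
assert human_levy == robot_levy
```

Under these assumed rates the levy is $14,000 either way; the policy debate is precisely over whether the robot’s side of that equality should be collected at all.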

The first country to resort to this measure was South Korea, which introduced an indirect tax on robots in August 2017. The European Union also discussed introducing a similar tax, though the clause proposed by Progressive Alliance of Socialists and Democrats Representative Mady Delvaux was rejected by the European Parliament on the grounds that it could slow the development of innovations. At the same time, the parliament approved the resolution itself, which calls for granting robots the status of legal entities.

A universal basic income could also soften the effects of rising unemployment and inequality. Elon Musk supports the initiative, together with numerous other businessmen and experts. At the same time, the absence of work through which to fulfil one’s potential poses a significant social risk. Widespread unemployment, even in the absence of poverty, can contribute to the marginalization of the population and the growth of crime: the first jobs to go are those of low-skilled employees, who are unlikely to spend all of their newfound free time on yoga and self-improvement.

Possible ways of mitigating the consequences of the upcoming restructuring of the world economy include a change in the nature of employment. Technological changes and expanding access to the Internet allow more and more people to work remotely. Thus, some of those who lose their jobs will be able to find themselves a place in the new economy without having to change their place of residence.

Some believe that automation will increase and not reduce the total number of jobs by accelerating the pace of economic development over the long term. Amazon is one example of how automation has not resulted in staff reduction. While increasing the number of robots employed in its warehouses from 1,400 to 45,000, it has managed to retain the same number of jobs. It has also been noted that automation is becoming increasingly necessary due to a decrease in the working-age population (primarily in developed countries).

It should be noted that these measures are all limited in nature and hardly correspond to the scale of changes that stand to be swept in by the Fourth Industrial Revolution. To avoid mass unemployment and social instability, governments must develop comprehensive short-term strategies for adapting the population to the new reality. It is very likely that new programs will be needed to retrain citizens en masse for new professions.

Russia is no exception here; on the contrary, it is of vital importance that our country reform its education system in the near future, especially as regards technical education. It is equally important to develop targeted support programs for those parts of the population that are most vulnerable to automation and digitalization. Moreover, it would seem advisable to make use of existing experience to mitigate the social consequences of factory closures in Russian single-industry towns. If we continue to move as sluggishly as we are moving at present, we risk turning into a kind of reserve for yesterday’s technologies with a population becoming ever more rapidly marginalized.

First published in our partner RIAC

[1] Marsh, P. The New Industrial Revolution. Consumers, Globalization, and the End of Mass Production. M.: Gaidar Institute Press, 2015.


Ten Ways the C-Suite Can Protect their Company against Cyberattack

MD Staff

Cyberattacks are among the top 10 global risks of highest concern for the next decade, with an estimated price tag of $90 trillion if cybersecurity efforts do not keep pace with technological change. While there is abundant guidance in the cybersecurity community, the application of prescribed actions continues to fall short of what is required to ensure effective defence against cyberattacks. The challenges created by accelerating technological innovation have reached new levels of complexity and scale, and responsibility for cybersecurity in organizations is no longer the Chief Security Officer’s job alone: it involves everyone.

The Cybersecurity Guide for Leaders in Today’s Digital World was developed by the World Economic Forum Centre for Cybersecurity and several of its partners to assist the growing number of C-suite executives responsible for setting and implementing the strategy and governance of cybersecurity and resilience. The guide bridges the gap between leaders with and without technical backgrounds. Following almost one year of research, it outlines 10 tenets that describe how cyber resilience in the digital age can be formed through effective leadership and design.

“With effective cyber-risk management, business executives can achieve smarter, faster and more connected futures, driving business growth,” said Georges De Moura, Head of Industry Solutions, Centre for Cybersecurity, World Economic Forum. “From the steps necessary to think more like a business leader and develop better standards of cyber hygiene, through to the essential elements of crisis management, the report offers an excellent cybersecurity playbook for leaders in public and private sectors.”

“Practicing good cybersecurity is everyone’s responsibility, even if you don’t have the word ‘security’ in your job title,” said Paige H. Adams, Global Chief Information Security Officer, Zurich Insurance Group. “This report provides a practical guide with ten basic tenets for business leaders to incorporate into their company’s day-to-day operations. Diligent application of these tenets and making them a part of your corporate culture will go a long way toward reducing risk and increasing cyber resilience.”

“The recommendation to foster internal and external partnerships is one of the most important, in my view,” said Sir Rob Wainwright, Senior Cyber Partner, Deloitte. “The dynamic nature of the threat, not least in terms of how it reflects the recent growth of an integrated criminal economy, calls on us to build a better global architecture of cyber cooperation. Such cooperation should include more effective platforms for information sharing within and across industries, releasing the benefits of data integration and analytics to build better levels of threat awareness and response capability for all.”

The Ten Tenets

1. Think Like a Business Leader – Cybersecurity leaders are business leaders first and foremost. They have to position themselves, their teams and their operations as business enablers. Transforming cybersecurity from a support function into a business-enabling function requires a broader view and a stronger communication skill set than was required previously.

2. Foster Internal and External Partnerships – Cybersecurity is a team sport. Today, information security teams need to partner with many internal groups and develop a shared vision, objectives and KPIs to ensure that timelines are met while delivering a highly secure and usable product to customers.

3. Build and Practice Strong Cyber Hygiene – Five core security principles are crucial: a clear understanding of the data supply chain, a strong patching strategy, organization-wide authentication, a secure active directory of contacts, and encrypted critical business processes.

4. Protect Access to Mission-Critical Assets – Not all user access is created equal. It is essential to have strong processes and automated systems in place to ensure appropriate access rights and approval mechanisms.

5. Protect Your Email Domain Against Phishing – Email is the most common point of entry for cyber attackers, with the median company receiving over 90% of its detected malware via this channel. The guide highlights six ways to protect employees’ emails.
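One standard measure in this area is publishing SPF and DMARC records for the company’s sending domain so that receiving servers can reject spoofed mail. As a toy illustration (the record string is a made-up example for a hypothetical domain, and the parser assumes well-formed input), here is how a DMARC policy might be inspected:

```python
# Toy parser for a DMARC TXT record, the DNS entry that tells receiving
# mail servers what to do with messages that fail authentication checks.
# The sample record is invented for a hypothetical domain.

def parse_dmarc(record: str) -> dict:
    """Split a record like 'v=DMARC1; p=reject; ...' into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
# p=reject instructs receivers to refuse mail failing SPF/DKIM alignment,
# which blunts the spoofed-sender phishing the tenet warns about.
assert policy["p"] == "reject"
```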

6. Apply a Zero-Trust Approach to Securing Your Supply Chain – The high velocity of new applications developed alongside the adoption of open source and cloud platforms is unprecedented. Security-by-design practices must be embedded in the full lifecycle of the project.

7. Prevent, Monitor and Respond to Cyber Threats – The question is not if, but when a significant breach will occur. How well a company manages this inevitability is ultimately critical. Threat intelligence teams should perform proactive hunts throughout the organization’s infrastructure and keep the detection teams up to date on the latest trends.

8. Develop and Practice a Comprehensive Crisis Management Plan – Many organizations focus primarily on how to prevent and defend while not focusing enough on institutionalizing the playbook of crisis management. The guide outlines 12 vital components any company’s crisis plan should incorporate.

9. Build a Robust Disaster Recovery Plan for Cyberattacks – A disaster recovery and continuity plan must be tailored to security incident scenarios to protect an organization from cyberattacks and to instruct on how to react in case of a data breach. Furthermore, it can reduce the amount of time it takes to identify breaches and restore critical services for the business.

10. Create a Culture of Cybersecurity – Keeping an organization secure is every employee’s job. Tailoring trainings, incentivizing employees, building elementary security knowledge and enforcing sanctions on repeat offenders could aid the development of a culture of cybersecurity.

In the Fourth Industrial Revolution, all businesses are undergoing transformative digitalization of their industries that will open new markets. Cybersecurity leaders need to take a stronger and more strategic leadership role. Inherent to this new role is the imperative to move beyond the role of compliance monitors and enforcers.

Moving First on AI Has Competitive Advantages and Risks

MD Staff

Financial institutions that implement AI early have the most to gain from its use, but also face the largest risks. The often-opaque nature of AI decisions and related concerns of algorithmic bias, fiduciary duty, uncertainty, and more have left implementation of the most cutting-edge AI uses at a standstill. However, a newly released report from the World Economic Forum, Navigating Uncharted Waters, shows how financial services firms and regulators can overcome these risks.

Using AI responsibly is about more than mitigating risks; its use in financial services presents an opportunity to raise the ethical bar for the financial system as a whole. It also offers financial services a competitive edge against their peers and new market entrants.

“AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency,” says Matthew Blake, Head of Financial Services, World Economic Forum. “This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately.”

Across several dimensions, AI introduces new complexities to age-old challenges in the financial services industry, and the governance frameworks of the past will not adequately address these new concerns.

Explaining AI decisions

Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain. This uncertainty has left the implementation of cutting-edge AI tools at a standstill. The Forum offers a solution: evolve past “one-size-fits-all” governance ideas to specific transparency requirements that consider the AI use case in question.

For example, it is important to clearly and simply explain why a customer was rejected for a loan, which can significantly impact their life. It is less important to explain a back-office function whose only objective is to convert scans of various documents to text. For the latter, accuracy is more important than transparency, as the ability of this AI application to create harm is limited.
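For the loan case, one simple way to produce such an explanation is to use a model whose score decomposes into per-feature contributions, so the dominant negative contribution can be reported back to the customer. A minimal sketch, with weights and applicant values invented purely for illustration (no real institution’s scoring model is implied):

```python
# Per-feature contributions in a toy linear credit score.
# Weights and applicant figures are made up for illustration only.

weights = {"income": 0.5, "debt_ratio": -2.0, "late_payments": -1.5}
applicant = {"income": 3.0, "debt_ratio": 1.2, "late_payments": 2.0}

# Each feature's contribution is its weight times the applicant's value,
# and the score is their sum, so the decision is fully decomposable.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score >= 0 else "rejected"

# The most negative contribution is a plain-language reason the customer
# can act on ("too many late payments"), unlike an opaque model's output.
main_reason = min(contributions, key=contributions.get)
assert decision == "rejected" and main_reason == "late_payments"
```

This is the transparency trade-off the report describes: the same decomposition is unavailable for models whose internals even their creators cannot interpret.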

Beyond “explainability”, the report explores new challenges surrounding bias and fairness, systemic risk, fiduciary duty, and collusion as they relate to the use of AI.

Bias and fairness

Algorithmic bias is another top concern for financial institutions, regulators and customers surrounding the use of AI in financial services. AI’s unique ability to rapidly process new and different types of data raises the concern that AI systems may develop unintended biases over time; combined with their opaque nature, such biases could remain undetected. Despite these risks, AI also presents an opportunity to decrease unfair discrimination or exclusion, for example by analyzing alternative data that can be used to assess ‘thin file’ customers that traditional systems cannot assess due to a lack of information.

Systemic risk

The widespread adoption of AI also has the potential to alter the dynamics of the interactions between human actors and machines in the financial system, creating new sources of systemic risk. As the volume and velocity of interactions grow through automated agents, emerging risks may become increasingly difficult to detect, since they are spread across various financial institutions, fintechs, large technology companies, and other market participants. These new dynamics will require supervisory authorities to reinvent themselves as hubs of system-wide intelligence, using AI themselves to supervise AI systems.

Fiduciary duty

As AI systems take on an expanded set of tasks, they will increasingly interact with customers. As a result, fiduciary requirements to always act in the best interests of the customer may soon arise, raising the question of whether AI systems can be held “responsible” for their actions – and, if not, who should be held accountable.

Algorithmic collusion

Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication. This challenges the traditional regulatory constructs for detecting and prosecuting collusion and may require a revisiting of the existing legal frameworks.

“Using AI in financial services will require an openness to new ways of safeguarding the ecosystem, different from the tools of the past,” says Rob Galaski, Global Leader, Banking & Capital Markets, Deloitte Consulting. “To accelerate the pace of AI adoption in the industry, institutions need to take the lead in developing and proposing new frameworks that address new challenges, working with regulators along the way.”

For each of the concerns described above, the report outlines the underlying root causes, highlights the most pressing challenges, identifies how those challenges might be addressed through new tools and governance frameworks, and notes what opportunities might be unlocked by doing so.

The report was prepared in collaboration with Deloitte and follows five previous reports on financial innovation. The World Economic Forum will continue its work in Financial Services, with a particular focus on AI’s connections to other emerging technologies in its next phase of research through mid-2020.

US Blacklist of Chinese Surveillance Companies Creates Supply Chain Confusion

The United States Department of Commerce’s decision to blacklist 28 Chinese public safety organizations and commercial entities hit some of China’s most dominant vendors in the security industry. Of the eight commercial entities added to the blacklist, six are among China’s most successful digital forensics, facial recognition, and AI companies. However, it is the two surveillance manufacturers on the blacklist, Dahua and Hikvision, whose inclusion could have a significant impact on the global market at large.

Putting geopolitics aside, Dahua’s and Hikvision’s positions within the overall global digital surveillance market make their blacklisting somewhat of a shock, with the immediate effects touching off significant questions among U.S. partners, end users, and supply chain partners.

Frost & Sullivan’s research finds that Hikvision and Dahua currently rank second and third in total sales in the $20.48 billion global surveillance market and are fast-tracking to become the top two vendors among IP surveillance camera manufacturers. Their rapid rise among IP surveillance camera providers came about due to both companies’ aggressive growth pipelines, significant product libraries of high-quality surveillance cameras and new imaging technologies, and low-cost pricing models that provide customers with higher levels of affordability.

This is also not the first time that these two vendors have found themselves in the crosshairs of the U.S. government. In 2018, the U.S. initiated a ban on the sale and use of Hikvision and Dahua camera equipment within government-owned facilities, including the Department of Defense, military bases, and government-owned buildings. However, the vague language of the ban made it difficult for end users to determine whether they were just banned from new purchases of Dahua or Hikvision cameras or if they needed to completely rip-and-replace existing equipment with another brand. Systems integrators, distributors, and even technology partners themselves remained unsure of how they should handle the ban’s implications, only serving to sow confusion among U.S. customers.

In addition to confusion over how end users in the government space were to proceed regarding their Hikvision and Dahua equipment came the realization that both companies held significant customer share among commercial companies throughout the U.S. market—so where was the ban’s line being drawn for these entities? Were they to comply or not? If so, how? Again, these questions have remained unanswered since 2018.

Hikvision and Dahua each have built a strong presence within the U.S. market, despite the 2018 ban. Both companies are seen as regular participants in industry tradeshows and events, and remain active among industry partners throughout the surveillance ecosystem. Both companies have also attempted to work with the U.S. government to alleviate security concerns and draw clearer guidelines for their sales and distribution partners throughout the country. They even established regional operations centers and headquarters in the country.

While blacklisting does send a clearer message to end users, integrators, and distributors—for sales and usage of these companies’ technologies—remedies for future actions still remain unclear. When it comes to legacy Hikvision and Dahua cameras, the onus appears to be on end users and integrators to decide whether rip-and-replace strategies are the best way to comply with government rulings or to just leave the solutions in place and hope for the best.

As for the broader global impacts of this action, these remain to be seen. While the 2018 ban did prompt talk of similar bans in other regions, none ever materialized. Dahua and Hikvision maintained their strong market positioning, even achieving higher-than-average growth rates in the past year. Blacklisting does send a stronger message to global regulators, though, so market participants outside the U.S. will have to adopt a wait-and-see posture regarding how, if at all, they may need to prepare their own surveillance equipment supply chains for changes to come.
