The year is 2040. Drones buzz over neighbourhoods, delivering packages. Smart homes, with interconnected Wi-Fi devices, eliminate the need for housework. Driverless vehicles take us from A to B at great speed. Wars are still fought but digitally, with lines of code and armies of robots. We vacation in space, and share stories about the moon.
In this intelligent machine age, what role will we play? Some reports, examining the implications of the digital revolution for labour markets, are forecasting a bleak future.
The concerns relate to the potential for labour displacement as systems of artificial intelligence and automation gain traction in the workplace. As these systems evolve and become ever more sophisticated, the argument goes, they will be able to outperform humans, offering greater precision, efficiency, competitiveness and reliability. Over time, a larger share of our operations is likely to be outsourced to machines.
Does this hypothesis have merit? Will capital soon no longer be able to cohabit in harmony with labour? Should we be concerned about the prospect of mass ‘technological unemployment’?
The man vs. machine debate is centuries old. John Maynard Keynes popularised the term ‘technological unemployment’ in his 1930 essay Economic Possibilities for our Grandchildren, regarding the phenomenon as a “temporary phase of maladjustment” for countries at the frontier of progress. On the other side of the debate, techno-pessimists such as the classical economist David Ricardo believed instead that the introduction of new technologies could lead to a sustained decline of the working population.
To understand which argument aligns better with today’s technological and labour market landscape, let’s consider some recent developments.
It is undeniable that the world, and our role within it, is rapidly changing. Just look at the staggering developments taking place in the transportation sector. In The Jetsons, an animated sitcom which first aired six decades ago, the inhabitants of an imaginary future commuted to work in flying cars. Today, we are on the brink of turning that vision into reality. Uber plans to establish an aerial taxi service by 2023, and other companies have already developed flying car prototypes. Many projects under development today weren’t even anticipated by the science fiction of the past. For instance, Elon Musk, the man behind both Tesla and SpaceX, is building an underground network of tunnels, many layers deep, across the eastern United States to transport cars and alleviate congestion. In addition, driverless cars are currently being tested in several countries. Automakers anticipate that fully autonomous vehicles will be chauffeuring us around within the next three years.
This is just a small selection of the numerous examples of comprehensive transformation taking place today. But will we really benefit from such change? We have to wonder whether there is some irrational exuberance at play.
The long view of innovation, however, provides good reason for optimism. During each era of revolutionary change, innovation has lifted productivity, reduced the prices of goods and services, created new industries, stimulated output and generated fresh employment opportunities.
The first industrial revolution brought with it the power of steam and machine-based manufacturing. The new industries and jobs it generated more than offset the displacement of skilled workers producing hand-made goods. The advent of the automobile in the 19th century did the same, relative to the jobs lost from the horse-and-carriage economy. More recently, the silicon revolution gave us the power of computing and the internet. These technologies created new businesses, tore down geographical barriers and massively disrupted the ways in which we interact. Like those that preceded it, the silicon revolution generated far more jobs than were lost, for example in basic administrative operations.
In other words, the available body of empirical evidence indicates that short-term labour displacement arising from technological change has always been more than offset by the expansion of labour markets in the long term. There is also some evidence of a similar pattern taking shape today. Since the global financial crisis, the rate of unemployment has fallen sharply, and the main reason behind this decline has been very strong rates of new job creation. In the UK, technology has recently contributed to the loss of 800,000 jobs but has helped to create at least 3.5 million, each paying, on average, almost £10,000 more per annum than those that were lost. Business sentiment, additionally, remains largely positive regarding the impact of technology on labour markets. A recent KPMG survey of chief executive officers (CEOs) in the UK reveals that 71 per cent believe artificial intelligence will create more jobs than it destroys.
OK, let’s pause for a bit.
The past is not always a reliable indicator of the future. So could this time be different? There is reason to think so. Technological change is progressing at an unprecedented rate. New advancements are taking place almost daily, and their diffusion into the workplace is accelerating.
Last year, over 40 per cent of adults in the UK managed their bank accounts using smartphones. Within the next five years, this figure is projected to rise to 70 per cent, reflecting increasing numbers of mobile users in rural areas. By that time, analysts believe, customers will visit their bank only twice a year. These trends have driven a heavy consolidation of banks around the world. In 2017, major UK banks shut, or announced plans to shut, nearly 1,000 branches. Thousands of jobs have already been lost.
A shift to driverless vehicles, likewise, could affect significant numbers of people, from lorry drivers to bus drivers to the various constituents of the gig economy. In the UK alone, over half a million people are currently employed in road transportation. Relative to earlier anxieties about the potential of services like Uber to reduce jobs for ‘black cab’ drivers, these new developments surely provide greater grounds for unease.
Workers in the fast food industry could also be at risk, owing to technologies that enable self-service. McDonald’s, for instance, recently piloted “create your taste” touchscreens in its US-based restaurants. Through this system, customers could craft their own burger, and place orders at the touch of a button. The need for human interaction was eliminated. In America alone, almost 4 million people are currently employed in fast food restaurants.
Even recruiters are finding themselves threatened. Based on social media activity, work tenure and purchasing history, algorithms can now predict when someone will be ready for a job. Text analysis can identify skills and experience many times faster than humans can. As a result, some estimates give the existing HR recruitment industry two to four more years at best. Hiring, for now, will still require a human touch. But that may change over time too. It is not implausible to imagine software capable of assessing personality by scrutinising candidates on factors such as tone, facial movements and body language.
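To make the idea concrete, here is a minimal sketch of how keyword-based text analysis might flag skills in a CV. The skill list and matching logic are hypothetical, invented purely for illustration; real recruitment tools rely on far richer models.

```python
# Illustrative only: a toy skill-extraction pass over CV text.
# The skill vocabulary below is hypothetical, not from any real HR product.
SKILLS = {"python", "sql", "project management", "machine learning"}

def extract_skills(cv_text: str) -> set:
    """Return the known skills mentioned in free-form CV text."""
    text = cv_text.lower()
    return {skill for skill in SKILLS if skill in text}

cv = "Five years of Python and SQL development, with project management experience."
print(sorted(extract_skills(cv)))  # -> ['project management', 'python', 'sql']
```

Even a crude pass like this runs over thousands of CVs in seconds, which is the efficiency gap the paragraph above describes.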
The list of affected industries goes on and on. All are in the same boat.
So was Keynes right, or was Ricardo? Before we jump to conclusions regarding the nature of the relationship between technological innovation and labour markets, let’s try a little thought experiment. Take it as given that, in line with empirical evidence, the disruption being observed in labour markets today will in the future be overshadowed by an expansion in output and jobs. That being the case, would you be prepared to forego your employment now to enable a higher standard of living for your children and your grandchildren tomorrow?
If the evidence checks out, then our view on technology and the value of innovation really boils down to this one question.
G2C e-Governance & e-Frauds: A Perspective for Digital Pakistan Policy
e-Governance, sometimes referred to as e-government, online government or digital government, is the use of information and communication technologies (ICTs) to help transform government structures and operations for cooperative and integrated service delivery. It involves using ICT tools to improve the delivery of government services to citizens, businesses and other government agencies, and it encompasses a wide range of activities and actors, including government-to-government (G2G), government-to-business (G2B) and government-to-citizen (G2C) interactions.
The benefits to be expected from e-Governance initiatives can be put into three major categories:
- Improved transparency, accountability and democracy, which reduce levels of corruption;
- Greater citizen and business satisfaction with, and confidence in, public services; and
- Improved achievement of economic and social policy outcomes (e.g. in education, health, justice, welfare and industry development).
e-Governance not only plays a critical role in building inclusive, resilient societies but also enables citizens to interact with, and receive services from, federal and local governments 24 hours a day, seven days a week. In many respects, the government-to-citizen (G2C) segment represents the backbone of e-Governance. G2C initiatives are designed to facilitate citizen interaction with government, which is recognised as a principal objective of good governance.
Despite the opportunities e-Governance offers, it also introduces new challenges. In recent times, the Government of Pakistan (GoP) has demonstrated a real willingness to transform relationships between government services and citizens, particularly by strengthening the use of ICT and by offering services online (the Digital Pakistan Vision). Civil society is also committed to implementing such initiatives to improve democratic governance through ICT.
On the other hand, a number of challenges could prevent the realisation of the anticipated benefits of e-Governance and Digital Pakistan initiatives. These include disparities in computer and internet access, whether due to a lack of financial resources or of necessary skills; pre-existing systems and conditions; digital literacy (e-literacy); and, most importantly, electronic fraud (e-fraud).
The term ‘fraud’ commonly covers activities such as theft, corruption, embezzlement, money laundering, bribery and extortion. e-Fraud may be described as “inducing a course of action by deceit or dishonest conduct, involving acts or omissions or the making of false statements, with the object of obtaining money or other benefit.” It is also defined as a deception deliberately practised to secure unfair or unlawful gain, where some part of the communication between the victim and the fraudster takes place via a network, and/or some action of the victim or the fraudster is performed on a computer network. e-Fraud is, in fact, not only a technical and management problem but also a social one.
In Pakistan, a citizen-centric approach (G2C e-governance) will enable the government to provide improved service quality, which in turn builds citizens’ satisfaction with democratic governance. However, due to a variety of technical, economic and political reasons, e-Governance initiatives will take time to reach their full potential. Similarly, the exact scale of e-fraud (online or offline) being committed in Pakistan is currently unknown. Nevertheless, there are certain areas of concern regarding the “Digital Pakistan Policy – 2018”, for which the following recommendations are put forward for consideration in future reviews.
The Digital Pakistan Policy must be practicable, outcome-focused, risk-based, citizen-centric, and both locally and globally relevant.
Policy makers must first educate themselves with respect to the Internet of Things (IoT), internet and cyber security, and electronic fraud (e-fraud), and then formulate an effective anti-e-fraud strategy within the Digital Pakistan Policy.
Government must support the necessary research and development (R&D) to address digital issues (e.g. e-frauds and cyber-space ethics, network and cloud security etc.), and establish a program to educate citizenry about the digital ecosystem (e-literacy).
Government must overcome the obstacles to realistic, timely, actionable information sharing with all government institutions/departments and stakeholders.
Government must get its own house in order and continue its efforts to strengthen good governance, with an emphasis on merit-based institutional development and the rule of law, and work to eliminate corruption and nepotism from society.
Ten Ways the C-Suite Can Protect their Company against Cyberattack
Cyberattacks are one of the top 10 global risks of highest concern for the next decade, with an estimated price tag of $90 trillion if cybersecurity efforts do not keep pace with technological change. While there is abundant guidance in the cybersecurity community, the application of prescribed actions continues to fall short of what is required to ensure effective defence against cyberattacks. The challenges created by accelerating technological innovation have reached new levels of complexity and scale; today, responsibility for cybersecurity in organizations is no longer one Chief Security Officer’s job – it involves everyone.
The Cybersecurity Guide for Leaders in Today’s Digital World was developed by the World Economic Forum Centre for Cybersecurity and several of its partners to assist the growing number of C-suite executives responsible for setting and implementing the strategy and governance of cybersecurity and resilience. The guide bridges the gap between leaders with and without technical backgrounds. Following almost one year of research, it outlines 10 tenets that describe how cyber resilience in the digital age can be formed through effective leadership and design.
“With effective cyber-risk management, business executives can achieve smarter, faster and more connected futures, driving business growth,” said Georges De Moura, Head of Industry Solutions, Centre for Cybersecurity, World Economic Forum. “From the steps necessary to think more like a business leader and develop better standards of cyber hygiene, through to the essential elements of crisis management, the report offers an excellent cybersecurity playbook for leaders in public and private sectors.”
“Practicing good cybersecurity is everyone’s responsibility, even if you don’t have the word ‘security’ in your job title,” said Paige H. Adams, Global Chief Information Security Officer, Zurich Insurance Group. “This report provides a practical guide with ten basic tenets for business leaders to incorporate into their company’s day-to-day operations. Diligent application of these tenets and making them a part of your corporate culture will go a long way toward reducing risk and increasing cyber resilience.”
“The recommendation to foster internal and external partnerships is one of the most important, in my view,” said Sir Rob Wainwright, Senior Cyber Partner, Deloitte. “The dynamic nature of the threat, not least in terms of how it reflects the recent growth of an integrated criminal economy, calls on us to build a better global architecture of cyber cooperation. Such cooperation should include more effective platforms for information sharing within and across industries, releasing the benefits of data integration and analytics to build better levels of threat awareness and response capability for all.”
The Ten Tenets
1. Think Like a Business Leader – Cybersecurity leaders are business leaders first and foremost. They have to position themselves, their teams and their operations as business enablers. Transforming cybersecurity from a support function into a business-enabling function requires a broader view and a stronger communication skill set than was required previously.
2. Foster Internal and External Partnerships – Cybersecurity is a team sport. Today, information security teams need to partner with many internal groups and develop a shared vision, objectives and KPIs to ensure that timelines are met while delivering a highly secure and usable product to customers.
3. Build and Practice Strong Cyber Hygiene – Five core security principles are crucial: a clear understanding of the data supply chain, a strong patching strategy, organization-wide authentication, a secure active directory of contacts, and encrypted critical business processes.
4. Protect Access to Mission-Critical Assets – Not all user access is created equal. It is essential to have strong processes and automated systems in place to ensure appropriate access rights and approval mechanisms.
5. Protect Your Email Domain Against Phishing – Email is the most common point of entry for cyber attackers, with the median company receiving over 90% of its detected malware via this channel. The guide highlights six ways to protect employees’ emails.
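As a purely illustrative aside (not one of the guide’s six recommendations), automated email screening can be as simple as flagging links that point outside a trusted set of domains. The trusted-domain list below is hypothetical.

```python
import re

# Illustrative only: a toy phishing heuristic that flags links whose
# domain is not on a trusted allow-list. The list is hypothetical.
TRUSTED_DOMAINS = {"example.com", "mail.example.com"}

def suspicious_links(email_body: str) -> list:
    """Return the domains of URLs not on the trusted list."""
    domains = re.findall(r"https?://([\w.-]+)", email_body)
    return [d for d in domains if d not in TRUSTED_DOMAINS]

body = "Reset your password at http://examp1e-login.com/reset now!"
print(suspicious_links(body))  # -> ['examp1e-login.com']
```

Real anti-phishing controls (DMARC, link rewriting, sandboxing) are far more involved, but the principle of automated triage is the same.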
6. Apply a Zero-Trust Approach to Securing Your Supply Chain – The high velocity of new applications developed alongside the adoption of open source and cloud platforms is unprecedented. Security-by-design practices must be embedded in the full lifecycle of the project.
7. Prevent, Monitor and Respond to Cyber Threats – The question is not if, but when a significant breach will occur. How well a company manages this inevitability is ultimately critical. Threat intelligence teams should perform proactive hunts throughout the organization’s infrastructure and keep the detection teams up to date on the latest trends.
8. Develop and Practice a Comprehensive Crisis Management Plan – Many organizations focus primarily on how to prevent and defend while not focusing enough on institutionalizing the playbook of crisis management. The guide outlines 12 vital components any company’s crisis plan should incorporate.
9. Build a Robust Disaster Recovery Plan for Cyberattacks – A disaster recovery and continuity plan must be tailored to security incident scenarios to protect an organization from cyberattacks and to instruct on how to react in case of a data breach. Furthermore, it can reduce the amount of time it takes to identify breaches and restore critical services for the business.
10. Create a Culture of Cybersecurity – Keeping an organization secure is every employee’s job. Tailoring training, incentivizing employees, building elementary security knowledge and enforcing sanctions on repeat offenders can all aid the development of a culture of cybersecurity.
In the Fourth Industrial Revolution, all businesses are undergoing transformative digitalization of their industries that will open new markets. Cybersecurity leaders need to take a stronger and more strategic leadership role. Inherent to this new role is the imperative to move beyond the role of compliance monitors and enforcers.
Moving First on AI Has Competitive Advantages and Risks
Financial institutions that implement AI early have the most to gain from its use, but also face the largest risks. The often-opaque nature of AI decisions and related concerns of algorithmic bias, fiduciary duty, uncertainty, and more have left implementation of the most cutting-edge AI uses at a standstill. However, a newly released report from the World Economic Forum, Navigating Uncharted Waters, shows how financial services firms and regulators can overcome these risks.
Using AI responsibly is about more than mitigating risks; its use in financial services presents an opportunity to raise the ethical bar for the financial system as a whole. It also offers financial services firms a competitive edge over their peers and new market entrants.
“AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency,” says Matthew Blake, Head of Financial Services, World Economic Forum. “This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately.”
Across several dimensions, AI introduces new complexities to age-old challenges in the financial services industry, and the governance frameworks of the past will not adequately address these new concerns.
Explaining AI decisions
Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain. This uncertainty has left the implementation of cutting-edge AI tools at a standstill. The Forum offers a solution: evolve past “one-size-fits-all” governance ideas to specific transparency requirements that consider the AI use case in question.
For example, it is important to clearly and simply explain why a customer was rejected for a loan, which can significantly impact their life. It is less important to explain a back-office function whose only objective is to convert scans of various documents to text. For the latter, accuracy is more important than transparency, as the ability of this AI application to create harm is limited.
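To illustrate the loan example, here is a toy, fully interpretable scoring model in which each feature’s contribution to the decision can be reported back to the customer. The features, weights and threshold are invented for illustration and are not drawn from the report or any real credit system.

```python
# Illustrative only: a toy interpretable loan-scoring model with
# invented weights and threshold (hypothetical, for the example).
WEIGHTS = {"income_k": 0.05, "debt_ratio": -2.0, "late_payments": -0.8}
THRESHOLD = 1.0

def score_with_reasons(applicant: dict):
    """Return (approved, per-feature contributions to the score)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions

approved, reasons = score_with_reasons(
    {"income_k": 40, "debt_ratio": 0.6, "late_payments": 2}
)
print(approved)  # -> False
```

Because every contribution is a simple product, the lender can tell a rejected customer exactly which factors drove the decision – the kind of transparency that opaque models struggle to provide.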
Beyond “explainability”, the report explores new challenges surrounding bias and fairness, systemic risk, fiduciary duty, and collusion as they relate to the use of AI.
Bias and fairness
Algorithmic bias is another top concern for financial institutions, regulators and customers surrounding the use of AI in financial services. AI’s unique ability to rapidly process new and different types of data raises the concern that AI systems may develop unintended biases over time; combined with their opaque nature, such biases could remain undetected. Despite these risks, AI also presents an opportunity to decrease unfair discrimination or exclusion, for example by analyzing alternative data that can be used to assess ‘thin file’ customers whom traditional systems cannot evaluate due to a lack of information.
The widespread adoption of AI also has the potential to alter the dynamics of the interactions between human actors and machines in the financial system, creating new sources of systemic risk. As the volume and velocity of interactions grow through automated agents, emerging risks may become increasingly difficult to detect as they spread across various financial institutions, fintechs, large technology companies and other market participants. These new dynamics will require supervisory authorities to reinvent themselves as hubs of system-wide intelligence, using AI themselves to supervise AI systems.
As AI systems take on an expanded set of tasks, they will increasingly interact with customers. As a result, fiduciary requirements to always act in the best interests of the customer may soon arise, raising the question of whether AI systems can be held “responsible” for their actions – and, if not, who should be held accountable.
Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication. This challenges the traditional regulatory constructs for detecting and prosecuting collusion and may require a revisiting of the existing legal frameworks.
“Using AI in financial services will require an openness to new ways of safeguarding the ecosystem, different from the tools of the past,” says Rob Galaski, Global Leader, Banking & Capital Markets, Deloitte Consulting. “To accelerate the pace of AI adoption in the industry, institutions need to take the lead in developing and proposing new frameworks that address new challenges, working with regulators along the way.”
For each of the concerns described above, the report outlines the key underlying root causes, highlights the most pressing challenges, identifies how those challenges might be addressed through new tools and governance frameworks, and notes what opportunities might be unlocked by doing so.
The report was prepared in collaboration with Deloitte and follows five previous reports on financial innovation. The World Economic Forum will continue its work in Financial Services, with a particular focus on AI’s connections to other emerging technologies in its next phase of research through mid-2020.