Science & Technology

How AI will shape the domestic, diplomatic and military landscape

Wagner and Furst exhaustively explore the inner workings and implications of AI in their new book, “AI Supremacy: Winning in the Era of Machine Learning”.  Each chapter focuses on the current and future state of AI within a specific industry, country or society in general.  Special emphasis is placed on how AI will shape the domestic, diplomatic and military landscapes of the US, EU and China.

Here is an interview with Daniel Wagner:

Can you briefly explain the differences between artificial intelligence, machine learning, and deep learning?

Artificial intelligence (AI) is the overarching science and engineering associated with intelligent algorithms, whether or not they learn from data. However, the definition of intelligence is subject to philosophical debate, and even the term “algorithm” can be interpreted in a wide context. This is one reason there is some confusion about what is and is not AI: people use the word loosely and have their own definitions of what they believe AI to be. AI is best understood as a catch-all term that tends to imply the latest advances in intelligent algorithms, but the context in which the phrase is used determines its meaning, which can vary quite widely.

Machine learning (ML) is a subfield of AI that focuses on intelligent algorithms that can learn automatically (without being explicitly programmed) from data. There are three general categories of ML: supervised machine learning, unsupervised machine learning, and reinforcement learning.
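The distinction between explicit programming and learning from data can be made concrete with a toy example (ours, not from the book). A one-nearest-neighbor classifier is a minimal instance of supervised learning: its decision rule comes entirely from labeled examples rather than hand-written logic.

```python
# Minimal illustration of supervised machine learning: a 1-nearest-neighbor
# classifier "learns" a decision rule from labeled examples instead of being
# explicitly programmed with rules. (Illustrative sketch only.)
import math

def predict(train, point):
    """Return the label of the training example closest to `point`."""
    _, label = min(train, key=lambda ex: math.dist(ex[0], point))
    return label

# Labeled training data: (features, label) pairs.
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.5, 8.5), "large")]

print(predict(train, (1.1, 0.9)))  # near the "small" cluster -> "small"
print(predict(train, (9.0, 9.0)))  # near the "large" cluster -> "large"
```

Changing the training data changes the classifier's behavior with no change to the code, which is the essence of learning "without being explicitly programmed".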

Deep learning (DL) is a subfield of ML that imitates the workings of the human brain (via artificial neural networks) in processing data and creating patterns for use in decision-making. It is true that the way the human brain processes information was one of the main inspirations behind DL, but DL only mimics the functioning of neurons. This does not mean that consciousness is being replicated, because we do not really understand the underlying mechanics driving consciousness. Since DL is a rapidly evolving field, there are other, more general definitions of it, such as a neural network with more than two layers. The idea of layers is that information is processed by the DL algorithm at one level and then passed on to the next, so that higher levels of abstraction and conclusions can be drawn about the data.
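The layer idea can be sketched in a few lines (an illustrative toy, not a description of any production system). The weights below are arbitrary values we chose for the example; a real network would learn them from data.

```python
# Sketch of the "layers" idea in deep learning: each layer transforms its
# input and passes the result on to the next, building higher levels of
# abstraction. Weights here are fixed and arbitrary; real networks learn them.
import math

def layer(inputs, weights):
    # Each neuron computes a weighted sum of its inputs,
    # passed through a nonlinearity (tanh).
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
            for ws in weights]

x = [0.5, -1.0]                             # raw input
h = layer(x, [[0.8, -0.2], [0.3, 0.9]])     # hidden layer: intermediate features
y = layer(h, [[1.0, -1.0]])                 # output layer: final abstraction
print(y)
```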

Is China’s Social Credit Score system about to usher in an irreversible Orwellian nightmare there? How likely is it to spread to other dictatorships?

The social credit system that the Chinese government is in the process of unleashing is creating an Orwellian nightmare for some of China’s citizens. We say “some” because many Chinese citizens do not necessarily realize that it is being rolled out. One reason is that the government has been gradually implementing versions of what has become the social credit system over a period of years without calling it that. Another is that most Chinese citizens have become numb to the intrusive nature of the Chinese state. They have been poked and prodded in various forms for so long that they have become accustomed to, and somewhat accepting of, it. That said, the social credit system has real consequences for those who fall afoul of it; they will soon learn about those consequences, if they have not already.

As we note in the book, the Chinese government has shared elements of its social credit system technology with a range of states across the world. There is every reason to believe that authoritarian governments will wish to adopt the technology and use it for their own purposes. Some have already done so.

How can we stop consumer drones from being used to aid in blackmail, burglary, assassination, and terrorist attacks?

As Daniel notes in his book Virtual Terror, governments are having a difficult time keeping track of the tens of millions of drones in operation in societies around the world. Registering them is largely voluntary, and there are too few regulations in place governing their use. Given this, there is little that can be done, at this juncture, to prevent them from being used for nefarious purposes. Moreover, drones’ use on the battlefield is transforming the way individual battles will be fought and wars will be waged. We have a chapter in the book devoted to this subject.

Google, YouTube, Twitter and Facebook have been caught throttling/ending traffic to many progressive (TeleSur, TJ Kirk) and conservative (InfoWars, PragerU) websites and channels. Should search engines and social media platforms be regulated as public utilities, to lend 1st Amendment protections to the users of these American companies?

The battles currently being waged in the courts, in legislatures, and on the battlefield of social media itself are already indicative of how the many unanswered questions associated with the rise of social media are being addressed out of necessity. It seems that no one, least of all the social media firms, wants to assume responsibility when things go wrong or uncomfortable questions must be answered. Courts and legislatures will ultimately have to find a middle-ground response to issues such as First Amendment protections, but this will likely remain a moving target for some time to come: there is no single black-or-white answer, and, as each new law comes into effect and its ramifications become known, the laws will undoubtedly need to be modified.

Do you think blockchain will eventually lead to a golden era of fiscal transparency?

This is hard to say. On one hand, the rise of cryptocurrencies brought with it the promise of money outside the control of governments and large corporations. However, cryptocurrencies have been subject to a number of high-profile heists, and there are still some fundamental issues with them, such as Bitcoin’s throughput, which is limited to only a few transactions per second. This makes some cryptocurrencies less viable for real-world transactions and everyday commerce.
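The throughput constraint is a simple back-of-envelope calculation (our assumed figures, not from the book): with roughly 1 MB blocks mined about every ten minutes and an average transaction of around 400 bytes, Bitcoin's base layer works out to only a few transactions per second.

```python
# Back-of-envelope estimate of Bitcoin's base-layer transaction throughput.
# Assumed, rough figures: ~1 MB block size, ~10-minute block interval,
# ~400 bytes per average transaction. (Illustrative values only.)
block_bytes = 1_000_000
block_interval_s = 600
avg_tx_bytes = 400

tx_per_block = block_bytes // avg_tx_bytes       # transactions per block
tx_per_second = tx_per_block / block_interval_s  # throughput
print(tx_per_block, tx_per_second)               # a few transactions/second
```

For comparison, conventional card networks are generally cited as handling thousands of transactions per second, which is the gap the text alludes to.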

The financial services industry has jumped on the blockchain bandwagon, but it has taken the open concept of some cryptocurrencies and reinvented it as distributed ledger technology (DLT). To be part of a DLT created by financial institutions, a joining member must itself be a financial institution. For this reason, the notion of transparency is not really relevant: the DLT will be controlled by a limited number of members, and only they will determine what information is public and what is not.

The other issue with the crypto space right now is that it is filled with fraud. At the end of the day, crypto is an asset class like gold or any other precious metal. It does not actually produce anything; the only real value it has is the willingness of another person to pay more for it in the future. It is possible that a few cryptocurrencies will survive long term and become somewhat viable, but the evolution of blockchain will likely continue to move towards DLTs that more people will trust. Governments are also likely to issue their own cryptocurrencies in the future, which will bring the technology into the mainstream.

Taiwan has recently started using online debate forums to help draft legislation, in a form of direct democracy. Kenya just announced that they will post presidential election results on a blockchain. How can AI and blockchain enhance democracy?

Online debate forums are obviously a good thing: having the average person engage in political debate, and being able to record and aggregate voting results, will create an opportunity for more transparency. The challenge becomes how to verify the identities of the people submitting feedback. Could an AI program be designed to submit feedback millions of times to give a false representation of the public’s concerns?

Estonia has long been revered as the world’s most advanced digital society, but researchers have pointed out serious security flaws in its electronic voting system, which could be manipulated to influence election outcomes. AI can help by putting in place controls to verify that the person providing feedback on legislation is a citizen. Online forums could, for example, require users to take a picture of their face next to their passport, verifying their identity with facial recognition algorithms.

Should an international statute be passed banning scientists from installing emotions, especially pain and fear, into AI?

Perhaps, for now at least, the question should be: should scientists refrain from programming robots or other forms of AI to imitate human emotions? The short answer is that it depends. On one hand, AI imitating human emotions could be a good thing, such as when caring for the elderly or teaching a complex concept to a student. However, there is a risk that, when AI can imitate human emotions very well, people may believe they have gained a true friend who understands them. It is somewhat paradoxical that the rise of social media has connected more of us, yet some people admit that they still lack meaningful relationships with others.

You don’t talk much about India in your book. How far behind are they in the AI race, compared to China, the US & EU?

Surprisingly, many of the world’s countries have only adopted a formal AI strategy in the last year. India is one of them; it only formally adopted an AI strategy in 2018 and lags well behind China, the EU, the US, and a variety of other countries. India has tremendous potential to meaningfully enter the race for AI supremacy and become a viable contender, but it still lacks a military AI strategy. India already contributes to advanced AI-oriented technology through its thriving software, engineering, and consulting sectors. Once it ramps up a national strategy, it should quickly become a leader in the AI arena, to the extent that it devotes sufficient resources to that strategy and swiftly and effectively implements it. That is not a guaranteed outcome, based on the country’s history with some prior national initiatives. We must wait and see whether India lives up to its potential in this arena.

On page 58 you write, “Higher-paying jobs requiring creativity and problem-solving skills, often assisted by computers, have proliferated… Demand has increased for lower skilled restaurant workers, janitors, home health aides, and others providing services that cannot be automated.” How will we be able to stop this kind of income inequality?

In all likelihood, the rise of AI will, at least temporarily, increase the schism between highly paid white-collar jobs and lower-paid blue-collar jobs. At the same time, AI will, over decades, dramatically alter the jobs landscape. Entire industries will be transformed to become more efficient and cost-effective. In some cases this will result in a loss of jobs, while in others it will result in job creation. What history has shown is that, even in the face of transformational change, the job market has a way of self-correcting; overall levels of employment tend to stay more or less the same. We have no doubt that this will prove to be the case in the AI-driven era. While income inequality will remain a persistent threat, our expectation is that, two decades from now, it will be no worse than it is right now.

AI systems like COMPAS and PredPol have been exposed for being racially biased. During YouTube’s “Adpocalypse”, many news and opinion videos were demonetized by algorithms indiscriminately targeting keywords like ‘war’ and ‘racism’. How can scientists and executives prevent their biases from influencing their AI?

This will be an ongoing debate. Facebook removed a PragerU video in which a woman was describing the need for strong men in society and the problem with feminizing them. Ultimately, Facebook said it was a mistake and put the video back up. So the question becomes: who decides what constitutes “racist” or “hate speech” content? Legal issues seem to emerge if it can be argued that the content being communicated is calling on people to act in a violent way.

Could the political preferences of a social media company’s executives override the ability of the common person to make up their own mind? On the other hand, India has seen a string of mob killings stemming from disinformation campaigns on WhatsApp, mostly involving first-time smartphone users. Companies could argue that some people are not able to distinguish between real and fake videos, so content must be censored in such cases.

Ultimately, executives and scientists will need to have an open and ongoing debate about content censorship. Companies must devise a set of principles and adhere to them to the best of their ability. As AI becomes more prevalent in monitoring and censoring online content, there will have to be more transparency about the process, and the algorithms will need to be adjusted following review by the company. In other words, companies cannot prevent algorithmic biases, but they can monitor them and be transparent with the public about the steps taken to make them better over time.

Amper is an AI music composer. Heliograf has written about 1000 news blurbs for WaPo. E-sports and e-bands are starting to sell out stadiums. Are there any human careers that you see as being automation-proof?

In theory, nearly any cognitive or physical task can be automated. We do not believe that people should be too worried, at least for the time being, about the implications of doing so, because the costs of automating even basic tasks to the level of human performance are extremely high, and we are a long way from being technically capable of automating most tasks. However, AI should spark conversations about how we want to structure our society in the future and what it means to be human, because AI will improve over time and become more dominant in the economy.

In Chapter 1 you briefly mention digital amnesia (outsourcing the responsibility of memorizing stuff to one’s devices). How else do you anticipate consumer devices will change us psychologically in the next few decades?

We could see a spike in schizophrenia because of the immersive nature of virtual, augmented, and mixed reality, which will increasingly blur the lines between reality and fantasy. In the 1960s there was a surge of interest in mind-expanding drugs such as psychedelics; however, someone ingesting LSD knew there was a time limit associated with the effects of the drug. These technologies do not end. Slowly, the real world could become less appealing and less real for heavy users of extended reality technology. This could affect relationships with other humans and change the nature, and increase the prevalence, of mental illness. Also, as discussed in the book, we are already seeing people who cannot deal with risk in the real world. There have been several cases of animal maulings, cliff falls, and car crashes among individuals in search of the perfect “selfie”. This tendency to want to perfect our digital personas should be a topic of debate in schools and at the dinner table.

Ready Player One is the most recent sci-fi film positing the gradual elimination of corporeal existence through Virtual Reality. What do you think of the transcension hypothesis on Fermi’s paradox?

The idea that our consciousness can exist independently of our bodies has recurred throughout humanity’s history. It appears that our consciousness is a product of our own living bodies. No one knows whether a person’s consciousness can exist after the body dies, though some have suggested that the brain still functions for a few minutes afterwards. It seems we need to worry about the impact of virtual reality on our physical bodies before it will be possible for us to transcend our bodies and exist on a digital plane. This is a great thought experiment, but there is not enough evidence to suggest that it is even remotely possible.

What role will AI play in climate change?

AI will become an indispensable tool for helping to predict the impacts of climate change in the future. The field of “Climate Informatics” is already blossoming, harnessing AI to fundamentally transform weather forecasting (including the prediction of extreme events) and to improve our understanding of the effects of climate change. Much more thought and research needs to be devoted to exploring the linkages between the technology revolution and other important global trends, including demographic changes such as ageing and migration, climate change, and sustainable development, but AI should make a real difference in enhancing our general understanding of the impacts of these, and other, phenomena going forward.

Russell Whitehouse is Executive Editor at IntPolicyDigest. He’s also a freelance social media manager/producer, 2016 Iowa Caucus volunteer and a policy essayist.


Science & Technology

First Quantum Computing Guidelines Launched as Investment Booms


National governments have invested over $25 billion in quantum computing research, and over $1 billion in venture capital deals closed in the past year – more than in the past three years combined. Quantum computing promises to disrupt the future of business, science, government, and society itself, but an equitable framework is crucial to address future risks.

A new Insight Report released today at the World Economic Forum Annual Meeting 2022 provides a roadmap for these emerging opportunities across public and private sectors. The principles have been co-designed by a global multistakeholder community composed of quantum experts, emerging technology ethics and law experts, decision makers and policy makers, social scientists and academics.

“The critical opportunity at the dawn of this historic transformation is to address ethical, societal and legal concerns well before commercialization,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. “This report represents an early intervention and the beginning of a multi-disciplinary, global conversation that will guide the development of quantum computing to the benefit of all society.”

“Quantum computing holds the potential to help solve some of society’s greatest challenges, and IBM has been at the forefront of bringing quantum hardware and software to communities of discovery worldwide,” said Dr. Heike Riel, IBM Fellow, Head of Science and Technology and Lead, Quantum, IBM Research Europe. “This report is a key step in initiating the discussion around how quantum computing should be shaped and governed, for the benefit of all.”

Professor Bronwyn Fox, Chief Scientist at CSIRO, Australia’s national science agency, said: “The Principles reflect conversations CSIRO’s scientists have had with partners from around the world who share an ambition for a responsible quantum future. Embedding responsible innovation in quantum computing is key to its successful deployment and uptake for generations to come. CSIRO is committed to ensuring these Principles are used to support a strong quantum industry in Australia and generate significant social and public good.”

In adapting to the coming hybrid model of classical, multi-cloud, and soon quantum computing, the Forum’s framework establishes best-practice principles and core values. These guidelines set the foundation for a new information-processing paradigm while ensuring stakeholder equity, risk mitigation, and consumer benefit.

The governance principles are grouped into nine themes and underpinned by a set of seven core values. The themes, and the respective goals defining the principles, are:

1. Transformative capabilities: Harness the transformative capabilities of this technology and the applications for the good of humanity while managing the risks appropriately.

2. Access to hardware infrastructure: Ensure wide access to quantum computing hardware.

3. Open innovation: Encourage collaboration and a precompetitive environment, enabling faster development of the technology and the realization of its applications.

4. Creating awareness: Ensure the general population and quantum computing stakeholders are aware, engaged and sufficiently informed to enable ongoing responsible dialogue and communication; stakeholders with oversight and authority should be able to make informed decisions about quantum computing in their respective domains.

5. Workforce development and capability-building: Build and sustain a quantum-ready workforce.

6. Cybersecurity: Ensure the transition to a quantum-secure digital world.

7. Privacy: Mitigate potential data-privacy violations through theft and processing by quantum computers.

8. Standardization: Promote standards and road-mapping mechanisms to accelerate the development of the technology.

9. Sustainability: Develop a sustainable future with, and for, quantum computing technology.

Quantum computing core values that hold across the themes and principles:

Common good: The transformative capabilities of quantum computing and its applications are harnessed to ensure they will be used to benefit humanity.

Accountability: Use of quantum computing in any context has mechanisms in place to ensure human accountability, both in its design and in its uses and outcomes. All stakeholders in the quantum computing community are responsible for ensuring that the intentional misuse of quantum computing for harmful purposes is not accepted or inadvertently positively sanctioned.

Inclusiveness: In the development of quantum computing, insofar as possible, a broad and truly diverse range of stakeholder perspectives are engaged in meaningful dialogue to avoid narrow definitions of what may be considered a harmful or beneficial use of the technology.

Equitability: Quantum computing developers and users ensure that the technology is equitable by design, and that quantum computing-based technologies are fairly and evenly distributed insofar as possible. Particular consideration is given to any specific needs of vulnerable populations to ensure equitability.

Non-maleficence: All stakeholders use quantum computing in a safe, ethical and responsible manner. Furthermore, all stakeholders ensure quantum computing does not put humans at risk of harm, either in the intended or unintended outcomes of its use, and that it is not used for nefarious purposes.

Accessibility: Quantum computing technology and knowledge are actively made widely accessible. This includes the development, deployment and use of the technology. The aim is to cultivate a general ability among the population, societal actors, corporations and governments to understand the main principles of quantum computing, the ways in which it differs from classical computing and the potential it brings.

Transparency: Users, developers and regulators are transparent about their purpose and intentions with regard to quantum computing.

“Governments and industries are accelerating their investments in quantum computing research and development worldwide,” said Derek O’Halloran, Head of Digital Economy, World Economic Forum. “This report starts the conversation that will help us understand the opportunities, set the premise for ethical guidelines, and pre-empt socioeconomic, political and legal risks well ahead of global deployment.”

The Quantum Computing Governance Principles is an initiative of the World Economic Forum’s Quantum Computing Network, a multi-stakeholder initiative focused on accelerating responsible quantum computing.

Next steps for the Quantum Computing Governance Initiative will be to work with wider stakeholder groups to adopt these principles as part of broader governance frameworks and policy approaches. With this framework, business and investment communities, along with policy makers and academia, will be better equipped to adapt to the coming paradigm shift. Ultimately, everyone will be better prepared to harness the transformative capabilities of the quantum sciences, perhaps the most exciting emergent technology of the 21st century.


Science & Technology

Closing the Cyber Gap: Business and Security Leaders at Crossroads as Cybercrime Spikes


The global digital economy has surged off the back of the COVID-19 pandemic, but so has cybercrime – ransomware attacks rose 151% in 2021. There were on average 270 cyberattacks per organization during 2021, a 31% increase on 2020, with each successful cyber breach costing a company $3.6m. After a breach becomes public, the average share price of the hacked company underperforms the NASDAQ by 3%, even six months after the event.

According to the World Economic Forum’s new annual report, The Global Cybersecurity Outlook 2022, 80% of cyber leaders now consider ransomware a “danger” and “threat” to public safety and there is a large perception gap between business executives who think their companies are secure and security leaders who disagree.

While some 92% of business executives surveyed agree that cyber resilience is integrated into enterprise risk-management strategies, only 55% of cyber leaders surveyed agree. This gap between leaders can leave firms vulnerable to attacks as a direct result of incongruous security priorities and policies.

Even after a threat is detected, our survey, written in collaboration with Accenture, found nearly two-thirds of respondents would find it challenging to respond to a cybersecurity incident owing to the shortage of skills within their team. Perhaps even more troubling is the finding that companies need 280 days on average to identify and respond to a cyberattack. To put this into perspective, an incident which occurs on 1 January may not be fully contained until 8 October.
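The 1 January to 8 October figure checks out with simple date arithmetic (the year is arbitrary; any non-leap year gives the same result):

```python
# Verifying the report's arithmetic: an incident on 1 January plus the
# average 280-day identification-and-response window.
from datetime import date, timedelta

start = date(2022, 1, 1)               # example non-leap year
contained = start + timedelta(days=280)
print(contained)  # 2022-10-08, i.e. 8 October
```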

“Companies must now embrace cyber resilience – not only defending against cyberattacks but also preparing for swift and timely incident response and recovery when an attack does occur,” said Jeremy Jurgens, Managing Director at the World Economic Forum.

“Organizations need to work more closely with ecosystem partners and other third parties to make cybersecurity part of an organization’s ecosystem DNA, so they can be resilient and promote customer trust,” said Julie Sweet, Chair and CEO, Accenture. “This report underscores key challenges leaders face – collaborating with ecosystem partners and retaining and recruiting talent. We are proud to work with the World Economic Forum on this important topic because cybersecurity impacts every organization at all levels.”

Chief Cybersecurity Officers kept up at night by three things

Less than one-fifth of cyber leaders feel confident their organizations are cyber resilient. Three major concerns keep them awake at night:

– They don’t feel consulted on business decisions, and they struggle to gain the support of decision-makers in prioritizing cyber risks – 7 in 10 see cyber resilience featuring prominently in corporate risk management

– Recruiting and retaining the right talent is their greatest concern – 6 in 10 think it would be challenging to respond to a cybersecurity incident because they lack the skills within their team

– Nearly 9 in 10 see SMEs as the weakest link in the supply chain – 40% of respondents have been negatively affected by a supply chain cybersecurity incident

Training and closing the cyber gap are key solutions

Solutions include employee cyber training, offline backups, cyber insurance and platform-based cybersecurity solutions that stop known ransomware threats across all attack vectors.

Above all, there is an urgent need to close the gap of understanding between business and security leaders. It is impossible to attain complete cybersecurity, so the key objective must be to reinforce cyber resilience.

Including cyber leaders in the corporate governance process will help close this gap.


Science & Technology

Ethical aspects relating to cyberspace: Self-regulation and codes of conduct


Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.

In practice, however, the various attempts to legislate on virtual communication face great difficulties due to the imperfection of modern information law. Moreover, because the Internet community is based on an internal “anarchist” ideology, it shows significant resistance to government regulation, believing that in a cross-border environment such as the global network, the only effective regulator can be voluntarily and consciously accepted network ethics, based on each individual’s awareness of his or her moral responsibility for what happens in cyberspace.

At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by other regulatory provisions at the political, legal, technical or economic levels. It is also up to ethics to check the meaning, lawfulness and legitimacy of the remaining regulatory means. The legal provisions themselves, supported by the force of state influence, are developed or, at least ideally, should be implemented on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case, at least until an “ideal” legislation is devised that does not contradict morality in any way. Therefore, an ethical justification and an equal scrutiny of legislative and disciplinary acts in relation to both IT and computer technology are necessary.

In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law is based on human information rights. Although these rights are enshrined in various national and international legal instruments, in practice their protection is often not guaranteed by anyone. This enables several state structures to introduce various restrictions on information, justifying them with noble aims such as the need to protect national security.

It should be stressed that information legislation (like any other in general) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of the various social groups. Therefore, there are no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and the attempts to restrict this freedom in one way or another, on the other.

Therefore, several subjects have extremely contradictory requirements with regard to modern information law, which are not so easy to reconcile. Information law should simultaneously protect the right to free reception of information and the right to information security, as well as ensure privacy and prevent cybercrime. It should also promote the public accessibility of the information created while protecting copyright, even if this impinges on the universal principle of knowledge sharing.

The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.

Various national and international public organisations, professionals and voluntary users’ associations define their own operating principles for the virtual environment. These principles are very often formalised in codes of conduct aimed at minimising the potentially dangerous moral and social consequences of the use of information technologies, and thus at achieving a certain degree of autonomy for the web community, at least when it comes to purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. After all, they do not have the status of law, which means that they cannot serve as a basis for imposing disciplinary, administrative or any other liability measures on offenders. They are therefore observed by the community members who have adopted them solely out of goodwill, as a result of free expression based on recognition and sharing of the values and rules enshrined in them. These codes thus act as one of the moral self-regulating mechanisms of the web community.

The cyberspace codes of ethics provide the basic moral guidelines that should govern information activities. They specify the principles of general theoretical ethics as reflected in a virtual environment. They contain criteria for recognising a given act as ethical or unethical. Finally, they provide specific recommendations on how to behave in certain situations. The rules enshrined in the codes of ethics, in the form of provisions, authorisations, bans and so on, represent in many respects the formalisation and systematisation of unwritten rules and requirements that have developed spontaneously in the process of virtual interaction over the last thirty years of the Internet.

At the same time, the provisions of codes of ethics must be thoroughly considered and judged. By their very nature, codes of ethics are conventional, and hence always the result of a mutual agreement among the relevant members of a given social group; otherwise they are simply reduced to a formal, sectorial statement, divorced from life and binding on no one.

Despite their multidirectionality, due to the variety of the net’s functions and the heterogeneity of its audience, a comparison of the most significant codes of ethics on the Internet shows a number of common principles. Apparently these principles are, in one way or another, shared by all members of the Internet community, which means that they underpin the ethos of cyberspace. They include the principles of accessibility, confidentiality and quality of information; the principle of inviolability of intellectual property; the principle of no harm; and the principle of limiting the excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics (“PAPA”: Privacy, Accuracy, Property and Accessibility) formulated by Richard Mason in his article “Four Ethical Issues of the Information Age” (MIS Quarterly, March 1986).

The presence of a very well-written code of ethics obviously cannot ensure that all group members will act in accordance with it, because, for a person, the most reliable guarantees against unethical behaviour are his or her conscience and sense of duty, which are not always respected. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively from one another. The codes of ethics, however, perform a number of extremely important functions on the Internet. Firstly, they can induce Internet users to moral reflection by instilling the idea of the need to evaluate their actions accordingly (in this case, it is not so much the ready-made code that is useful as the very experience of its development and discussion). Secondly, they can form a healthy public opinion in the virtual environment, and provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law, adapted to the realities of the electronic age.
