
GDPR Clock Is Ticking for US Companies as Well: Top 7 Tips to Get Ready

Jasna Čošabić, PhD


The General Data Protection Regulation becomes applicable on 25 May 2018. Its long-arm territorial reach imposes obligations not only on EU establishments but on US-based companies as well; global connection through the internet makes such broad application especially likely, and it will affect US businesses.

One of the prerequisites for the safe transfer of data between the EU and the US has already been met through the EU-US Privacy Shield agreement, which the European Commission considers to provide adequate guarantees for data transfers. Under the Privacy Shield scheme, companies may self-certify and adhere to the principles stated in it. Yet fewer than 3,000 US companies currently participate in the Privacy Shield, and the GDPR's safeguards still have to be followed. Below, we look at some of the most important aspects of GDPR compliance for US (non-EU) based companies.

Data protection officer

Although the GDPR does not make it obligatory in every case, it is advisable that a company appoint a data protection officer ('DPO') or designate that role to a specific position in the company. A DPO can also be appointed externally. There may be a single DPO for several companies, or several persons designated with the DPO role in one company. The position need not necessarily carry that title; it may be a privacy officer, compliance officer, etc. Such a person should possess expert knowledge of the GDPR and data privacy, and may have a legal, technical or similar background; the GDPR is not specific about the requirements for that person beyond possessing expert knowledge. The role of the DPO is to inform, monitor and advise the controller, processor or employees, to cooperate with the supervisory authority, to provide training for staff, and to help perform the data protection impact assessment.

Data Protection Impact Assessment

A further step that companies affected by the GDPR, including US companies, should take in order to evaluate the risk of a data breach is to perform a data protection impact assessment ('DPIA'). A DPIA is a thorough overview of the company's processes and can be done with the help of the data protection officer. It may take the form of a template with a series of questions that have to be answered for each processing activity. The DPIA has to be detailed and cover all operations in the company. Its function is to anticipate situations in which data breaches may occur in the processing of personal data. Pursuant to Article 35 of the GDPR, a DPIA should contain a systematic description of the envisaged processing operations and the purposes of the processing, an assessment of the necessity and proportionality of the processing operations in relation to the purposes, an assessment of the risks to the rights and freedoms of data subjects, and the measures envisaged to address those risks, including safeguards and security measures. A DPIA is a very useful way of demonstrating compliance, and it is also a tool that helps the company itself, first of all, to gain an overview of its processing activities and an indication of where a breach could happen. A minimal sketch of such a template is shown below.
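As an illustration only, the record-keeping behind a DPIA can be sketched as a simple data structure: one entry per processing activity, with fields mirroring the Article 35 elements listed above. The field names and the risk scale below are assumptions made for the example, not terms prescribed by the GDPR.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessingActivity:
    """One DPIA entry per processing activity (illustrative field names)."""
    name: str                       # e.g. "Newsletter sign-up"
    description: str                # systematic description of the processing
    purposes: List[str]             # purposes of the processing
    necessity_and_proportionality: str
    risks_to_data_subjects: List[str]
    risk_level: str                 # assumed scale: "low" / "medium" / "high"
    mitigating_measures: List[str]  # safeguards and security measures

def high_risk_activities(dpia: List[ProcessingActivity]) -> List[ProcessingActivity]:
    """Flag the activities that most need attention before processing starts."""
    return [a for a in dpia if a.risk_level == "high"]
```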

EU representative

A US company (non-EU based company) has to appoint an EU representative if its business relates to offering goods or services to natural persons in the EU, including free goods or services, or when the processing is related to monitoring the behaviour of data subjects in the EU. Such monitoring may include tracking the internet activity of data subjects in order to evaluate or predict their personal preferences, behaviours and attitudes. An EU representative is not obligatory when the processing is occasional, does not include large-scale processing of special categories of data such as genetic data, biometric data, data concerning health, ethnic origin, political opinions, etc., and is unlikely to result in a risk to the rights and freedoms of natural persons. However, given that the exceptions from the duty to designate an EU representative are rather vague, in most cases companies whose operations towards persons in the EU are not negligible will have to appoint a representative. The representative should be located in one of the EU Member States where the data subjects are. The representative performs its tasks according to the mandate received from the controller or processor, including cooperating with the competent supervisory authorities regarding any action taken to ensure compliance with the Regulation, and is also liable and subject to enforcement in the event of non-compliance.

Consent matters

The GDPR revolves around one key word when it comes to respecting privacy: consent. If companies wish to process the data of natural persons who are in the EU, they must first obtain consent to do so. Consent must be freely given, informed, specific and unambiguous.

Freely given consent presupposes that the data subject must not feel pressured or urged to consent, or be subjected to non-negotiable terms. Consent is not considered freely given if the data subject has no genuine or free choice. The data subject must not feel reluctant to refuse consent for fear that such refusal will have a detrimental effect on him or her. If the consent is pre-formulated by the controller, which is usually the case, the language of the consent must be clear, plain and easily understandable for the data subject. Further, if there are several purposes for the processing of certain data, consent must be given for every purpose separately. Consent must be specific, not abstract or vague. Silence, pre-ticked boxes or inactivity is not to be considered consent under the GDPR.

Informed consent means that the data subject must know what the consent is for. He or she must be informed about what the consent entails, and there must not be any unknown or undetermined issues. It is the duty of the controller to inform the data subject about the scope and purpose of consent, and such information must be in clear and plain language. One must be careful, however: in today's world of fast-moving technologies, where a person faces a flood of consent requests in a short period of time, 'click fatigue' may occur [1], with people no longer reading the information about the consent and clicking routinely without any real thought. Controllers therefore have to design the consent form, by technical means, in such a way that the person actually reads and understands what he or she is consenting to. This could be a combination of yes and no questions, changing the placement of tick boxes, visually appealing text accompanying the consent, and so on.

Consent must be unambiguous, or clearly given. There must be no room for interpretation as to whether consent was given for a certain purpose or not. As to the form of consent, it may be given by ticking a box, choosing technical settings and the like (Recital 32 GDPR). A minimal sketch of how such consent might be recorded is shown below.
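To make the above concrete, here is a minimal sketch, in illustrative Python, of how per-purpose consent might be recorded: each purpose gets its own explicit opt-in, nothing is pre-ticked, and the record keeps enough detail to show later that consent was actually given. The structure and field names are assumptions for the example, not a format prescribed by the GDPR.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Consent captured separately for each processing purpose."""
    subject_id: str
    purpose: str            # e.g. "newsletter", "analytics" -- one record per purpose
    granted: bool           # result of an explicit choice; never defaulted to True
    info_text_version: str  # which plain-language notice the subject saw
    timestamp: datetime

def record_consent(subject_id: str, purpose: str, box_ticked: bool,
                   info_text_version: str) -> ConsentRecord:
    # Silence, pre-ticked boxes or inactivity do not count: only an
    # affirmative action by the data subject produces granted=True.
    return ConsentRecord(
        subject_id=subject_id,
        purpose=purpose,
        granted=bool(box_ticked),
        info_text_version=info_text_version,
        timestamp=datetime.now(timezone.utc),
    )
```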

The data subject gives consent for the processing of his or her personal data. However, companies have to bear in mind that the concept of personal data in the EU is understood broadly: it covers all personally identifiable information (PII), ranging from obvious data such as name and postal address to less obvious data that is still covered by the GDPR, such as the IP address [2]. In the US, by contrast, the IP address is not so clearly regarded as PII. In that respect, US-based companies will have to apply the stricter, broader EU standard.

Privacy by design implemented

Privacy by design is a concept that brings together legal requirements and technical measures; it is a smooth way of incorporating the law into the technical structure of a business. Privacy by design, if applied properly at the outset, ensures compliance with the GDPR's requirements. It points to the principles of data minimisation, under which only the data that is necessary should be processed, and storage limitation, which calls for a periodic review of storage and the automatic erasure of data that is no longer necessary. A sketch of such a retention check is shown below.
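As a rough illustration only, storage limitation can be automated with a periodic job that deletes records older than the retention period defined for each purpose. The retention periods, field names and function name below are assumptions for the sketch; actual periods depend on the company's lawful bases and record types.

```python
from datetime import datetime, timedelta, timezone

# Assumed retention periods per processing purpose (illustrative values only).
RETENTION = {
    "newsletter": timedelta(days=365),
    "support_tickets": timedelta(days=730),
}

def purge_expired(records):
    """Keep only records still within their purpose's retention period."""
    now = datetime.now(timezone.utc)
    kept = []
    for rec in records:                      # rec: dict with "purpose" and "created_at"
        limit = RETENTION.get(rec["purpose"])
        if limit is None or now - rec["created_at"] <= limit:
            kept.append(rec)                 # still needed, or no policy defined yet
    return kept                              # anything dropped here should be erased
```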

One way of showing compliance through privacy by design is 'pseudonymisation'. Pseudonymisation is defined in the GDPR as the processing of personal data in such a manner that the data can no longer be attributed to a specific data subject without the use of additional information. That additional information must be kept separately, so that it cannot be connected to an identified or identifiable natural person. Pseudonymisation is not anonymisation and should not be confused with it. Anonymisation is a technique that results in irreversible de-identification; since it completely prevents identification, anonymised data is not subject to data protection under the GDPR. Pseudonymisation only reduces the linkability of a dataset with the original identity of a data subject, and is accordingly a useful security measure [3]. A minimal illustration follows below.
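Here is a minimal sketch of pseudonymisation, assuming a keyed hash (HMAC): the direct identifier is replaced by a pseudonym, and the key, the 'additional information' needed to link pseudonyms back to people, is stored separately from the dataset. This is one common technique, not the only one the GDPR allows.

```python
import hashlib
import hmac

# The secret key is the "additional information": keep it separately
# (e.g. in a key-management system), never alongside the pseudonymised data.
SECRET_KEY = b"stored-elsewhere"  # placeholder for the example

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e-mail, customer ID, IP) with a pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchase": "book"}
pseudonymised = {"subject": pseudonymise(record["email"]), "purchase": record["purchase"]}
# Without the separately stored key, "subject" cannot be attributed to Jane;
# with the key, the controller can still re-link records where lawfully needed.
```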

Binding corporate rules

Binding corporate rules ('BCR') comprise a set of principles, procedures and personal data protection policies, together with a binding clause, adopted by the company and approved by the competent supervisory authority. Adopting binding corporate rules is not a simple process, but it puts the company on a safe track; it is one of the safeguards envisaged by the GDPR. According to Article 47 of the GDPR, BCR should include the structure and contact details of the company; the categories of personal data, the type of processing and its purposes; the application of general data protection principles (such as purpose limitation, data minimisation, limited storage periods, data quality, data protection by design and by default, the legal basis for processing, and the processing of special categories of personal data); the rights of data subjects; the tasks of the data protection officer; complaint procedures; mechanisms for reporting to the competent supervisory authority; appropriate data protection training for personnel; and an indication that the BCR are legally binding. The BCR should additionally be accompanied by privacy policies, guidelines for employees, a data protection audit plan, examples of the training programme, a description of the internal complaint system, a security policy, a certification process to make sure that all new IT applications processing data are compliant with the BCR, and job descriptions of the data protection officers or other persons in charge of data protection in the company.

Make your compliance visible

If your company has done all of the above, it has to make it visible. Companies that are covered by the GDPR not only have to comply, they have to show that they comply: the GDPR places an obligation on controllers to demonstrate their compliance.

From the first contact with the controller, the website must convey compliance. The BCR, privacy policies and the DPO's contact details must be visible so that the data subject can contact the DPO in the event of a data risk or breach. The EU representative's name and contact details must be published so that they are accessible to the supervisory authority in the EU. A contact form for data subjects, with options for access, objection, erasure, rectification and restriction, should be there (a minimal sketch of handling such requests follows below), as should an organisational chart of the company and a data flow map showing how data is transferred. These are only some of the most important features that have to be in place.
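As a small illustration of the contact form mentioned above, each GDPR right can be mapped to a request type that the company then tracks and answers. The request types, field names and function below are assumptions for the sketch, not a mandated format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Rights a data subject can invoke through the contact form (Articles 15-21 GDPR).
REQUEST_TYPES = {"access", "rectification", "erasure", "restriction", "objection"}

@dataclass
class SubjectRequest:
    subject_email: str
    request_type: str
    details: str
    received_at: datetime

def submit_request(subject_email: str, request_type: str, details: str) -> SubjectRequest:
    if request_type not in REQUEST_TYPES:
        raise ValueError(f"Unknown request type: {request_type}")
    # The controller must respond without undue delay, and in any event
    # within one month (Article 12(3) GDPR), so the timestamp matters.
    return SubjectRequest(subject_email, request_type, details,
                          datetime.now(timezone.utc))
```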

Non-compliance is a very costly adventure, and one that businesses will try to avoid. With systematic planning, a careful analysis of the need to comply with the GDPR, and clearly defined processes, US companies can create real benefits for their business and encourage data subjects in the EU to freely entrust their data to them. It is a thorough process, but one worth completing.

[1] Article 29 Working Party, Guidelines on Consent, p. 17.

[2] Judgment of the Court of Justice of the EU of 19 October 2016 in Case C-582/14.

[3] Article 29 Data Protection Working Party, Opinion 05/2014 on Anonymisation Techniques, adopted on 10 April 2014, p. 3.


Asia Needs a Region-Wide Approach to Harness Fintech’s Full Potential

MD Staff


The importance of a region-wide approach to harnessing the full potential of fintech was emphasized at the High-Level Policy Dialogue: Regional Cooperation to Support Innovation, Inclusion and Stability in Asia, held on 11 October in Bali, Indonesia. Photo: ADB

Asia’s policy makers should strengthen cooperation to harness the potential of new financial technologies for inclusive growth. At the same time, they should work together to ensure they can respond better to the challenges posed by fintech.

New technologies such as mobile banking, big data, and peer-to-peer transfer networks are already extending the reach of financial services to those who were previously unbanked or out of reach, boosting incomes and living standards. Yet fintech also carries risks of cyber fraud and breaches of data security and privacy. Disintermediation of fintech services or concentration of services among a few providers could also pose a risk to financial stability.

These and other issues were discussed at the High-Level Policy Dialogue on Regional Cooperation to Support Innovation, Inclusion, and Stability in Asia, organized by the Asian Development Bank (ADB), Bank Indonesia, and the ASEAN+3 Macroeconomic Research Office (AMRO).

The panel comprised Ms. Neav Chanthana, Deputy Governor of the National Bank of Cambodia; Mr. Diwa Guinigundo, Deputy Governor of Bangko Sentral ng Pilipinas; Ms. Mary Ellen Iskenderian, President and Chief Executive Officer of Women’s World Banking; Mr. Ravi Menon, Managing Director of the Monetary Authority of Singapore; Mr. Takehiko Nakao, President of ADB; Mr. Abdul Rasheed, Deputy Governor, Bank Negara Malaysia, and Mr. Veerathai Santiprabhob, Governor of the Bank of Thailand. Mr. Mirza Adityaswara, Senior Deputy Governor of Bank Indonesia, gave the opening remarks at the conference and Ms. Junhong Chang, Director of AMRO, gave the welcome remarks.

“Rapidly spreading new financial technologies hold huge promise for financial inclusion,” said Mr. Nakao. “We must foster an enabling environment for the technologies to flourish and strengthen regional cooperation to build harmonized regulatory standards and surveillance systems to prevent international money laundering, terrorism financing, and cybercrimes.”

“Technology is an enabler that weaves our economies and financial systems together, transmitting benefits but also risks across borders,” said Ms. Chang. “Given East Asia’s rapid economic growth, understanding and managing the impact of technology in our financial systems is essential for policymakers to maintain financial stability.”

“Asia, including Indonesia, is an ideal place for fintech to flourish,” said Mr. Adityaswara. “In Indonesia’s case, there are more than a quarter of a billion people living on thousands of islands, waiting to be integrated with the new technology; young people eager to enter the future digital world; more than fifty million small and medium-sized enterprises which can’t wait to get on board with e-commerce; a new society driven by a dynamic, democratic middle class which views the digital economy as something as inevitable as evolution.”

Despite Asia’s high economic growth in recent years, the financial sector is still under-developed in some countries. Fewer than 27% of adults in developing Asia have a bank account, well below the global median of 38%. Meanwhile, just 84% of firms have a checking or savings account, on a par with Africa but below Latin America’s 89% and emerging Europe’s 92%.

Financial inclusion could be increased through policies to promote financial innovation, by boosting financial literacy, and by expanding and upgrading digital infrastructure and networks. Regulations to prevent illegal activities, enhance cyber security, and protect consumers’ rights and privacy would also build confidence in new financial technologies.


Cutting-edge tech a ‘double-edged sword for developing countries’

MD Staff


The latest technological advances, from artificial intelligence to electric cars, can be a “double-edged sword”, says the latest UN World Economic and Social Survey (WESS 2018), released on Monday.

The overriding message of the report is that appropriate, effective policies are essential if so-called “frontier technologies” are to change the world for the better, helping us to achieve the Sustainable Development Goals (SDGs) and address climate change; without good policy, they risk exacerbating existing inequality.

Amongst several positive indicators, WESS 2018 found that the energy sector is becoming more sustainable, with renewable energy technology and efficient energy storage systems giving countries the opportunity to “leapfrog” existing, often fossil fuel-based solutions.

The wellbeing of the most vulnerable is being enhanced through greater access to medicines, and millions in developing countries now have access to low-cost financial services via their mobile phones.

Referring to the report, UN Secretary-General António Guterres said that “good health and longevity, prosperity for all and environmental sustainability are within our reach if we harness the full power of these innovations.”

However, the UN chief warned of the importance of properly managing the use of new technologies, to ensure there is a net benefit to society: the report demonstrates that unmanaged implementation of developments such as artificial intelligence and automation can improve efficiency but also destroy quality jobs.

“Clearly, we need policies that can ensure frontier technologies are not only commercially viable but also equitable and ethical. This will require a rigorous, objective and transparent ongoing assessment, involving all stakeholders,” Mr. Guterres added.

The Survey says that proactive and effective policies can help countries to avoid pitfalls and minimize the economic and social costs of technology-related disruption. It calls for regulation and institutions that promote innovation, and the use of new technologies for sustainable development.

With digital technology frequently crossing borders, international cooperation, the Survey shows, is needed to bring about harmonized standards and greater flexibility in the area of intellectual property rights, and to ensure that the market does not remain dominated by a tiny number of extremely powerful companies.

Here, the UN has a vital role to play, by providing an objective assessment of the impact that emerging technologies have on sustainable development outcomes – including their effects on employment, wages and income distribution – and bringing together people, business and organizations from across the world to build strong consensus-led agreements.


Our Trust Deficit with Artificial Intelligence Has Only Just Started

Eleonore Pauwels


“We suffer from a bad case of trust-deficit disorder,” said UN Secretary-General António Guterres in his recent General Assembly speech. His diagnosis is right, and his focus on new technological developments underscores their crucial role shaping the future global political order. Indeed, artificial intelligence (AI) is poised to deepen the trust-deficit across the world.

The Secretary-General, echoing his recently released Strategy on New Technologies, repeatedly referenced rapidly developing fields of technology in his speech, rightly calling for greater cooperation between countries and among stakeholders, as well as for more diversity in the technology sector. His trust-deficit diagnosis reflects the urgent need to build a new social license and develop incentives to ensure that technological innovation, in particular AI, is deployed safely and aligned with the public interest.

However, AI-driven technologies do not easily fit into today’s models of international cooperation, and will in fact tend to undermine rather than enforce global governance mechanisms. Looking at three trends in AI, the UN faces an enormous set of interrelated challenges.

AI and Reality

First, AI is a potentially dominating technology whose powerful implications, both positive and negative, will be increasingly difficult to isolate and contain. Engineers design learning algorithms with a specific set of predictive and optimizing functions that can be used either to empower or to control populations. Without sophisticated fail-safe protocols, the potential for misuse or weaponization of AI is pervasive and can be difficult to anticipate.

Take Deepfake as an example. Sophisticated AI programs can now manipulate sounds, images and videos, creating impersonations that are often impossible to distinguish from the original. Deep-learning algorithms can, with surprising accuracy, read human lips, synthesize speech, and to some extent simulate facial expressions. Once released outside the lab, such simulations could easily be misused, with wide-ranging impacts (indeed, this is already happening at a low level). On the eve of an election, Deepfake videos could falsely portray public officials as being involved in money-laundering or human rights abuses; public panic could be sown by videos warning of non-existent epidemics or cyberattacks; forged incidents could potentially lead to international escalation.

The capacity of a range of actors to influence public opinion with misleading simulations could have powerful long-term implications for the UN’s role in peace and security. By eroding the sense of trust and truth between citizens and the state—and indeed amongst states—truly fake news could be deeply corrosive to our global governance system.

AI Reading Us

Second, AI is already connecting and converging with a range of other technologies—including biotech—with significant implications for global security. AI systems around the world are trained to predict various aspects of our daily lives by making sense of massive data sets, such as cities’ traffic patterns, financial markets, consumer behaviour trend data, health records and even our genomes.

These AI technologies are increasingly able to harness our behavioural and biological data in innovative and often manipulative ways, with implications for all of us. For example, the My Friend Cayla smart doll sends voice and emotion data of the children who play with it to the cloud, which led to a US Federal Trade Commission complaint and its ban in Germany. In the US, emotional analysis is already being used in the courtroom to detect remorse in deposition videos. It could soon be part of job interviews to assess candidates’ responses and their fitness for a job.

The ability of AI to intrude upon—and potentially control—private human behaviour has direct implications for the UN’s human rights agenda. New forms of social and bio-control could in fact require a reimagining of the framework currently in place to monitor and implement the Universal Declaration of Human Rights, and will certainly require the multilateral system to better anticipate and understand this quickly emerging field.

AI as a Conflict Theatre

Finally, the ability of AI-driven technologies to influence large populations is of such immediate and overriding value that it is almost certain to be the theatre for future conflicts. There is a very real prospect of a “cyber race” in which powerful nations and large technology platforms enter into open competition for our collective data as the fuel to generate economic, medical and security supremacy across the globe. Forms of “cyber-colonization” are increasingly likely, as powerful states are able to harness AI and biotech together to understand and potentially control other countries’ populations and ecosystems.

Towards Global Governance of AI

Politically, legally and ethically, our societies are not prepared for the deployment of AI. The UN, established many decades before the emergence of these technologies, is in many ways poorly placed to develop the kind of responsible governance that will channel AI’s potential away from these risks and towards our collective safety and wellbeing. In fact, the resurgence of nationalist agendas across the world may point to a dwindling capacity of the multilateral system to play a meaningful role in the global governance of AI. Major corporations and powerful member states may see little value in bringing multilateral approaches to bear on what they consider lucrative and proprietary technologies.

There are, however, some important ways in which the UN can help build the kind of collaborative, transparent networks that may begin to treat our “trust-deficit disorder.” The Secretary-General’s recently launched High-Level Panel on Digital Cooperation is already working to build a collaborative partnership with the private sector and establish a common approach to new technologies. Such an initiative could eventually find ways to reward cooperation over competition, and to put in place common commitments to using AI-driven technologies for the public good.

Perhaps the most important challenge for the UN in this context is one of relevance, of re-establishing a sense of trust in the multilateral system. But if the above trends tell us anything, it is that AI-driven technologies are an issue for every individual and every state, and that without collective, collaborative forms of governance, there is a real risk that they will become a force that undermines global stability.
