
How Artificial Intelligence Uses Social Media Data to Understand Humanity

As artificial intelligence develops, pairing AI with almost any kind of big data seems to promise greater efficiency and better results. It seems just as natural to assume that combining AI with social media data can only do harm.

We are used to expressing our emotions, interests, and hobbies on social media, and we often reveal personal information without realizing it. As ordinary users, even though we know these are public spaces, we rarely feel that our data is being leaked. After all, most platforms rank content by popularity, so an average user's everyday posts are unlikely to be deliberately sought out by strangers.

But the computing power and data analysis capabilities that AI brings change everything. Once the data has been crawled, AI applied to social media can reveal not just one person's reactions, emotions, and opinions, but patterns across an entire group or even an entire population. The string of data scandals that nearly brought Facebook to its knees showed just how much scattered personal information AI can piece together.

Social media data is not all dark, however. A growing number of neuroscience and psychology studies have already begun using AI and social media data to let machines probe human nature.

"Twitter big data tells us that people around the world are the same"

Recently, researchers at the University of Bristol used machine learning to analyze 800 million tweets collected over four years from 57 UK cities, and reached a conclusion that confirms something we have long suspected: people's mood is generally high in the morning and low at night.

The analysis went roughly as follows. The research team sampled tweets through the Twitter search API, collecting 800 million of them. Hashtags, emojis, holiday greetings, and the like were cleaned away, and the remaining words were tagged using psychometric categories.

For example: emotion (positive versus negative); temporal orientation (focus on the present, the past, or the future); and personal concerns (work, family, money, society, religion, and so on).

By building machine-learning models strictly around these psychologically grounded dimensions, research on social media data becomes far more specialized than simply running NLP sentiment analysis over the text.
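
As a rough illustration, a cleaning-and-tagging step of this kind could be sketched as follows. The word lists, category names, and helper functions here are illustrative assumptions, not the Bristol team's actual lexicon or code; real studies rely on validated psychometric dictionaries.

```python
import re
from collections import Counter

# Illustrative (not the study's actual) psychometric word categories,
# loosely in the style of LIWC-like lexicons.
CATEGORIES = {
    "positive_emotion": {"happy", "great", "love", "good", "enjoy"},
    "negative_emotion": {"sad", "angry", "hate", "tired", "worried"},
    "present_focus": {"now", "today", "currently"},
    "future_focus": {"tomorrow", "will", "soon", "plan"},
    "religion": {"pray", "god", "church", "faith"},
    "work": {"job", "meeting", "deadline", "office"},
}

def clean_tweet(text: str) -> list:
    """Strip URLs, hashtags, mentions and emojis; return lowercase word tokens."""
    text = re.sub(r"https?://\S+|[#@]\w+", " ", text)   # URLs, #topics, @mentions
    text = re.sub(r"[^\w\s']", " ", text)                # emojis, punctuation
    return text.lower().split()

def tag_counts(text: str) -> Counter:
    """Count how many words in a tweet fall into each psychometric category."""
    counts = Counter()
    for word in clean_tweet(text):
        for category, vocab in CATEGORIES.items():
            if word in vocab:
                counts[category] += 1
    return counts

# Example: tag a single tweet; in a real study these counts would be
# aggregated by the hour of posting across millions of tweets.
print(tag_counts("Up early, great meeting today! #mondaymotivation"))
# -> positive_emotion, work, and present_focus each counted once
```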

The study's conclusion: over the course of 24 hours, it is not only our emotions that change but our patterns of thinking as well.

From around 5-6 in the morning, people enter their peak period of expression on social media; their emotional expression at this time is more positive, and their attention is focused more on their own situation. As the hours move on towards 7-9, emotions shift towards anger, although on non-working days the positive, relaxed state carries on.

During these hours, thinking tends towards an analytical, categorical mode: clearer, more direct and more logical, though also more prone to stereotyped judgements.

At night, emotional expression turns negative and attention shifts from the individual to society. As the hours pass towards 3-4 a.m., people focus increasingly on religion. In this period, thinking takes on an existential cast, reflecting confusion, anxiety, irrationality, and a greater willingness to engage and share.

In plain terms, the typical pattern is to wake up in the morning brimming with enthusiasm and self-confidence, energetically mapping out one's life; to slide by evening into a gloomier state, dwelling on the sorrows of every corner of the world and on people who sadden or move us; and, if sleep will not come, to start seeking solace in religion in the small hours. Doesn't this pattern look much the same whether you are Chinese or a foreigner?

When social media becomes psychology's research assistant, can a diagnosis be made from a selfie?

In fact, the cyclical nature of human emotions has long been established. Physiological factors such as neural fatigue and melatonin secretion mean that our mood passes through different states over the course of the day.

Although this social media big-data study only confirmed that variation once again, without uncovering new causes of mood change, it is the first to identify a relationship between mood cycles and patterns of thinking. There is in fact a good deal of research combining social media data and psychology, and it has produced plenty of interesting findings.

Last year, for example, the University of Pittsburgh surveyed the social media habits of people with depression and found that they spend, on average, far more time on social media than other people.

Harvard University research shows that people with depression prefer cool, faded, or black-and-white filters when they post photos on social media.
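
Such filter preferences can in principle be quantified directly from the photos themselves. Below is a minimal sketch, assuming the Pillow imaging library is available; the features and thresholds are illustrative assumptions, not the methods or cut-offs used in the Harvard research.

```python
# A hypothetical sketch of quantifying photo "mood" from color statistics.
# Cooler, darker, less saturated images score lower on each measure.
from PIL import Image
from statistics import mean

def photo_color_profile(path: str) -> dict:
    """Return average hue, saturation and brightness (0-255) for an image."""
    img = Image.open(path).convert("HSV")
    hsv_pixels = list(img.getdata())
    return {
        "hue": mean(p[0] for p in hsv_pixels),
        "saturation": mean(p[1] for p in hsv_pixels),  # low = faded/grayscale
        "brightness": mean(p[2] for p in hsv_pixels),  # low = dark filters
    }

# Example usage: compare a user's recent posts against a population baseline.
# profile = photo_color_profile("recent_post.jpg")
# if profile["saturation"] < 60 and profile["brightness"] < 100:
#     print("photos trend towards faded, dark tones")
```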

The ethical hurdle that cannot be cleared

For now, the role of social media data in psychology seems confined to academic research. Will we see it applied to psychological practice in our lifetime?

Social media data has at least the following potential applications in psychology:

1. As an aid in assessing mental state

Beyond the mental illnesses that produce obvious symptoms such as hallucinations or insomnia, many conditions, or milder degrees of them, are hard to detect objectively. Confirming them usually requires face-to-face consultation or self-reported psychological questionnaires, yet patients filling in questionnaires on their own may not reveal their true state. Here, the information people unwittingly reveal on social media can serve as supporting evidence.

2. Assessing the psychological state of a group

Compared with an individual's psychological problems, a more complicated situation arises when an entire group is affected, for example when a disaster or accident changes the mental state of everyone involved.

Suppose a suicide occurs at a company or school, or an entire region is hit by a serious natural disaster such as an earthquake or typhoon. There is rarely the capacity to counsel every person individually, nor any way to assess the psychological state of the group as a whole; at best, counseling is delivered collectively in group sessions.

By applying machine learning to social media data, however, the group's psychological response to the event can be seen clearly. The group's mental state can even be tracked over the long term, so that counseling can be offered selectively and in a targeted way.

HIT, for instance, has proposed a method that builds classifiers over college students' social media data to identify those at risk of depression.
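
To make the idea concrete, a screening pipeline of this kind might look roughly like the sketch below. The sample posts, labels, and threshold are entirely made up for illustration, scikit-learn is assumed as the modeling library, and this is not HIT's actual method.

```python
# A minimal, hypothetical sketch of a depression-risk screening classifier.
# The training data, features, and threshold are illustrative only; labels
# would normally come from validated questionnaires, not from guesswork.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each entry aggregates one (anonymized) user's recent posts.
posts = [
    "can't sleep again, everything feels pointless lately",
    "great day hiking with friends, feeling energized",
    "so tired of pretending i'm fine, nothing helps",
    "excited about the new semester and my project team",
]
labels = [1, 0, 1, 0]  # 1 = elevated risk, 0 = low risk (toy labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Screening, not diagnosis: flag users for human follow-up above a threshold.
new_user_posts = "haven't left my room in days, no point in trying"
risk = model.predict_proba([new_user_posts])[0, 1]
print(f"estimated risk score: {risk:.2f}")
if risk > 0.7:
    print("flag for follow-up by a counselor")
```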

None of the applications described above is technically difficult to achieve. Even if the results are not always perfectly accurate, the value they could offer to psychology, a labor-intensive field, is far from small.

The biggest question, though, is whether any of this is ethical. Should publicly posted social media data count as personal privacy? And if not, does the information extracted from it? Even someone with a mental illness has the right not to disclose their condition; is inferring citizens' mental health from their social media data a serious violation of that right? In particular, if the technology were deployed in colleges and universities, might some students, worried that the teachers and classmates around them have learned about their psychological state, end up in an even worse mental condition?

To some extent, in fact, we sometimes deliberately choose less efficient solutions to our problems, because in exchange we gain security and freedom for the soul.

The author has more than 6 years of corporate experience across technology platforms such as Big Data, AWS, Data Science, Artificial Intelligence, Machine Learning, Blockchain, Python, SQL, Java, Oracle, and Digital Marketing. He is a technology nerd and loves contributing to open platforms through blogging. He is currently associated with a leading professional training provider, Mindmajix Technologies INC., and strives to share knowledge with aspirants and professionals through personal blogs, research, and innovative ideas.


Asia Needs a Region-Wide Approach to Harness Fintech’s Full Potential

MD Staff

The importance of a region-wide approach to harnessing the potential of fintech was emphasized at the High-Level Policy Dialogue: Regional Cooperation to Support Innovation, Inclusion and Stability in Asia, held on 11 October in Bali, Indonesia.

Asia’s policy makers should strengthen cooperation to harness the potential of new financial technologies for inclusive growth. At the same time, they should work together to ensure they can respond better to the challenges posed by fintech.

New technologies such as mobile banking, big data, and peer-to-peer transfer networks are already extending the reach of financial services to those who were previously unbanked or out of reach, boosting incomes and living standards. Yet fintech also brings risks of cyber fraud, data insecurity, and privacy breaches. Disintermediation by fintech services, or concentration of services among a few providers, could also pose a risk to financial stability.

These and other issues were discussed at the High-Level Policy Dialogue on Regional Cooperation to Support Innovation, Inclusion, and Stability in Asia, organized by the Asian Development Bank (ADB), Bank Indonesia, and the ASEAN+3 Macroeconomic Research Office (AMRO).

The panel comprised Ms. Neav Chanthana, Deputy Governor of the National Bank of Cambodia; Mr. Diwa Guinigundo, Deputy Governor of Bangko Sentral ng Pilipinas; Ms. Mary Ellen Iskenderian, President and Chief Executive Officer of Women’s World Banking; Mr. Ravi Menon, Managing Director of the Monetary Authority of Singapore; Mr. Takehiko Nakao, President of ADB; Mr. Abdul Rasheed, Deputy Governor of Bank Negara Malaysia; and Mr. Veerathai Santiprabhob, Governor of the Bank of Thailand. Mr. Mirza Adityaswara, Senior Deputy Governor of Bank Indonesia, gave the opening remarks at the conference and Ms. Junhong Chang, Director of AMRO, gave the welcome remarks.

“Rapidly spreading new financial technologies hold huge promise for financial inclusion,” said Mr. Nakao. “We must foster an enabling environment for the technologies to flourish and strengthen regional cooperation to build harmonized regulatory standards and surveillance systems to prevent international money laundering, terrorism financing, and cybercrimes.”

“Technology is an enabler that weaves our economies and financial systems together, transmitting benefits but also risks across borders,” said Ms. Chang. “Given East Asia’s rapid economic growth, understanding and managing the impact of technology in our financial systems is essential for policymakers to maintain financial stability.”

“Asia, including Indonesia, is an ideal place for fintech to flourish,” said Mr. Adityaswara. “In Indonesia’s case, there are more than a quarter of a billion people living on thousands of islands, waiting to be integrated with the new technology; young people eager to enter the future digital world; more than fifty million small and medium-sized enterprises which can’t wait to get on board with e-commerce; a new society driven by a dynamic, democratic middle class which views the digital economy as something as inevitable as evolution.”

Despite Asia’s high economic growth in recent years, the financial sector is still under-developed in some countries. Fewer than 27% of adults in developing Asia have a bank account, well below the global median of 38%. Meanwhile, just 84% of firms have a checking or savings account, on a par with Africa but below Latin America’s 89% and emerging Europe’s 92%.

Financial inclusion could be increased through policies to promote financial innovation, by boosting financial literacy, and by expanding and upgrading digital infrastructure and networks. Regulations to prevent illegal activities, enhance cyber security, and protect consumers’ rights and privacy, would also build confidence in new financial technologies.


Cutting-edge tech a ‘double-edged sword for developing countries’

MD Staff

The latest technological advances, from artificial intelligence to electric cars, can be a “double-edged sword”, says the latest UN World Economic and Social Survey (WESS 2018), released on Monday.

The overriding message of the report is that appropriate, effective policies are essential if so-called “frontier technologies” are to change the world for the better, helping us to achieve the Sustainable Development Goals (SDGs) and to address climate change: without good policy, they risk exacerbating existing inequality.

Amongst several positive indicators, WESS 2018 found that the energy sector is becoming more sustainable, with renewable energy technology and efficient energy storage systems giving countries the opportunity to “leapfrog” existing, often fossil fuel-based solutions.

The wellbeing of the most vulnerable is being enhanced through greater access to medicines, and millions in developing countries now have access to low-cost financial services via their mobile phones.

Referring to the report, UN Secretary-General António Guterres said that “good health and longevity, prosperity for all and environmental sustainability are within our reach if we harness the full power of these innovations.”

However, the UN chief warned of the importance of properly managing the use of new technologies, to ensure there is a net benefit to society: the report demonstrates that unmanaged implementation of developments such as artificial intelligence and automation can improve efficiency but also destroy quality jobs.

“Clearly, we need policies that can ensure frontier technologies are not only commercially viable but also equitable and ethical. This will require a rigorous, objective and transparent ongoing assessment, involving all stakeholders,” Mr. Guterres added.

The Survey says that proactive and effective policies can help countries to avoid pitfalls and minimize the economic and social costs of technology-related disruption. It calls for regulation and institutions that promote innovation, and the use of new technologies for sustainable development.

With digital technology frequently crossing borders, the Survey shows that international cooperation is needed to bring about harmonized standards, achieve greater flexibility in the area of intellectual property rights, and ensure that the market does not remain dominated by a tiny number of extremely powerful companies.

Here, the UN has a vital role to play, by providing an objective assessment of the impact that emerging technologies have on sustainable development outcomes – including their effects on employment, wages and income distribution – and bringing together people, business and organizations from across the world to build strong consensus-led agreements.


Our Trust Deficit with Artificial Intelligence Has Only Just Started

Eleonore Pauwels

“We suffer from a bad case of trust-deficit disorder,” said UN Secretary-General António Guterres in his recent General Assembly speech. His diagnosis is right, and his focus on new technological developments underscores their crucial role in shaping the future global political order. Indeed, artificial intelligence (AI) is poised to deepen the trust-deficit across the world.

The Secretary-General, echoing his recently released Strategy on New Technologies, repeatedly referenced rapidly developing fields of technology in his speech, rightly calling for greater cooperation between countries and among stakeholders, as well as for more diversity in the technology sector. His trust-deficit diagnosis reflects the urgent need to build a new social license and develop incentives to ensure that technological innovation, in particular AI, is deployed safely and aligned with the public interest.

However, AI-driven technologies do not easily fit into today’s models of international cooperation, and will in fact tend to undermine rather than reinforce global governance mechanisms. Looking at three trends in AI, the UN faces an enormous set of interrelated challenges.

AI and Reality

First, AI is a potentially dominating technology whose powerful implications – both positive and negative – will be increasingly difficult to isolate and contain. Engineers design learning algorithms with a specific set of predictive and optimizing functions that can be used either to empower or to control populations. Without sophisticated fail-safe protocols, the potential for misuse or weaponization of AI is pervasive and can be difficult to anticipate.

Take Deepfake as an example. Sophisticated AI programs can now manipulate sounds, images and videos, creating impersonations that are often impossible to distinguish from the original. Deep-learning algorithms can, with surprising accuracy, read human lips, synthesize speech, and to some extent simulate facial expressions. Once released outside of the lab, such simulations could easily be misused with wide-ranging impacts (indeed, this is already happening at a low level). On the eve of an election, Deepfake videos could falsely portray public officials being involved in money-laundering or human rights abuses; public panic could be sown by videos warning of non-existent epidemics or cyberattacks; forged incidents could potentially lead to international escalation.

The capacity of a range of actors to influence public opinion with misleading simulations could have powerful long-term implications for the UN’s role in peace and security. By eroding the sense of trust and truth between citizens and the state—and indeed amongst states—truly fake news could be deeply corrosive to our global governance system.

AI Reading Us

Second, AI is already connecting and converging with a range of other technologies—including biotech—with significant implications for global security. AI systems around the world are trained to predict various aspects of our daily lives by making sense of massive data sets, such as cities’ traffic patterns, financial markets, consumer behaviour trend data, health records and even our genomes.

These AI technologies are increasingly able to harness our behavioural and biological data in innovative and often manipulative ways, with implications for all of us. For example, the My Friend Cayla smart doll sends voice and emotion data of the children who play with it to the cloud, which led to a US Federal Trade Commission complaint and its ban in Germany. In the US, emotional analysis is already being used in the courtroom to detect remorse in deposition videos. It could soon be part of job interviews to assess candidates’ responses and their fitness for a job.

The ability of AI to intrude upon—and potentially control—private human behaviour has direct implications for the UN’s human rights agenda. New forms of social and bio-control could in fact require a reimagining of the framework currently in place to monitor and implement the Universal Declaration of Human Rights, and will certainly require the multilateral system to better anticipate and understand this quickly emerging field.

AI as a Conflict Theatre

Finally, the ability of AI-driven technologies to influence large populations is of such immediate and overriding value that it is almost certain to be the theatre for future conflicts. There is a very real prospect of a “cyber race” in which powerful nations and large technology platforms enter into open competition for our collective data as the fuel to generate economic, medical and security supremacy across the globe. Forms of “cyber-colonization” are increasingly likely, as powerful states are able to harness AI and biotech together to understand and potentially control other countries’ populations and ecosystems.

Towards Global Governance of AI

Politically, legally and ethically, our societies are not prepared for the deployment of AI. The UN, established many decades before the emergence of these technologies, is in many ways poorly placed to develop the kind of responsible governance that will channel AI’s potential away from these risks and towards our collective safety and wellbeing. In fact, the resurgence of nationalist agendas across the world may point to a dwindling capacity of the multilateral system to play a meaningful role in the global governance of AI. Major corporations and powerful member states may see little value in bringing multilateral approaches to bear on what they consider lucrative and proprietary technologies.

There are, however, some important ways in which the UN can help build the kind of collaborative, transparent networks that may begin to treat our “trust-deficit disorder.” The Secretary-General’s recently launched High-Level Panel on Digital Cooperation is already working to build a collaborative partnership with the private sector and establish a common approach to new technologies. Such an initiative could eventually find ways to reward cooperation over competition, and to put in place common commitments to using AI-driven technologies for the public good.

Perhaps the most important challenge for the UN in this context is one of relevance, of re-establishing a sense of trust in the multilateral system. But if the above trends tell us anything, it is that AI-driven technologies are an issue for every individual and every state, and that without collective, collaborative forms of governance, there is a real risk that they will become a force that undermines global stability.
