Digital tracking of environmental risks offers insights to humanitarian actors

photo: UN Environment

By the end of today, many people will have made life-changing decisions relying on their best guess or their instinct. Some of those decisions will yield great results, while others will imperil individuals, corporations and communities.

Humanitarian crises require that we make difficult choices. As crises become more complex, and as their impact on the environment grows, the choices we make must be the right ones. And to make sound, informed decisions, we need data.

Thankfully today, all those who work in the environmental field have at their fingertips a combination of global environmental data, technologies and data science tools and techniques. These have the potential to create insights that can underpin a sustainable future and profoundly transform our relationship with our planet.

For decades, the UN Environment Programme has been working with the Office for the Coordination of Humanitarian Affairs, and partners such as the UN Refugee Agency, to make sense of environmental data for improved humanitarian planning.

In December last year, UN Environment with support from the UN Refugee Agency piloted an innovative tool for environmental data gathering and risk assessment, the Nexus Environmental Assessment Tool (NEAT+). The tool was deployed in the Mantapala refugee settlement in northern Zambia.

Built around existing farmland in 2018, Mantapala refugee settlement, near Nchelenge in northern Zambia, was planned for up to 20,000 people. It was designed to enable refugees to make a living while contributing to local development. The surrounding humid sub-tropical Mantapala Forest Reserve, an area characterized by rich biodiversity, includes the productive Wet Miombo Woodland.

According to the UN Refugee Agency, Zambia hosts at least 41,000 refugees from the Democratic Republic of the Congo, and Mantapala refugee settlement is home to around 13,000 of them.

Daily life isn’t easy. Flash floods can be common during the long rainy seasons, when rainfall is particularly heavy. In addition, less than 20 per cent of Nchelenge district’s households have access to electricity, and even where they do, it is so expensive that people prefer firewood and charcoal as their primary cooking fuels.

“With pressure mounting on natural resources throughout the world, we are exploring how to support humanitarian actors in collecting, sharing and processing environmental data for better decision-making using innovative digital environmental tools such as the Nexus Environmental Assessment Tool (NEAT+) and MapX—a United Nations-backed platform—in Mantapala settlement and beyond,” says David Jensen, UN Environment’s Head of Environmental Cooperation for Peacebuilding and Co-Director of MapX.

What makes NEAT+ so appealing is its simplicity. It is a user-friendly environmental screening tool for humanitarian contexts, which combines environmental data with site-specific questions to automatically analyse and flag priority environmental risks. The tool was developed by eight humanitarian and environmental organizations as part of the Joint Initiative, a multi-stakeholder project aimed at improving collaboration between environmental and humanitarian actors. NEAT+ supports humanitarian actors in quickly identifying issues of concern to increase the efficiency, accountability and sustainability of emergency or recovery interventions.

“NEAT+ answers the demand for a simple process to assess the sensitivity of the environment in displacement settings. It overlays environmental realities with a proposed humanitarian intervention, identifying risks and mitigation measures,” says Emilia Wahlstrom, Programme Officer, UN Environment / Office for the Coordination of Humanitarian Affairs Joint Unit.

NEAT+ runs on KoBo, a free, open-source data collection platform built by the Harvard Humanitarian Initiative that allows data to be collected by phone, tablet or computer. Once the data is recorded, the programme automatically generates a report in Excel, categorizing risks as high, medium or low and providing information that can help mitigate them.
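
To give a sense of the kind of screening logic such a tool applies, the sketch below is a minimal, hypothetical Python illustration: the survey questions, scoring thresholds and mitigation notes are invented for this example and are not the actual NEAT+ rules or the KoBo data schema.

```python
# Hypothetical sketch of a NEAT+-style screening step (not the real tool's logic).
# The questions, thresholds and mitigation notes below are invented for illustration.

SURVEY_ANSWERS = {
    "distance_to_protected_forest_km": 1.5,  # example site-specific answers
    "households_using_firewood_pct": 85,
    "site_in_floodplain": True,
}

def categorize(answers):
    """Return an overall risk level and mitigation notes from a toy rule set."""
    score, notes = 0, []
    if answers["distance_to_protected_forest_km"] < 2:
        score += 2
        notes.append("Deforestation pressure: consider agroforestry and alternative livelihoods.")
    if answers["households_using_firewood_pct"] > 70:
        score += 1
        notes.append("High firewood dependence: promote alternative cooking fuels.")
    if answers["site_in_floodplain"]:
        score += 2
        notes.append("Flood exposure: review drainage and plot siting.")
    level = "high" if score >= 4 else "medium" if score >= 2 else "low"
    return level, notes

level, notes = categorize(SURVEY_ANSWERS)
print(f"Overall environmental risk: {level}")
for note in notes:
    print(" -", note)
```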

As a next step, NEAT+ will draw increasingly on MapX, an online, open-source, fully customizable platform for accessing and visualizing geospatial environmental data. It offers various tools to highlight environmental risks such as deforestation, natural hazards and flooding. NEAT+ will use MapX to gather and visualize data.
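
As a rough illustration of what such a geospatial overlay involves, the following sketch checks a settlement coordinate against two invented polygons standing in for MapX risk layers. It assumes the third-party shapely library is installed; the coordinates and layer names are made up, and the real platform serves curated geospatial datasets rather than toy polygons.

```python
# Toy illustration of overlaying a site location on environmental risk layers.
# The polygons, coordinates and layer names are invented stand-ins for the kind
# of geospatial data a platform such as MapX exposes.
from shapely.geometry import Point, Polygon  # requires the shapely package

# Hypothetical settlement location (longitude, latitude).
settlement = Point(28.75, -9.35)

# Invented risk layers, each reduced to a single rectangle for simplicity.
risk_layers = {
    "flood_risk_zone": Polygon([(28.6, -9.5), (28.9, -9.5), (28.9, -9.2), (28.6, -9.2)]),
    "forest_reserve": Polygon([(28.0, -9.9), (28.4, -9.9), (28.4, -9.6), (28.0, -9.6)]),
}

for name, polygon in risk_layers.items():
    position = "inside" if polygon.contains(settlement) else "outside"
    print(f"Settlement is {position} the {name} layer")
```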

In Mantapala, the NEAT+ tool was used to identify negative environmental and livelihood impacts in the settlement, while MapX spatial data highlighted nearby areas of environmental concern.

The results showed opportunities for environmental action. Where there was risk of deforestation, alternative livelihoods and agroforestry programmes could be supported. Agricultural plots vulnerable to flood damage are undergoing modification to prevent further deforestation and to reduce flood risks.

“Developing a digital ecosystem for the environment offers the possibility to access the best available data for decision-making. Tools such as MapX and NEAT+ are critical in mitigating the effects of sudden-onset natural disasters and slow-onset environmental change and degradation,” says Jensen.

“Developing and applying the NEAT+ tool has shown us the added value the environmental community can bring to the frontlines of humanitarian response. By taking the time to understand the environmental context they operate in, humanitarian actors are designing programmes that are saving money, contributing to a healthy environment, and supporting the dignity, livelihoods and health of affected people. This is critical for an increasingly complex and protracted global humanitarian crisis panorama,” comments Wahlstrom.

In 2019, the same actors who developed the NEAT+ tool, the Joint Initiative partners, launched the Environment and Humanitarian Action Connect website. Environment and Humanitarian Action Connect is a unique digital tool spanning the humanitarian-environment nexus and represents the first comprehensive online repository of environmental and humanitarian action tools and guidance. It is easily searchable and readily accessible, whether at the office, at home, or in the field. The content aligns with the humanitarian programme cycle with specific guidance available for humanitarian clusters and themes.

Environment and Humanitarian Action Connect is administered and updated by the United Nations Environment / Office for the Coordination of Humanitarian Affairs Joint Unit. Through the Joint Unit, UN Environment and OCHA respond as one to the environmental dimensions of emergencies. The partnership assists countries affected by disasters and crises and works to enhance the sustainability of humanitarian action. The partnership has supported almost 100 countries and conducted over 200 missions, and celebrates its 25th anniversary this year.

UN Environment

Ethical aspects relating to cyberspace: Self-regulation and codes of conduct

Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.

In practice, however, attempts to legislate on virtual communication face great difficulties because of the imperfection of modern information law. Moreover, since the Internet community rests on an internal “anarchist” ideology, it shows significant resistance to government regulation, believing that in a cross-border environment such as the global network the only effective regulator is a voluntarily and consciously accepted intra-net ethics, grounded in each individual’s awareness of his or her moral responsibility for what happens in cyberspace.

At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by other regulatory provisions at the political, legal, technical or economic level. It is also up to ethics to check the meaning, lawfulness and legitimacy of those other regulatory means. The legal provisions themselves, backed by the force of state influence, are developed or – at least ideally – should be implemented on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case – at least until an “ideal” legislation is devised that does not contradict morality in any way. An ethical justification and scrutiny of legislative and disciplinary acts relating to both information and computer technology is therefore necessary.

In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law lies in the human rights relating to information. Although these rights are enshrined in various national and international legal instruments, in practice their protection is often not guaranteed by anyone. This allows some state structures to introduce restrictions on information, justifying them with noble aims such as the need to protect national security.

It should be stressed that information legislation (like any other in general) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of various social groups. Therefore, there are no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and the attempts to restrict this freedom in one way or another, on the other.

Modern information law therefore faces extremely contradictory demands from different quarters, which are not easy to reconcile. It should simultaneously protect the right to receive information freely and the right to information security, ensure privacy and prevent cybercrime. It should also promote the public accessibility of newly created information while protecting copyright – even if this impinges on the universal principle of knowledge sharing.

The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.

Various national and international public organisations, professional bodies and voluntary users’ associations define their own principles of operation in the virtual environment. These principles are very often formalised in codes of conduct aimed at minimising the potentially dangerous moral and social consequences of the use of information technologies, and thus at achieving a certain degree of autonomy for the web community, at least on purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. After all, they do not have the status of legal provisions, which means that they cannot serve as a basis for imposing disciplinary, administrative or other liability measures on offenders. They are therefore observed by the community members who have adopted them solely out of goodwill, as a result of free expression based on recognition and sharing of the values and rules enshrined in them. These codes thus act as one of the moral self-regulating mechanisms of the web community.

The cyberspace codes of ethics provide the basic moral guidelines that should steer information activities. They specify how the principles of general theoretical ethics are reflected in the virtual environment. They contain criteria for recognising a given act as ethical or unethical. Finally, they provide specific recommendations on how to behave in certain situations. The rules enshrined in the codes of ethics in the form of provisions, authorisations, bans, etc., represent in many respects the formalisation and systematisation of unwritten rules and requirements that developed spontaneously in the process of virtual interaction over the Internet’s first thirty years.

Conversely, the provisions of codes of ethics must be thoroughly considered and debated – by their very nature, codes of ethics are conventional and hence always the result of a mutual agreement among the relevant members of a given social group – as otherwise they are reduced to a formal, sectorial statement, divorced from real life and binding on no one.

Despite their diversity, which reflects the variety of the net’s functions and the heterogeneity of its audience, a comparison of the most significant codes of ethics on the Internet reveals a number of common principles. These principles are evidently shared, in one way or another, by all members of the Internet community, which means they underpin the ethos of cyberspace. They include the principles of accessibility, confidentiality and quality of information; the principle of the inviolability of intellectual property; the principle of doing no harm; and the principle of limiting the excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics (“PAPA”: Privacy, Accuracy, Property and Accessibility) formulated by Richard Mason in his article Four Ethical Issues of the Information Age (MIS Quarterly, March 1986).

Obviously, the existence of even a very well-written code of ethics cannot ensure that all group members will act in accordance with it, because a person’s most reliable guarantees against unethical behaviour are his or her conscience and sense of duty, which are not always heeded. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively. Codes of ethics nevertheless perform a number of extremely important functions on the Internet. Firstly, they can induce Internet users to moral reflection by instilling the idea that their actions need to be evaluated accordingly (in this case, it is not so much the ready-made code that is useful as the very experience of developing and discussing it). Secondly, they can form a healthy public opinion in the virtual environment and provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law, adapted to the realities of the electronic age.

Ethical aspects relating to cyberspace: Behaviours and fake news

It is customary to define etiquette as a set of rules of conduct governing the external expressions of human relations. Etiquette helps to preserve the integrity of society. It creates and maintains a certain social order, coordinates the joint actions of individuals and helps to overcome possible communication tensions. In this capacity, etiquette is functionally linked to morality: ultimately, etiquette is a form of practical implementation of moral principles.

Observing the rules of etiquette makes it possible to show goodwill and attention to others and express respect for them, and to make communication easy and pleasant. Despite all the similarities existing between morality and etiquette, they cannot be considered the same: the regulating function of etiquette is of a rather subordinate nature and its main function – as noted by many researchers – is integrative and differentiating. Etiquette ensures integration within a social group by giving its members special distinctive characteristics – the way they greet each other, speak and cautiously gain confidence in each other, etc. – thus enabling this group to create a sense of belonging. This allows the group to create new behaviours in order to distinguish itself from others. At the same time, however – as in real life – etiquette on the Internet (“netiquette”) not only unites people, but also separates them, by emphasising their differences in status (gender, age, class, social status, national and religious affiliation, etc.).

As a whole, the functions of integration/differentiation/distinction enable an individual to streamline relations both within his or her reference group and outside it, i.e. with “outsiders”.

There is no universal “netiquette” that is uniform for everyone in modern society: each socio-demographic and socio-professional group develops its own rules of decency alongside the generally accepted ones, and these serve as an integral element of its own sub-culture, understood not in a derogatory sense but as a variant of a minority culture localised in cyberspace. It is therefore not surprising that special rules of etiquette have formed on the Internet. In the strict sense of the word, netiquette is not etiquette, since it does not (and cannot) perform the main function of traditional etiquette: its function of differentiation – i.e. of determining the individual’s place in the social hierarchy – is merely virtual and does not fundamentally confer status, since it lacks human or, one might say, bodily contact. Consequently, the communication and integration function of “netiquette” clearly prevails.

This function manifests itself in two ways. Firstly, it is one of the tools for building the collective identity of the members of a particular virtual community: by developing its own unique rules of behaviour, a virtual group becomes aware of itself as a whole and represents itself to others. Secondly, “netiquette” promotes individual socio-cultural identification: knowing and following its rules enables individuals to confirm their belonging to a particular community and to prove that the community is “theirs” and not just anyone’s, since the rules of that particular “netiquette” are not written down in any Archbishop John della Casa’s Galateo: or, the Rules of Polite Behaviour.

It is no coincidence that a fairly widespread (and rather severe) penalty for infringing the rules of a specific group’s “netiquette” is a sort of expulsion from the virtual society, i.e. the disconnection of the offender from a given Internet resource. “Netiquette” therefore acts as a mechanism of socialisation and of marginalisation at the same time.

Unlike in traditional communities, the possibility of group influence on an individual (e.g. through public opinion) on the Internet is limited. The anonymity of virtual communication makes it easy to avoid social pressure, and therefore the only effective method of influence in a virtual environment is the voluntary inclusion of a person in the social system, his/her internalisation of group values and rules. This implies the conscious acceptance of some obligations, mainly moral ones – no matter whether shared or not by the external society – by each participant in virtual interaction. From this viewpoint, “netiquette” rules can be seen as a guideline demonstrating the standard of correct behaviour in cyberspace. Therefore, these rules are of a marked ethical nature.

An analysis of the various versions of Internet etiquette shows that the rules generally do not differ much from traditional ones: they imply respect for communication partners and are based on the “golden rule” of group morality. At the same time, besides universal ethical standards equally applicable to both real and virtual communication, “netiquette” also includes a number of specific rules arising from the specificities of the communication channel. For instance, it is not advisable to write messages entirely in capital letters, which is equivalent to shouting and strains the reader’s eyes. It is also not advisable to send e-mail attachments without warning, to use coarse language, to send unexpected notifications, to send e-mails with an empty subject line, to distribute spam, to send unsolicited mail, to forward advertisements, and so on.

It can be assumed, however, that, with the further development of information technology, the approach to virtual communication will align with the usual forms of interaction, until “netiquette” is absorbed into traditional etiquette.

As is well known, most journalistic codes of ethics usually proclaim freedom of speech as the highest moral value: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers” (Art. 19 of the Universal Declaration of Human Rights). This freedom finds its maximum expression on the Internet: specialised institutions lose their monopoly on the generation of information contained therein and any user – for a minimal cost – can design and make any message publicly available without subjecting it to further changes.

The specificity of the Internet, however, makes it easier to publish fake, anti-social and outright illegal material, because the Internet is a medium in which it is easier to hide, change or falsify the identity of the author of a statement. On the Internet, in fact, it is never possible to say with certainty who the author of a message actually is (unless the information is protected by special cryptographic means), and a text published on the Internet can at any time be modified beyond recognition, moved to another server or simply destroyed.

The situation is worsened by the fact that there are no institutional or professional criteria for the quality and reliability of information on the Internet, except where relevant news is accompanied by reliability indicators from outside the web (e.g. the reputation of the author or of the institution that runs the website). Mass communication on the Internet is therefore largely anonymous and binding on no one.

In full accordance with the postulates of web ideology, attempts to solve the problem of the dissemination of questionable information on the Internet (“fake news”) by enacting specialised laws or introducing censorship are resolutely opposed by members of the net community and usually end in failure. It should be borne in mind that it is impossible to provide a universal definition of “reprehensible information”, given the various countries’ cultural, national and religious characteristics. The development of a unified information policy in this area is therefore hardly possible.

As an acceptable alternative to censorship and other legislative restrictions, it has been proposed to filter materials published on the web using an algorithm for evaluating electronic documents. The advantage of this approach is that it gives users freedom of choice, enabling them to decide what kind of information they wish to receive. Admittedly, for this choice to be truly conscious and responsible, a fully fledged value system (i.e. the ability to discern) is needed both by those who make evaluations and give ratings and by those who are guided by them, since the basis of any evaluation and rating is the identification of value – in this case, of the information disseminated via the Internet. The rating methodology (the assessment of reliability) cannot therefore be effective without enhancing the information culture of society as a whole. (5. continued)
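
To make the mechanism concrete, here is a minimal, hypothetical sketch of rating-based filtering in Python: the rating scale, threshold and sample items are invented for illustration, and the article proposes only the general approach, not this particular scheme.

```python
# Minimal sketch of user-controlled, rating-based filtering of web documents.
# Ratings, threshold and sample items are invented for illustration only.

documents = [
    {"title": "Peer-reviewed study", "reliability": 0.9},
    {"title": "Anonymous viral claim", "reliability": 0.2},
    {"title": "Established newspaper report", "reliability": 0.7},
]

def filter_by_reliability(docs, threshold):
    """Keep only documents whose reliability rating meets the user's own threshold."""
    return [doc for doc in docs if doc["reliability"] >= threshold]

# Each user chooses a threshold, rather than having content blocked centrally.
for doc in filter_by_reliability(documents, threshold=0.6):
    print(doc["title"])
```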

Ethical aspects relating to cyberspace: Copyright and privacy

In recent years, there has been a trend in cyberspace ethics towards the emergence of intra-net mechanisms and self-regulatory systems. In particular, in many European countries, information service providers have started to introduce voluntary self-limitation. For instance, in the UK there is an independent body, the Electronic Frontier Foundation (www.eff.org), whose representatives develop rating systems for Internet resources, constantly monitor websites for material that infringes moral and legal standards and, where necessary, block access to it.

A solution to the problem of the quality of information provided on the Internet can probably come from traditional media, which in recent years have been increasingly committed to acquiring an electronic version of their print or radio and television editions. Moreover, exclusively online newspapers and magazines have already emerged which, thanks to their serious and cautious approach, have won the online public’s trust. These publications can play an extremely important role through widely applied survey protocols; evaluation of electronic publications; maintenance of the virtual media’s reputation; and supervision of the implementation of the basic rules and principles of professional journalistic ethics on the Internet.

Furthermore, the ethical conflict between the author (owner) of an information product and the Internet public has to be considered, i.e. the contradiction between the desire for public accessibility of newly created information and the need to protect copyright.

The emergence of the “copyright” concept (dating back almost three hundred years: the first law on the subject, the Statute of Queen Anne, was enacted in 1709 and came into force on April 10, 1710) stems from the need to strike a balance between the interests of the creators of original works and the needs of society. It therefore rests on two distinct and sometimes even contradictory moral principles: on the one hand, the natural right to dispose of the fruits of one’s labour; on the other, the principle of universal free access to knowledge, which ensures the progress of science and art and encourages the free use of any information and ideas without restriction.

Modern communication technologies create almost unlimited possibilities for personal possession and reproduction of information and this greatly complicates copyright protection. As a result, previous international laws and agreements on the protection of intellectual property are inadequate and traditional ideas on copyright need to be revised.

How should current legislation be changed to meet modern realities? There are two conceptual approaches to solving this problem. The generally accepted trend in improving national and international information law is to broaden the scope of copyright and extend it to electronic forms of information.

At the same time, it should be emphasised that copyright arises from the fact of creating a work, and does not depend on the nature of the medium. Hence the problem lies in the need for proper interpretation of the legislation in force and in the implementation of the existing rules to the new conditions.

However, the opposite viewpoint – whereby compliance with copyright on the Internet slows down the development of the web and hampers the creation of its content – is increasingly expressed. The most radical proponents of this view argue that, since the free exchange of knowledge and ideas is the basis of information ethics, copyright categories are in principle not applicable to it, and the Internet should therefore be perceived as a public information space in which the value of any specific copyrighted text is levelled off. These ideas have found their most complete embodiment in the hackers’ ethical principles. Bear in mind that the word “hacker” is meant here in its original and positive sense: a person who uses his or her computer skills to explore the details of programmable systems and experiments with how to extend their use to everyone. The derogatory use that some people make of the word does not reflect this original meaning.

In line with this viewpoint, it is proposed to limit or even remove some rules from the conceptual foundations of copyright, e.g. to authorise the fair and proper use of original works and ultimately relinquish the idea of intellectual property altogether.

It is clear that the origins of this approach should be sought in the ideas of freedom on the net, based on the principle that information should not be encumbered by legal and/or authorisation schemes. In fact, even those who support the abolition of intellectual property are not ready to relinquish the rights to their works completely, to remove their names from title pages or, above all, to forgo revenues and fees. Both positions are rooted in the culture of the net, and each seems legitimate in its own way.

It is therefore clear that the primary task in formulating modern information legislation is to maintain a balance between the interests of software producers and information resources on the one hand, and the interests of their consumers on the other. Otherwise, the development of new communication technologies will contribute to deepening information inequality in modern society, and to further dividing society between the well informed and the less informed.

A further right – the right to privacy – is one of the most fundamental rights: it reflects the natural human need for privacy, confidentiality and autonomy, as well as for the protection of one’s own “personal sphere” from outside intrusion, and the ability to make decisions without being spied on and to remain oneself and maintain one’s own individuality.

It is no coincidence that in all international documents declaring human rights and freedoms, as well as in all codes of ethics related to the sphere of information, privacy is proclaimed as a fundamental moral value that constitutes the foundation of human freedom and security, and therefore requires respect and protection. It is interesting to note that, unlike other human rights formulated in the 18th century, the right to the inviolability of private life received protection and recognition in legislation only recently, in the mid-20th century. This can be explained precisely by the development of information and communication technologies, under whose influence intrusion into a person’s private sphere has become much easier.

In particular, despite the declared anonymity of Internet surfing, there are technologies that make it possible to collect information on users’ behaviour on the web. The collection of such information is not in itself reprehensible, but only if some rather strict requirements and conditions are met. Information must be obtained in good faith, with the knowledge and consent of the data subject (the person to whom the information relates). It must be collected for well-defined purposes that do not infringe the law and be used in strict compliance with those stated purposes. It must be protected from unauthorised access, must not be redundant, and must not be associated with personally identifiable data about the user without his or her permission.

In practice, however, these rules are not always complied with. This requires appropriate solutions to be found, thus enabling the Internet users’ privacy to be effectively protected from unauthorised interference by both governmental and commercial agencies.

An important role in ensuring Internet users’ privacy is played by the creation of codes of ethics in the field of data protection – the so-called privacy policy. A privacy policy is an official statement on the terms of use of the personal data requested from Internet users. As a rule, it is published on the website’s home page and includes a detailed description of the purposes for which information is collected and of the related practices: I discussed this – expressing many doubts – in one of my previous articles.

The reason for my doubts is very simple: whoever is interested in spying on third parties pays the creators of the appropriate software more than the international or governmental organisations, or the individual private agencies, which offer very low fees to the creators of software meant to protect citizens’ privacy. Those who are better paid obviously have more incentive to develop spyware than the technician with a permanent job and a fixed salary. This is the immoral logic of capitalism.

The terms of a privacy policy therefore also contain guarantees regarding the protection of personal data, which the website administration undertakes to honour. In the West, the adoption of and adherence to privacy policies is an integral part of the e-business ethos, as evidenced by international public bodies that certify Internet resources and thereby inform users of the extent to which their personal data are protected when they use a website. Such examples show that self-regulation can be extremely effective on the Internet – as long as it lasts, for the reasons stated above. Hopefully, therefore, Internet users will come to realise the importance of privacy as a social and moral value. (6. end)
