With 19% of the world’s population but only 7% of its arable land, China is in a bind: how to feed its growing and increasingly affluent population while protecting its natural resources. The country’s agricultural scientists have made growing use of nuclear and isotopic techniques in crop production over the last decades. In cooperation with the IAEA and the Food and Agriculture Organization of the United Nations (FAO), they are now helping experts from Asia and beyond in the development of new crop varieties, using irradiation.
While in many countries, nuclear research in agriculture is carried out by nuclear agencies that work independently from the country’s agriculture research establishment, in China the use of nuclear techniques in agriculture is integrated into the work of the Chinese Academy of Agricultural Sciences (CAAS) and provincial academies of agricultural sciences. This ensures that the findings are put to use immediately.
And indeed, the second most widely used wheat mutant variety in China, Luyuan 502, was developed by CAAS’s Institute of Crop Sciences and the Shandong Academy of Agricultural Sciences, using space-induced mutation breeding (see Space-induced mutation breeding). Its yield is 11% higher than that of the traditional variety, and it is also more tolerant of drought and major diseases, said Luxiang Liu, Deputy Director General of the Institute. It has been planted on over 3.6 million hectares – an area almost as large as Switzerland. It is one of 11 wheat varieties developed for improved salt and drought tolerance, grain quality and yield, Mr Liu said.
Through close cooperation with the IAEA and FAO, China has released over 1,000 mutant crop varieties in the past 60 years, and varieties developed in China account for a quarter of the mutants currently listed in the IAEA/FAO database of mutant varieties produced worldwide, said Sobhana Sivasankar, Head of the Plant Breeding and Genetics Section at the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture. The new mutation induction and high-throughput mutant selection approaches established at the Institute serve as a model for researchers from around the world, she added.
The Institute uses heavy ion beam accelerators, cosmic rays and gamma rays along with chemicals to induce mutations in a wide variety of crops, including wheat, rice, maize, soybean and vegetables. “Nuclear techniques are at the heart of our work, fully integrated into the development of plant varieties for the improvement of food security,” Liu said.
The Institute has also become a key contributor to the IAEA technical cooperation programme over the years: more than 150 plant breeders from over 30 countries have participated in training courses and benefited from fellowships at CAAS.
Indonesia’s nuclear agency, BATAN, and CAAS are looking for ways to collaborate on plant mutation breeding and Indonesian researchers are looking for ways to learn from China’s experience, said Totti Tjiptosumirat, Head of BATAN’s Center for Isotopes and Radiation Application. “Active dissemination and promotion of China’s activities in plant mutation breeding would benefit agricultural research across Asia,” he said.
From food safety to authenticity
Several of CAAS’ other institutes use nuclear-related and isotopic techniques in their research and development work and participate in several IAEA technical cooperation and coordinated research projects. The Institute of Quality Standards and Testing Technology for Agro-Products has developed a protocol to detect fake honey, using isotopic analysis. A large amount of what is sold in China as honey is estimated to be produced synthetically in labs rather than by bees in hives, so this has been an important tool in cracking down on fraudsters, said Professor Chen Gang, who leads the research work using isotopic techniques at the Institute. A programme is also in place to trace the geographical origin of beef using stable isotopes, he added.
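The article does not describe the Institute’s protocol in detail, but the widely used carbon-isotope approach to detecting honey adulteration can be sketched as follows. Authentic honey comes from C3 plants, while cheap adulterant syrups (corn, cane) come from C4 plants whose sugars carry a distinctly heavier δ13C signature; the honey’s own extracted protein serves as an internal standard. The numeric constants and threshold below are assumptions drawn from the standard internal-standard method, not from the article.

```python
# Illustrative sketch of carbon-isotope screening for honey adulteration.
# Authentic honey derives from C3 plants; C4-plant syrups (corn, cane) have
# a heavier delta-13C, so mixing them in shifts the bulk honey value away
# from that of the honey's own protein.

DELTA_C4_SUGAR = -9.7  # assumed mean delta-13C of C4 sugars, in per mil


def apparent_c4_sugar_percent(delta_honey: float, delta_protein: float) -> float:
    """Estimate % C4 sugar using the honey's own protein as internal standard."""
    return (delta_protein - delta_honey) / (delta_protein - DELTA_C4_SUGAR) * 100.0


def is_suspect(delta_honey: float, delta_protein: float,
               threshold: float = 7.0) -> bool:
    """Flag samples whose apparent C4-sugar content exceeds the threshold."""
    return apparent_c4_sugar_percent(delta_honey, delta_protein) >= threshold


# Example: protein at -25.0 per mil, bulk honey at -23.5 per mil
# gives an apparent C4-sugar content of about 9.8%, flagged as suspect.
print(is_suspect(-23.5, -25.0))
```

The key design point is that each sample carries its own reference: comparing the honey against its co-extracted protein cancels out natural regional variation in δ13C.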
The Institute uses isotopic techniques to test the safety and verify the authenticity of milk and dairy products – work that grew out of IAEA coordinated research and technical cooperation projects that ran from 2013 to 2018. “After a few years of support, we are now fully self-sufficient,” Mr Chen said.
Improving nutrition efficiency
Various CAAS institutes use stable isotopes to study the absorption, transfer and metabolism of nutrients in animals. The results are used to optimize feed composition and feeding schedules. Isotope tracing offers higher sensitivity than conventional analytical methods, and this is particularly advantageous when studying the absorption of micronutrients, vitamins, hormones and drugs, said Dengpan Bu, Professor at the Institute of Animal Science.
While China has perfected the use of many nuclear techniques, in several areas it is looking to the IAEA and the FAO for support: the country’s dairy industry is dogged by the low protein absorption rate of dairy cows. Less than half of the protein in animal feed is used by the ruminants; the rest ends up in their manure and urine. “This is wasteful for the farmer, and the high nitrogen content in the manure hurts the environment,” Mr Bu said. Using isotopes to trace nitrogen as it travels from feed through the animal’s body would help improve nitrogen efficiency by guiding adjustments to the composition of the feed. This will be particularly important as dairy consumption, currently at a third of the global average per person, continues to rise. “We are looking for international expertise, through the IAEA and the FAO, to help us tackle this problem.”
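The nitrogen problem described above reduces to a simple mass balance, which is what isotope tracing quantifies in practice. A minimal sketch, with the example figures assumed for illustration rather than taken from the article:

```python
def nitrogen_use_efficiency(feed_n_kg: float, milk_n_kg: float) -> float:
    """Fraction of dietary nitrogen captured in milk; the rest is excreted."""
    return milk_n_kg / feed_n_kg


def excreted_n(feed_n_kg: float, milk_n_kg: float) -> float:
    """Nitrogen lost to manure and urine under a simple mass balance."""
    return feed_n_kg - milk_n_kg


# Illustrative figures: a cow eating 0.60 kg N/day and secreting
# 0.15 kg N/day in milk captures only 25% of feed nitrogen;
# 0.45 kg N/day ends up in manure and urine.
print(nitrogen_use_efficiency(0.60, 0.15), excreted_n(0.60, 0.15))
```

Labelled-nitrogen tracing refines this crude balance by showing which feed fractions the captured nitrogen actually came from, so the ration can be adjusted rather than simply reduced.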
Ethical aspects relating to cyberspace: Self-regulation and codes of conduct
Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.
In practice, however, the various attempts to legislate on virtual communication face great difficulties due to the imperfection of modern information law. Moreover, since the Internet community rests on an internal “anarchist” ideology, it shows significant resistance to government regulation, believing that in a cross-border environment such as the global network, the only effective regulator can be a voluntarily and consciously accepted intra-net ethics, based on each individual’s awareness of his or her moral responsibility for what happens in cyberspace.
At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by other regulatory provisions at the political, legal, technical or economic level. It also falls to ethics to check the meaning, lawfulness and legitimacy of the remaining regulatory means. Legal provisions themselves, backed by the force of the state, are developed and – at least ideally – implemented on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case – at least until an “ideal” legislation is devised that does not contradict morality in any way. An ethical justification and scrutiny of legislative and disciplinary acts relating to information and computer technology are therefore necessary.
In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law is the human right to information. Although this right is enshrined in various national and international legal instruments, in practice its protection is often not guaranteed by anyone. This enables various state structures to introduce restrictions on information, justifying them with noble aims such as national security.
It should be stressed that information legislation (like any other, in general) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of various social groups. Therefore, there are no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and the attempts to restrict this freedom in one way or another, on the other.
As a result, different subjects place extremely contradictory demands on modern information law, which are not easy to reconcile. Information law should simultaneously protect the right to receive information freely and the right to information security, as well as ensure privacy and prevent cybercrime. It should also promote the public accessibility of newly created information while protecting copyright – even where this impinges on the universal principle of knowledge sharing.
The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.
Various national and international public organisations, professionals and voluntary users’ associations define their own operating principles for the virtual environment. These principles are very often formalised in codes of conduct, aimed at minimising the potentially dangerous moral and social consequences of the use of information technologies and thus at achieving a certain degree of autonomy for the web community, at least in purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. After all, they do not have the status of legal provisions, which means they cannot serve as a basis for imposing disciplinary, administrative or any other liability on offenders. They are observed by the community members who have adopted them solely out of goodwill, as a free expression of recognition of the values and rules enshrined in them. These codes therefore act as one of the moral self-regulating mechanisms of the web community.
The cyberspace codes of ethics provide the basic moral guidelines that should guide information activities. They specify the principles of general theoretical ethics as reflected in a virtual environment. They contain criteria for recognising a given act as ethical or unethical. Finally, they provide specific recommendations on how to behave in certain situations. The rules enshrined in the codes of ethics, in the form of prescriptions, authorisations, bans, etc., represent in many respects the formalisation and systematisation of unwritten rules and requirements that have developed spontaneously in the process of virtual interaction over the last thirty years of the Internet.
At the same time, the provisions of codes of ethics must be thoroughly considered and debated – by their very nature, codes of ethics are conventional and hence always the result of a mutual agreement among the members of a given social group – as otherwise they are reduced to formal, sectoral statements, divorced from life and without binding force.
Despite their multidirectionality, due to the variety of the net’s functions and the heterogeneity of its audience, a comparison of the most significant Internet codes of ethics reveals a number of common principles. Apparently, these principles are in one way or another shared by all members of the Internet community, which means that they underpin the ethos of cyberspace. They include the principles of accessibility, confidentiality and quality of information; the principle of the inviolability of intellectual property; the principle of doing no harm; and the principle of limiting excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics (“PAPA”: Privacy, Accuracy, Property and Accessibility) formulated by Richard Mason in his article “Four Ethical Issues of the Information Age” (MIS Quarterly, March 1986).
The presence of even a very well-written code of ethics obviously cannot ensure that all group members will act in accordance with it, because – for a person – the most reliable guarantees against unethical behaviour are his or her conscience and sense of duty, which are not always present. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively. Codes of ethics nevertheless perform a number of extremely important functions on the Internet. Firstly, they can induce Internet users to moral reflection by instilling the idea of the need to evaluate their actions accordingly (in this case, it is not so much the ready-made code that is useful as the very experience of its development and discussion). Secondly, they can help form healthy public opinion in a virtual environment and provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law adapted to the realities of the electronic age.
Ethical aspects relating to cyberspace: Behaviours and fake news
It is customary to define etiquette as a set of rules of conduct governing the external expressions of human relations. Etiquette helps to preserve the integrity of society. It creates and maintains a certain social order, coordinates the joint actions of individuals and helps to overcome possible communication tensions. In this capacity, etiquette is functionally linked to morality: ultimately, etiquette is a form of practical implementation of moral principles.
Observing the rules of etiquette makes it possible to show goodwill and attention to others, to express respect for them, and to make communication easy and pleasant. Despite all the similarities between morality and etiquette, they cannot be considered the same: the regulating function of etiquette is of a rather subordinate nature, and its main function – as noted by many researchers – is integrative and differentiating. Etiquette ensures integration within a social group by giving its members special distinctive characteristics – the way they greet each other, speak and build trust with one another, etc. – thus enabling the group to create a sense of belonging. This allows the group to develop new behaviours in order to distinguish itself from others. At the same time, however – as in real life – etiquette on the Internet (“netiquette”) not only unites people but also separates them, by emphasising differences in status (gender, age, class, social position, national and religious affiliation, etc.).
As a whole, the functions of integration/differentiation/distinction enable an individual to streamline relations both within his or her reference group and outside it, i.e. with “outsiders”.
There is no universal “netiquette” that is uniform for everyone in modern society: each socio-demographic and/or socio-professional group develops its own rules of decency alongside the generally accepted ones, and these serve as an integral element of its own subculture – understood not in a derogatory sense, but as a variant of a minority culture localised in cyberspace. It is therefore not surprising that special rules of etiquette have formed on the Internet. In the strict sense of the word, netiquette is not etiquette, since it does not (and cannot) perform the main function of traditional etiquette: its differentiating function – determining the individual’s place in the social hierarchy – remains merely virtual and does not genuinely confer status, since virtual communication lacks human, bodily contact. Consequently, the communicative and integrative function of “netiquette” clearly prevails.
This function manifests itself in two ways. Firstly, it is one of the tools for building the collective identity of the members of a particular virtual community: by developing its own unique rules of behaviour, a virtual group becomes aware of itself as a whole and represents itself to others. Secondly, “netiquette” promotes individual socio-cultural identification: knowing and following its rules enables individuals to confirm their belonging to a particular community and to mark themselves as insiders, since the rules of that particular “netiquette” are not set down in any manual of manners such as Archbishop Giovanni della Casa’s Galateo: or, the Rules of Polite Behaviour.
It is not by chance that a fairly widespread (and the most severe) penalty for infringing the rules of a specific group’s “netiquette” is a form of expulsion from virtual society, i.e. disconnecting the offender from a given Internet resource. “Netiquette” thus acts simultaneously as a mechanism of socialisation and of marginalisation.
Unlike in traditional communities, the possibility of group influence on an individual (e.g. through public opinion) on the Internet is limited. The anonymity of virtual communication makes it easy to avoid social pressure, and therefore the only effective method of influence in a virtual environment is the voluntary inclusion of a person in the social system, his/her internalisation of group values and rules. This implies the conscious acceptance of some obligations, mainly moral ones – no matter whether shared or not by the external society – by each participant in virtual interaction. From this viewpoint, “netiquette” rules can be seen as a guideline demonstrating the standard of correct behaviour in cyberspace. Therefore, these rules are of a marked ethical nature.
An analysis of the various versions of Internet etiquette shows that the rules generally do not differ much from the traditional ones: they imply respect for communication partners and are based on the “golden rule” of group morality. At the same time, besides universal ethical standards equally applicable to both real and virtual communication, “netiquette” also includes a number of specific rules arising from the specificities of the communication channel. For instance, it is not advisable to write messages entirely in capital letters, which is equivalent to shouting; to send e-mail attachments without warning; to use coarse language; to send e-mails with an empty subject line; or to distribute spam, unsolicited mail and advertisements.
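Several of the channel-specific rules just listed are mechanical enough to be screened automatically. A minimal sketch, in which the function name and the particular checks chosen are illustrative assumptions rather than part of any actual netiquette standard:

```python
def netiquette_violations(subject: str, body: str, has_attachment: bool,
                          attachment_announced: bool = False) -> list[str]:
    """Return the netiquette rules a message appears to break (illustrative)."""
    problems = []
    letters = [c for c in body if c.isalpha()]
    # Writing entirely in capitals is conventionally read as shouting.
    if letters and all(c.isupper() for c in letters):
        problems.append("message written entirely in capitals")
    if not subject.strip():
        problems.append("empty subject line")
    if has_attachment and not attachment_announced:
        problems.append("attachment sent without warning")
    return problems


# A shouted, subject-less message with a surprise attachment breaks all three rules.
print(netiquette_violations("", "READ THIS NOW!", has_attachment=True))
```

Such checks can only flag form, not substance – coarse language or spam still requires human (or at least far more sophisticated) judgement, which is precisely the essay’s point about ethics outrunning formal rules.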
It can be assumed, however, that with the further development of information technology, virtual communication will converge with the usual forms of interaction, until “netiquette” is absorbed into traditional etiquette.
As is well known, most journalistic codes of ethics proclaim freedom of speech as the highest moral value: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers” (Art. 19 of the Universal Declaration of Human Rights). This freedom finds its maximum expression on the Internet: specialised institutions lose their monopoly on the generation of information, and any user – at minimal cost – can compose any message and make it publicly available without submitting it to any editorial control.
The specificity of the Internet, however, makes it easier to spread fake, anti-social and simply illegal material, because the Internet is a type of media in which it is easy to hide, change or falsify the identity of the author of a statement. On the Internet, in fact, it is never possible to say with certainty who the author of a message actually is (unless the information is protected by special cryptographic means), and a text published on the Internet can at any time be modified beyond recognition, moved to another server or simply destroyed.
The situation is worsened by the fact that there are no institutional or professional criteria for the quality and reliability of information on the Internet, except in cases where relevant news is accompanied by reliability indicators outside the web (e.g. the reputation of the author or of the institution that runs a website, etc.). Mass communication on the Internet is therefore essentially anonymous and unaccountable.
In full accordance with the postulates of web ideology, the attempts to solve the problem of the dissemination of questionable information on the Internet (“fake news”) by creating specialised laws and by also introducing censorship, are resolutely opposed by members of the net community and usually end in failure. It should be borne in mind that it is impossible to provide a universal definition of what “reprehensible information” means, considering the various countries’ cultural, national and religious characteristics. Therefore, the development of a unified information policy in this area is hardly possible.
As an acceptable alternative to censorship and other legislative restrictions, it has been proposed to filter materials published on the web using an algorithm for evaluating electronic documents. The advantage of this approach is that it gives users freedom of choice, enabling them to decide what kind of information they wish to receive. Admittedly, for this choice to be truly conscious and responsible, a fully-fledged value system (i.e. the ability to discern) is necessary, both for those who make evaluations and assign ratings and for those guided by third-party ratings, since the basis of any evaluation and rating is the identification of value – in this case, of the information disseminated via the Internet. The rating methodology (the assessment of reliability) cannot therefore be effective without enhancing the information culture of society as a whole.
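The user-controlled filtering described above can be sketched very simply: third parties rate documents per category, and each user sets his or her own tolerance for each category. The category names and scores below are hypothetical; the general shape follows the old W3C PICS idea of user-side, rating-driven selection.

```python
# Minimal sketch of user-side filtering driven by third-party content ratings.
# Higher scores mean more objectionable content in that category.


def passes_filter(doc_ratings: dict[str, int], user_limits: dict[str, int]) -> bool:
    """Accept a document only if every rated category is within the user's limit."""
    return all(doc_ratings.get(cat, 0) <= limit
               for cat, limit in user_limits.items())


# A user tolerant of mild profanity but strict about unverified claims:
limits = {"violence": 1, "profanity": 2, "unverified-claims": 0}
doc = {"violence": 0, "profanity": 1, "unverified-claims": 2}
print(passes_filter(doc, limits))  # rejected: unverified-claims exceeds the limit
```

The crucial property, in line with the essay’s argument, is that the decision stays with the recipient: the ratings merely inform the choice, they do not suppress the document for everyone.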
Ethical aspects relating to cyberspace: Copyright and privacy
In recent years, there has been a trend in cyberspace ethics towards the emergence of intra-net mechanisms and self-regulatory systems. In particular, in many European countries, information service providers have begun to introduce voluntary self-limitation. In the UK, for instance, the independent Internet Watch Foundation continuously monitors websites for material that infringes moral and legal standards, maintains lists of offending resources and, where necessary, has access to them blocked.
A solution to the problem of the quality of information on the Internet may well come from traditional media, which in recent years have been increasingly committed to producing electronic versions of their print and broadcast editions. Moreover, exclusively online newspapers and magazines have already emerged which, thanks to their serious and cautious approach, have won the online public’s trust. These publications can play an extremely important role through wide-ranging surveys; the evaluation of electronic publications; the maintenance of the virtual media’s reputation; and supervision of compliance with the basic rules and principles of professional journalistic ethics on the Internet.
Furthermore, the ethical conflict between the author (owner) of an information product and the Internet public must be considered: the contradiction between the desire for public accessibility of newly created information and the need to protect copyright.
The concept of “copyright” emerged almost three hundred years ago – the first law on the subject, the Statute of Anne, was enacted in 1709 and came into force on 10 April 1710 – out of the need to strike a balance between the interests of the creators of original works and the needs of society. It therefore rests on two non-coincident and sometimes even contradictory moral principles: on the one hand, the author’s natural right to dispose of the fruits of his or her labour; on the other, the principle of universal free access to knowledge, which ensures the progress of science and art and encourages the free use of any information and ideas without restriction.
Modern communication technologies create almost unlimited possibilities for personal possession and reproduction of information and this greatly complicates copyright protection. As a result, previous international laws and agreements on the protection of intellectual property are inadequate and traditional ideas on copyright need to be revised.
How should current legislation be changed to meet modern realities? There are two conceptual approaches to solving this problem. The generally accepted trend in improving national and international information law is to broaden the scope of copyright and extend it to electronic forms of information.
At the same time, it should be emphasised that copyright arises from the fact of creating a work, and does not depend on the nature of the medium. Hence the problem lies in the need for proper interpretation of the legislation in force and in the implementation of the existing rules to the new conditions.
However, the opposite viewpoint – that compliance with copyright on the Internet slows down the web’s development and hampers the creation of its content – is increasingly expressed. The most radical proponents of this view argue that, since the free exchange of knowledge and ideas is the basis of information ethics, copyright categories are in principle inapplicable to the Internet, which should instead be perceived as a public information space in which the value of any specific copyrighted text is levelled off. These ideas have found their most complete embodiment in the hackers’ ethical principles. Bear in mind that the word “hacker” is used here in its original and positive meaning: a person who uses his or her computer skills to explore the details of programmable systems and experiment with extending their use for everyone’s benefit. The derogatory use that some people make of the word does not reflect this original meaning.
In line with this viewpoint, it is proposed to limit or even remove some rules from the conceptual foundations of copyright, e.g. to authorise the fair and proper use of original works and ultimately relinquish the idea of intellectual property altogether.
It is clear that the origins of this approach lie in the ideas of freedom on the net, based on the principle that information should not be encumbered by legal and/or authorisation schemes. In fact, however, even those who support the abolition of intellectual property are not ready to relinquish the rights to their works completely: to remove their names from titles and, especially, to forgo revenues and fees.
It is therefore clear that the primary task in formulating modern information legislation is to maintain a balance between the interests of the producers of software and information resources on the one hand, and the interests of their consumers on the other. Otherwise, the development of new communication technologies will contribute to deepening information inequality in modern society, further dividing it between the well informed and the less informed.
A further right – the right to privacy – is one of the most fundamental rights: it reflects the natural human need for privacy, confidentiality and autonomy, as well as for the protection of one’s own “personal sphere” from outside intrusion, and the ability to make decisions without being spied on and to remain oneself and maintain one’s own individuality.
It is no coincidence that in all international documents declaring human rights and freedoms, as well as in all codes of ethics related to the sphere of information, privacy is proclaimed as a fundamental moral value, which constitutes the foundation of human freedom and security and therefore requires respect and protection. It is interesting to note that, unlike other human rights formulated in the 18th century, the right to the inviolability of private life received legislative recognition and protection only relatively recently, in the mid-20th century. This can be explained precisely by the development of information and communication technologies, under whose influence intrusion into the individual’s private sphere has become much easier.
In particular, despite the declared anonymity of Internet surfing, there are technologies that make it possible to collect information on users’ behaviour on the web. The collection of such information is not in itself reprehensible, but only if some rather strict requirements are met. Information must be obtained in good faith, with the knowledge and consent of the data subject (the person to whom the information relates). It must be collected for well-defined, lawful purposes and used in strict compliance with those purposes. It must be protected from unauthorised access, must not be redundant, and must not be associated with personally identifiable data without the user’s permission.
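The conditions listed above amount to a checklist, which can be expressed as a simple compliance test. The record type, field names and rule set here are hypothetical illustrations of those conditions, not an implementation of any particular data-protection regime:

```python
from dataclasses import dataclass


@dataclass
class CollectionRecord:
    """Hypothetical record of how a piece of user data was collected and held."""
    consent_given: bool          # obtained with the data subject's knowledge
    stated_purpose: str          # the well-defined purpose declared at collection
    actual_use: str              # what the data is actually used for
    access_controlled: bool      # protected from unauthorised access
    linked_to_identity: bool     # associated with personally identifiable data
    identity_link_permitted: bool  # the user permitted that association


def is_collection_acceptable(rec: CollectionRecord) -> bool:
    """Check consent, purpose limitation, access protection and identity linkage."""
    return (rec.consent_given
            and rec.actual_use == rec.stated_purpose
            and rec.access_controlled
            and (not rec.linked_to_identity or rec.identity_link_permitted))
```

For example, data collected with consent for “service analytics” but then used for advertising would fail the purpose-limitation check, matching the essay’s requirement of strict compliance with the stated purposes.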
In practice, however, these rules are not always complied with. This requires appropriate solutions to be found, thus enabling the Internet users’ privacy to be effectively protected from unauthorised interference by both governmental and commercial agencies.
The reason for doubt here is very simple: whoever is interested in spying on third parties will pay the creator of the appropriate software more than the international or governmental organisation, or the single private agency, that offers very low fees to the creator of software meant to protect citizens’ privacy. Those who are better paid obviously have more incentive to develop spyware than the technician with a permanent job and a fixed salary. This is the immoral logic of capitalism.