
Science & Technology

Nuclear and radiation technology helping society and health

The European Commission hosted a conference on Addressing Societal Challenges Through Advancing the Medical, Industrial and Research Applications of Nuclear and Radiation Technology. The conference sought to identify cross-cutting actions that the European Commission, EU countries and other stakeholders can take to maximise the societal benefits of nuclear and radiation technologies, whilst providing high standards of quality and safety to European citizens.

The conference was opened by Climate Action and Energy Commissioner Miguel Arias Cañete and Health and Food Safety Commissioner Vytenis Andriukaitis, alongside other senior speakers including Mr Yukiya Amano, the International Atomic Energy Agency’s (IAEA) Director General, and Dr Maria Neira from the World Health Organisation (WHO).

Commissioner Arias Cañete said: “Europe is a world leader in developing and exploiting radiation technologies to the advantage of citizens and society. The non-power application of these technologies is a success story Europe can be proud of. The ambition of the European Commission is to build on this leadership position with the goal of improving the quality of life of European citizens, generate employment and economic growth and maintain high standard of radiation protection and safety”.

Commissioner Andriukaitis added: “Nuclear and radiation technology offers immense opportunities in the field of modern medicine, with early diagnosis of diseases and cancer treatment for children being just two examples. Our task is to maximise this potential while at the same time managing the challenges posed by new technologies. Close coordination, information sharing and mutual learning are key elements of this task”.

A range of EU policies play a significant role in the present and future of nuclear and radiation technology, including: the Euratom Basic Safety Standards Directive; the Spent Fuel and Radioactive Waste Directive; the Nuclear Safety Directive; and, more broadly, the EU legislation and initiatives on medical devices, pharmaceuticals and human health. Research and innovation in this area are also supported through the Euratom and Horizon 2020 research programmes.

The conference facilitated an in-depth discussion with a broad range of experts. The outputs will contribute significantly to the Commission’s work in this area, and will lead to actions that will enhance the implementation of the Euratom framework, and support integrated activity across several Commission policy areas.

Science & Technology

American Big Tech: No Rules

Over the past few years, a long-term trend towards the regulation of technology giants has clearly emerged in many countries around the world. Interestingly, attempts to curb Big Tech are being made in the United States itself, where these corporations are headquartered. The Big Five tech companies are well known to everyone: Microsoft, Amazon, Meta (banned in Russia), Alphabet and Apple. From small IT companies, they quickly grew into corporate giants; their total capitalisation today is approximately $8 trillion (more than the GDP of most G20 countries). American regulators’ concern about the power of these corporations arose not so much from their unprecedented economic growth as from their ability to influence domestic politics, censor presidents, promote fake news, and so on.

No laws, no rules

Traditionally, Americans have been less eager to put pressure on Big Tech than, for example, the Europeans, who introduced the General Data Protection Regulation (GDPR) in 2018, followed by the Digital Markets Act (DMA) and the Digital Services Act (DSA).

In the United States, there is no law that protects the personal data of users at the federal level; regulation is carried out only at the level of individual states. California, Virginia, Utah and Colorado have adopted their own privacy laws. Florida and Texas have social media laws that aim to punish internet platforms for censoring conservative views.

Dozens of federal data privacy and security bills have failed for lack of bipartisan support.

One of the few areas where US legislators have reached a consensus is the protection of children’s online privacy. The relevant bill echoes many provisions of the DSA, such as establishing requirements for the transparency of algorithms and obliging companies to oversee their products.

It is also worth mentioning the accession of the USA in May 2021 to the Christchurch Call, an international initiative to eliminate terrorist and violent extremist content on the Internet; the Call, however, is not legally binding.

Perhaps all the successes of the US in the “pacification” of Big Tech are limited to the abovementioned steps.

As for antitrust legislation, it is becoming tougher, but it is also applied very selectively. The numbers speak for themselves: there have been 750 mergers in the high-technology sector in the last 20 years.

Thus, we can conclude that today in the United States, there is still no comprehensive regulation of digital platforms.

Causes of Regulatory Inertia

There are several reasons for America’s soft attitude towards the dominant companies. First, the intellectual basis of U.S. antitrust policy over the past 40 years has largely rested on the ideas of the Chicago school of economics, according to which it is inappropriate for the state to over-regulate companies as long as they are economically efficient and do not harm the interests of consumers. The Chicago school’s leading figure, Robert Bork, has many followers, so lawsuits filed by the Federal Trade Commission or individual state prosecutors often come to nothing. For example, in June 2021 a court dismissed two antitrust lawsuits against Facebook over its acquisitions of WhatsApp and Instagram, suits that could have forced the company to sell those assets. They had been filed in December 2020 by the Federal Trade Commission (FTC) and a group of attorneys general from 48 states. U.S. District Judge James Boasberg ruled that the FTC’s lawsuit was “not legally sound” because it did not provide enough evidence to support the claim that Facebook holds a monopoly position in the social media market.

Second, Americans profess the “California model” of Internet governance, which also implies minimal government intervention in the affairs of Silicon Valley companies.

Third, one can note the close relationship between government structures and private business. This connection is maintained both through the “revolving door” phenomenon (civil servants going to work for corporations and vice versa) and through corporations’ active lobbying. The American “Tech Five” actively engage with the US Congress and the European Parliament, spending impressive sums on lobbying and hiring staff with political connections. In 2020, Big Tech’s total lobbying spending in the US Congress amounted to more than $63 million.

Finally, given the fragmentation of the global political and economic space, techno-economic blocs are forming, centred precisely on these tech giants. It is they who provide America with economic and technological leadership, dominance and influence in the global digital space, which explains the authorities’ cautious attitude towards the industry.

Too much freedom…

At the same time, the appetite for reining in the tech giants is also growing in the United States, fuelled by allegations of a variety of significant abuses. For example, the report of the House Subcommittee on Antitrust, Commercial and Administrative Law, issued in October 2020, highlights violations such as the dissemination of disinformation and hatred, the monopolisation of markets and the violation of consumer rights.

Concerns about the political and economic power of the dominant companies arose against a backdrop of declining wages, declining start-up formation, declining productivity, increasing inequality and rising prices. In addition, some experts point out that “concentrated corporate power actually harms workers, innovation, prosperity and sustainable democracy in general.” Some politicians and experts fear that the US economy has become too monopolised and therefore less attractive to the rest of the world, which reduces the ability of the United States to make a constructive contribution to the development of basic international standards in the field of competition and technology.

Another issue that worries the American establishment is content moderation. The 2020 presidential election and the storming of the US Capitol showed the power of social media and its impact on public consciousness. Joe Biden, like his predecessor Donald Trump, has threatened to reform or completely repeal Section 230 of the Communications Decency Act, under which social networks are not “publishers” of information and are therefore not responsible for statements made by third parties using their services. While the question of abolishing or reforming this section remains unresolved, 18 bills addressing it have already been introduced by various members of Congress.

As mentioned above, there is no comprehensive regulation of the tech giants in the United States, but this does not mean that they feel entirely at ease on American soil or are never fined. Recall the 2019 case in which the FTC fined Facebook a record $5 billion over the leak of millions of users’ data to Cambridge Analytica, which had advised Donald Trump’s campaign. The fine was the largest in US history and, cumulatively, was almost five times larger (as of February 2021) than all fines imposed by the EU under the GDPR. In addition, a series of antitrust lawsuits against Google followed in late 2020. It is thus obvious that in some cases companies do experience significant pressure from regulators.

From rhetoric to practical steps?

Washington Post columnists predicted that 2022 could be a watershed year for the regulation of gatekeepers in the USA. However, if we sum up the interim results of Joe Biden’s fight with the tech giants, progress is not yet so obvious. The most notable of the proposals currently before Congress is an antitrust bill (the American Innovation and Choice Online Act) that would prohibit Apple, Alphabet and Amazon from favouring their own services and products in app stores and on e-commerce platforms to the detriment of those offered by their competitors. According to some experts, the bill has good prospects and may be put to a vote as early as this summer.

The US authorities have demonstrated that they are not ignoring the problem and are responding to it. A June 9 presidential executive order on combating monopolistic practices, and the appointment of well-known critics of Big Tech to key positions, such as Lina Khan (FTC Chair), Tim Wu (Special Assistant to the President for Technology and Competition Policy) and Jonathan Kanter (head of the U.S. Justice Department’s Antitrust Division), are proof of this. The American government earns points for showing that it is proactive. However, all of the aforementioned measures are only first, cautious steps.

The solution to the problem of tech sector regulation is complicated not only by the lobbying power of technology companies, but also by the fact that there is no unanimity in the US Congress regarding how narrow and rigid the rules should be. There are fierce debates between representatives of both parties on this issue.

It is hardly worth expecting the United States to quickly adopt anything similar to the Digital Markets Act, the Digital Services Act or the GDPR at the federal level. That is a matter for the more distant future, one that will arrive only when a consensus on regulation emerges within the leading parties and the current model of interaction between regulators and big private business has been thoroughly revised.

Today, America lags behind its European peers in rule-making. The EU’s global leadership in technology regulation could well spur the US government to take more active steps. As experts note, this “gap” leaves American companies exposed in the other countries where they operate, and the US’s status as a leader in digital products and services is threatened when the policies and rules of the digital marketplace are set by other states.

From our partner RIAC

Science & Technology

Artificial intelligence and moral issues: The cyborg concept

San Francisco, California, March 27, 2017. Entrepreneur Elon Musk, one of the masterminds behind projects such as Tesla and SpaceX, announced his next venture: Neuralink. The company aims to merge humans with electronics, creating what Musk calls the neural lace – a device that, injected into the jugular vein, would reach the brain and unfold into a network of electrical connections wired directly to human neurons. The idea is to develop enhanced brain-computer interfaces that increase the extent to which the biological brain can interact and communicate with external computers. Reaching down to the level of individual neurons, the lace would be a mesh connecting brain matter directly to a computer. A human being so equipped would be a cyborg: a biological mix of man and machine.

Prof. Kaku wonders: “What drives us to merge with computers rather than compete with them? An inferiority complex? Nothing can prevent machines from becoming ever smarter, until they are able to programme and build robots themselves. This is why humans try to secure superhuman abilities for themselves”.

As we all know, although Elon Musk has made clear the dangers of creating an artificial intelligence that gets out of control, he is also convinced that, if the project is developed properly, humans will enjoy the power of advanced computer technology and thus take a step beyond what current biology allows. Nevertheless, while Neuralink’s technology is still at an embryonic stage, many people insist that the merging of man and machine is not so remote, and are convinced that in one way or another it has been happening for decades.

In 2002, Prof. Kevin Warwick – an engineer and professor of cybernetics at Coventry University in the UK – demonstrated that a neural implant could be used not only to control a prosthesis, but also to communicate with another human being’s nervous system.

In that same year, he and his wife had sets of electrodes – 100 each, to be precise – implanted in their nervous systems and connected to a computer. They then linked the two nervous systems so that they could communicate with each other. Every time his wife closed her hand, his brain received an impulse; if she opened and closed her hand three times in a row, he received three impulses. In this way two nervous systems were connected. Who knows what might happen in the future.

Instead of talking and sending messages or emails, will we soon be able to communicate with each other by thought alone? It is only a matter of time before cybernetic technology offers us an endless array of possibilities: ordering something just by thinking about it, listening to music directly in our brain, or searching the Internet simply by thinking about what we want to find.

Prof. Kaku states: “We are heading towards a new form of immortality, that of information technology. By digitalising all the known information in our consciousness, the soul probably becomes computerised. At that juncture the soul and information could be separated from the body, and when the body dies the essence, soul and memory would live on indefinitely”.

In that case, humans will set about replacing body and mind, piece by piece, as they prepare to transform into cyborgs.

The marriage between man and machine is increasingly taking place through personal computers, tablets, mobile phones and even implants that provide an extraordinarily large amount of data, ranging from a person’s vital signs to geolocation, from diet to recreational behaviour. We are therefore destined to merge with the machines we are creating. These technologies will help us make the leaps forward that can take us beyond our planet and the Moon – as we will see more clearly later on. This is the future that awaits us: a future in which evolution will no longer proceed by natural selection, as Darwin’s theory maintains, but by human management. And this will happen in the coming decades – the short-term future.

In Icarus (vol. 224, no. 1, May 2013) – a journal dedicated to planetary science, published under the auspices of the Division for Planetary Sciences of the American Astronomical Society – the mathematician Vladimir Ščerbak and the astrobiologist Maksim A. Makukov, both from Kazakhstan, published a study of the human genome: The “Wow! signal” of the terrestrial genetic code.

The study’s conclusions are shocking. There is allegedly a hidden code in our DNA containing precise mathematical patterns and an unknown symbolic language. Examination of the human genome reveals a sort of non-terrestrial imprint on our genetic code, which would function just like a mathematical code. The probability that this sequence could repeat nine times in the randomness of our genetic code – as “assumed” by Darwin’s theory – is one in ten billion. The DNA, then, certainly has origins that are not random and have nothing to do with Darwin’s 19th-century theories, tiredly repeated to this day.

Our genes have been artificially mutated, and if the theory of the two Kazakh scholars were true, man’s inclination to turn into a cyborg would be perfectly plausible, since he possesses a non-random intelligence that can join the artificial intelligence which, for the time being, is only the preserve of sophisticated computers and early attempts at humanoid robots. Here, too, lies the answer to Prof. Kaku’s question: this is why, from time immemorial, humans have had a penchant for creating their own variants and improving them with cybernetics (programming robots with artificial intelligence), and why they are eager to merge with AI itself. Many scholars and experts agree that, in order to survive, evolve and travel across the cosmos, any intelligent species must overcome the biological stage. By leaving the Earth’s atmosphere and trying to go further, much further, humans must be able to adapt to different environments: to places where the atmosphere is poisonous, or where the gravitational pull is much stronger or much weaker than on our planet.

The best answer to Prof. Kaku’s question is that humans are somehow compelled to create robots ever more like themselves not to satisfy a desire to outdo one another by creating intelligent creatures in their own image, but to fulfil their destiny beyond the Earth. This is borne out by further clues and signs that emerge from an analysis of the latest technologies developed by man in anticipation of the next phase of his evolution in outer space.

Science Robotics – the prestigious scientific journal published by the American Association for the Advancement of Science – published the article Robotic Space Exploration Agents (vol. 2, issue no. 7, June 2017), written by Steven Chien and Kiri L. Wagstaff of NASA’s Jet Propulsion Laboratory at the California Institute of Technology. According to their theory, astronauts travelling through space will very soon be replaced by robots – synthetic human beings capable of making autonomous decisions using artificial intelligence.

Space is a very hostile environment for humans: radiation is intense and moving in a vacuum is far from easy, whereas machines can move nimbly in space provided their electronic circuits are protected from damage. It is therefore easier and cheaper for a machine to explore another planet or another solar system, and space exploration is expected to be more machine-based than man-based. It will not be man who explores space on a large scale: we will send machines with artificial intelligence, which will not have acceleration problems and will be able to travel outside the solar system using the acceleration of gravity. On-board intelligence able to decide for itself would be invaluable on a mission to, say, Alpha Centauri – our nearest star system – since it would take 8 years and 133 days to send a signal to Earth and receive a response. Hence why not use artificial intelligence to make decisions and do the work? Missions to Mars and Alpha Centauri guided by artificial intelligence could become a reality.

NASA was already testing this technology in 1998 with the Deep Space 1 probe, sent to the asteroid belt between Mars and Jupiter. Using a system called AutoNav, the probe photographed asteroids along its itinerary without any human support. The Mars rover is essentially an autonomous terrestrial robot that travels around Mars collecting samples and transmitting information. It is a newly fielded autonomous system; as soon as artificial intelligence is sufficiently reliable to be deployed aboard a spacecraft, there will be robotic spacecraft capable of reaching Mars on their own. And once we send robotic spaceships programmed with artificial intelligence, we will relinquish all possibility of control, because it will be our “envoys” that make decisions on the spot.

Science & Technology

Artificial intelligence and moral issues: AI between war and self-consciousness

At the beginning of 2018, the number of mobile phones in use surpassed the number of humans on the planet, reaching 8 billion. In theory, each of these devices is connected to two billion computers, which are themselves networked. Given the incredible amount of data this involves, and considering that the computer network is in constant communication and constantly growing, is it possible that mankind has already created a massive brain – an artificial intelligence that has taken on an identity of its own?

The field of robotics is constantly evolving and continues to make strides. It is therefore clear that sooner or later we shall move from artificial intelligence to super-intelligence: an entity on this planet that is smarter than we are, after which we will no longer be the most intelligent beings around. It will not be pleasant when an artificial intelligence, with its knowledge and intellectual abilities, corners the human being and surpasses people of flesh and blood in every field of knowledge. It will be a pivotal moment that radically changes world history. For now, our existence is justified by the fact that we are at the top of the food chain; but once an entity exists that does not need to feed itself on pasta and meat, and requires only solar energy to perpetuate itself indefinitely, what will we exist for?

If sooner or later we are to be replaced by artificial intelligence, we must begin to prepare ourselves psychologically. Portland, Oregon, April 7, 2016: the US Defense Advanced Research Projects Agency (DARPA) launched the prototype of the unmanned anti-submarine vessel Sea Hunter, marking the beginning of a new era. Unlike the Predator and other Air Force drones, this vessel needs no remote operator and is built to navigate on its own while avoiding all kinds of obstacles at sea. It carries enough fuel to stay at sea for up to three months, is very quiet, and transmits encrypted information to defence intelligence services. When the US Department of Defense says that an unmanned submarine would not be launched without remote control, it is telling the truth. But there is more to consider: Russia has developed a remotely piloted submarine carrying a nuclear weapon, and between 5 and 15 years may elapse before the US defence establishment can respond to a remotely piloted submarine with a nuclear weapon on board.

It has always been said that the war drone replaces the flesh-and-blood soldier, who becomes a remote “playstation” operator. Hence the idea of the drone as a substitute for the human soldier, who would be guaranteed total safety and security and spared unnecessary dangers. What was forgotten, however, is that the remote link can be intercepted by the enemy, who can then switch targets and strike the drone’s own army. At that point, drones would have to be made completely autonomous. Such a drone would be a killing machine capable of wiping out entire armies, which is why care should be taken to avoid their proliferation on battlefields: any kind of accident, a fire or even a minor malfunction could trigger a “madness” mechanism that causes the machine to kill anyone. Developing killer robots is possible. Facial recognition technology has made great strides, and artificial intelligence can recognise faces and detect targets. In fact, drones are already being used to detect and strike individuals based on their facial features: they kill and injure.

The application of artificial intelligence to military technology will change warfare forever. It is possible for an army’s autonomous machines to take wrong decisions, causing tens of thousands of casualties among friends, enemies and defenceless civilians. What if they even go so far as to ignore instructions? If autonomous, self-directed killing machines independent of human command are designed, could we be facing the violent extinction of the human race?

While many experts and scholars agree that humans will be the architects first of their own violent decline and then of their own destruction, others believe that the advancement of artificial intelligence may be the key to mankind’s salvation.

Los Angeles, May 2018: at the University of California, Professor Veronica Santos was working on a project to create increasingly human-like robots capable of sensing physical contact and reacting to it, and was testing different approaches to robot tactile sensitivity. Combining all this with artificial intelligence, there may one day be a humanoid robot capable of exploring space as far as Mars. Humanoid robots are increasingly a reality, with applications ranging from neuroprosthetics to machines for colonising celestial bodies.

Although the use of humanoid robots is a rather controversial topic, the sector has great prospects, especially for those prepared to invest in it. Funding development projects could prove useful in creating artificial human beings that are practically impossible to distinguish from flesh-and-blood individuals.

These humanoids, however, could conceivably express desires and feel pain, as well as display a wide range of feelings and emotions. Yet it is well known that we do not really understand what an emotion is. Would we therefore be able to create an artificial emotion, or would we make fatal errors in programming it? If a robot can distinguish between good and evil and can know suffering, would this be the first step towards its developing feelings and a conscience?

Let us reflect. Although computers surpass humans in data processing, they pale into insignificance before the complexity and sophistication of the central nervous system. In April 2013, the Japanese technology company Fujitsu tried to simulate the brain’s network of neurons using one of the most powerful supercomputers on the planet. Despite being equipped with 82,000 of the world’s fastest processors, it took over 40 minutes to simulate just one second of 1% of human brain activity (Tim Hornyak, “Fujitsu supercomputer simulates 1 second of brain activity”, https://www.cnet.com/culture/fujitsu-supercomputer-simulates-1-second-of-brain-activity/).
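A rough back-of-the-envelope reading of those figures (assuming, loosely, that the cost scales linearly with the fraction of the brain and the length of time simulated) gives a sense of the gap: 40 minutes is about 2,400 seconds, so the machine ran roughly 2,400 times slower than real time for just 1% of the brain; simulating the whole brain in real time would therefore require on the order of 2,400 × 100 = 240,000 times that supercomputer’s effective power.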

The Japanese-American astrophysicist Michio Kaku – who graduated summa cum laude from Harvard University – stated:

“Fifty years ago we made a big mistake thinking that the brain was a digital computer. It is not! The brain is a machine capable of learning, which regenerates itself when it has completed its task. Children have the ability to learn from their mistakes: when they come across something new, they learn to understand how it works by interacting with the world. This is exactly what we need and to do this we need a computer that is up to the job: a quantum computer”.

Unlike today’s computers, which process data with bits – a binary series of 0s and 1s – quantum computers use quantum bits, or qubits, which can be 0 and 1 at the same time. This enables them to perform millions of calculations simultaneously, in much the same way as the human brain does.
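In slightly more formal terms (a standard textbook formulation rather than anything specific to the machines discussed here), a single qubit’s state is a superposition |ψ⟩ = α|0⟩ + β|1⟩ with |α|² + |β|² = 1, and a register of n qubits carries 2ⁿ such amplitudes at once; it is this exponentially large state space, manipulated in parallel, that underlies the “simultaneous calculations” mentioned above.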

Kaku added: “Robots are machines and as such they do not think and have no silicon consciousness. They are not aware of who they are and their surroundings. It has to be recognised, however, that it is only a matter of time before they can have some awareness”.

Is it really possible for machines to become sentient entities fully aware of themselves and their surroundings?

Kaku maintained: “We can imagine a future time when robots will be as intelligent as a mouse, then a rabbit, then a cat, a dog, until they become as cunning as a monkey. Robots do not know they are machines, and I think that, by the end of this century, robots will probably begin to realise that they are different, that they are something other than their master”.
