Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.
Ethics and Artificial Intelligence
There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley hurtling down the railway line. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different track. However, there is another person tied to that track. Do you pull the lever or not?
There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made, and different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even when presented with a more complicated variation of the trolley problem.
As for artificial intelligence, such a situation could arise, for example, when a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority: those of the passengers, the pedestrians or neither. The Massachusetts Institute of Technology has created a special website, Moral Machine, that deals with this very issue: users can test out various scenarios on themselves and decide which courses of action would be the most worthwhile.
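To make the difficulty concrete, here is a deliberately naive sketch of how a rule that simply minimizes the number of casualties might be encoded. Everything in it (the option names, the casualty counts, the decision rule itself) is a hypothetical illustration, not drawn from any real vehicle's software:

```python
# A deliberately naive "minimize casualties" policy for an unavoidable
# accident. Every name and number here is hypothetical.

def choose_manoeuvre(options):
    """Pick the option with the fewest expected casualties.

    options: list of (name, expected_casualties) pairs.
    """
    return min(options, key=lambda opt: opt[1])[0]

# Two hypothetical outcomes: stay on course (five casualties) or
# swerve (one casualty) -- the trolley problem restated as data.
options = [("stay_on_course", 5), ("swerve", 1)]
print(choose_manoeuvre(options))  # -> swerve
```

The point of the sketch is what it leaves out: it says nothing about who the casualties are, whether passengers should be weighted differently from pedestrians, or who is responsible for hard-coding that weighting. These are exactly the questions the MIT website puts to its users.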
Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.
Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding and how useful to society they are, among other criteria. Those with low ratings face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.
The Main Problems Facing the Law
The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted, and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, thus complicating the task of determining responsibility. These characteristics pose problems of predictability, and of entities that can act independently while not themselves being subject to legal responsibility.
There are numerous options in terms of regulation, including regulation based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of property, namely animals, since the latter are also capable of autonomous actions. In Russian law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility therefore comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the person or property of an individual shall be subject to full compensation by the person who inflicted the damage.
Proposals on the application of the law on animals have been made, although they are somewhat limited. First, the application of legislation by analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since the rules governing wild animals are more stringent. The question arises here, however, of how to draw such a distinction given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies due to the unexpected risks of liability for creators and inventors.
Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities. Since a legal entity is an artificially constructed subject of the law, robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. Historically, for example, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that show no outward signs of agency are vested with rights. Even today, there are examples of unusual objects being recognized as legal entities, in both developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or piece of property into a legal entity, expanding the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.
Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law. Without determining whether a company (or state) has free will or intent, or whether it can act deliberately or knowingly, it can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots in order to recognize them as responsible for their actions.
The analogy with legal entities is problematic, however, because the concept of the legal entity exists in order to carry out justice in a speedy and effective manner. The actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are typically deemed criminally liable only if an individual who performed the illegal action on behalf of the legal entity can be identified. The actions of artificial intelligence-based systems, by contrast, will not necessarily be traceable to the actions of an individual.
Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) are obliged to redress the injury inflicted by the source of increased danger, unless they can prove that the injury was inflicted as a result of force majeure or through the intent of the injured person. The problem lies in identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.
National and International Regulation
Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.
I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.
In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in a report prepared under the supervision of French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus the potential of France’s comparative advantages and competencies in artificial intelligence on sectors where companies can play a key role at the global level, and on technologies that are important for the public interest.
Seven key proposals are made, one of which is of particular interest for the purposes of this article: making artificial intelligence more open. The algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. Algorithms can also be biased: in the process of self-learning, for example, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence partly on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and were therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, defining the ethical responsibilities of those working in artificial intelligence, creating an ethics advisory committee, and so on.
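The mechanism by which bias creeps in can be shown with a toy model. The sketch below trains a "recidivism score" as simple group frequencies on entirely synthetic records; the groups, numbers and recording bias are invented for illustration, and this is not the actual system from the US case:

```python
# Synthetic example: a "recidivism score" learned from biased historical
# data. The groups, labels and numbers are invented for illustration.
from collections import defaultdict

def train(records):
    """Learn P(reoffend | group) as simple observed frequencies."""
    counts = defaultdict(lambda: [0, 0])  # group -> [reoffended, total]
    for group, reoffended in records:
        counts[group][0] += reoffended
        counts[group][1] += 1
    return {g: c[0] / c[1] for g, c in counts.items()}

# Historical records in which group B was policed more heavily, so more
# of its reoffences were *recorded* (the underlying behaviour is equal).
history = [("A", 1)] * 2 + [("A", 0)] * 8 + [("B", 1)] * 5 + [("B", 0)] * 5

model = train(history)
print(model)  # group B now "scores" 0.5 against group A's 0.2
```

Unless the model and its training data can be inspected, as the French strategy proposes, a defendant has no way to show that such a score reflects recording bias rather than behaviour.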
The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.
The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).
The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude towards autonomous systems (whether robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality reflects what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.
Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, as to the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making the programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue searching for the right balance.
In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when animals (specifically goats) were exiled to Siberia for their actions.
First published at our partner RIAC
- 1. See, for example, Edmonds, D. Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, 2013.
- 2. Asaro, P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby” // Wheeler, M., Husbands, P., Holland, O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
- 3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
- 4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
- 5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
- 6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
- 7. See, for example, Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. A description is available here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
- 8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
- 9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
- 10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
- 11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
- 12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.
Digital Child’s Play: protecting children from the impacts of AI
Artificial intelligence has been used in products targeting children for several years, but legislation protecting them from the potential impacts of the technology is still in its infancy. Ahead of a global forum on AI for children, UN News spoke to two UN Children’s Fund (UNICEF) experts about the need for improved policy protection.
Children are already interacting with AI technologies in many different ways: they are embedded in toys, virtual assistants, video games, and adaptive learning software. Their impact on children’s lives is profound, yet UNICEF found that, when it comes to AI policies and practices, children’s rights are an afterthought, at best.
In response, the UN children’s agency has developed draft Policy Guidance on AI for Children to promote children’s rights, and raise awareness of how AI systems can uphold or undermine these rights.
Conor Lennon from UN News asked Jasmina Byrne, Policy Chief at the UNICEF Global Insights team, and Steven Vosloo, a UNICEF data, research and policy specialist, about the importance of putting children at the centre of AI-related policies.
AI technology will fundamentally change society
Steven Vosloo At UNICEF we saw that AI was a very hot topic, and something that would fundamentally change society and the economy, particularly for the coming generations. But when we looked at national AI strategies, and corporate policies and guidelines, we realized that not enough attention was being paid to children, and to how AI impacts them.
So, we began an extensive consultation process, speaking to experts around the world, and almost 250 children, in five countries. That process led to our draft guidance document and, after we released it, we invited governments, organizations and companies to pilot it. We’re developing case studies around the guidance, so that we can share the lessons learned.
Jasmina Byrne AI has been in development for many decades. It is neither harmful nor benevolent on its own. It’s the application of these technologies that makes them either beneficial or harmful.
There are many positive applications of AI that can be used in education for personalized learning. It can be used in healthcare, language simulation and processing, and it is being used to support children with disabilities.
And we use it at UNICEF. For example, it helps us to predict the spread of disease, and improve poverty estimations. But there are also many risks that are associated with the use of AI technologies.
Children interact with digital technologies all the time, but they’re not aware, and many adults are not aware, that many of the toys or platforms they use are powered by artificial intelligence. That’s why we felt that special consideration has to be given to children, because of their particular vulnerabilities.
Privacy and the profit motive
Steven Vosloo A smart toy could be using natural language processing to understand words and instructions, and so it’s collecting a lot of data from the child, including intimate conversations, and that data is being stored in the cloud, often on commercial servers. So, there are privacy concerns.
We also know of instances where these types of toys were hacked, and they were banned in Germany because they were not considered safe enough.
Around a third of all online users are children. We often find that younger children are using social media platforms or video sharing platforms that weren’t designed with them in mind.
They are often designed for maximum engagement, and are built on a certain level of profiling based on data sets that may not represent children.
Predictive analytics and profiling are particularly relevant when dealing with children: AI may profile children in a way that puts them in a certain bucket, and this may determine what kind of educational opportunities they have in the future, or what benefits parents can access for children. So, the AI is not just impacting them today, but it could set their whole life course on a different direction.
Jasmina Byrne Last year this was big news in the UK. The Government used an algorithm to predict the final grades of high schoolers. And because the data that was fed into the algorithm was skewed towards children from private schools, the results were really appalling, and they discriminated against a lot of children from minority communities. So, they had to abandon that system.
That’s just one example of how, if algorithms are based on data that is biased, they can actually have really negative consequences for children.
‘It’s a digital life now’
Steven Vosloo We really hope that our recommendations will filter down to the people who are actually writing the code. The policy guidance is aimed at a broad audience, from the governments and policymakers who are increasingly setting strategies and beginning to think about regulating AI, to the private sector, which often develops these AI systems.
We do see competing interests: decisions around AI systems often have to balance a profit incentive against an ethical one. What we advocate for is a commitment to responsible AI that comes from the top: not just at the level of the data scientist or software developer, but from top management and senior government ministers.
Jasmina Byrne The data footprint that children leave by using digital technology is commercialized and used by third parties for their own profit and gain. Children are often targeted by ads that are not really appropriate for them. This is something that we’ve been closely following and monitoring.
However, I would say that there is now more political appetite to address these issues, and we are working to get them on the agenda of policymakers.
Governments need to put children at the centre of all their policy-making around frontier digital technologies. If we don’t think about them and their needs, then we are really missing great opportunities.
Steven Vosloo The Scottish Government released its AI strategy in March and officially adopted the UNICEF policy guidance on AI for children. Part of that was because the government as a whole has adopted the Convention on the Rights of the Child into law. Children’s lives are not really online or offline anymore; it’s a digital life now.
How digital technology and innovation can help protect the planet
As a thick haze descended over New Delhi last month, air quality monitors across the Indian capital began to paint a grim picture.
The smoke, fed by the seasonal burning of crops in northern India, was causing levels of the toxic particle PM 2.5 to spike, a trend residents could track in real time on the Global Environment Monitoring System for Air (GEMS Air) website.
By early November, GEMS Air showed that concentrations of PM 2.5 outside New Delhi’s iconic India Gate were ‘hazardous’ to human health. In an industrial area north of the Indian capital, the air was 50 times more polluted.
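The "hazardous" label comes from mapping the measured PM 2.5 concentration onto health-risk bands. A sketch using the US EPA's 24-hour PM 2.5 breakpoints (GEMS Air's own thresholds may differ; the EPA bands are an assumption here):

```python
# Map a PM2.5 concentration (micrograms per cubic metre, 24-hour mean)
# to a health category. Breakpoints follow the US EPA AQI bands;
# GEMS Air's own thresholds may differ.
PM25_BANDS = [
    (12.0, "Good"),
    (35.4, "Moderate"),
    (55.4, "Unhealthy for Sensitive Groups"),
    (150.4, "Unhealthy"),
    (250.4, "Very Unhealthy"),
    (float("inf"), "Hazardous"),
]

def pm25_category(concentration):
    """Return the first band whose upper bound covers the reading."""
    for upper, label in PM25_BANDS:
        if concentration <= upper:
            return label

print(pm25_category(300.0))  # -> Hazardous
```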
GEMS Air is one of several new digital tools used by the United Nations Environment Programme (UNEP) to track the state of the environment in real time at the global, national and local levels. In the years to come, a digital ecosystem of data platforms will be crucial to helping the world understand and combat a host of environmental hazards, from air pollution to methane emissions, say experts.
“Various private and public sector actors are harnessing data and digital technologies to accelerate global environmental action and fundamentally disrupt business as usual,” says David Jensen, the coordinator of UNEP’s digital transformation task force.
“These partnerships warrant the attention of the international community as they can contribute to systemic change at an unprecedented speed and scale.”
The world is facing what United Nations Secretary-General António Guterres has called a triple planetary crisis of climate change, pollution and biodiversity loss. Experts say averting those catastrophes and achieving the Sustainable Development Goals will require fundamentally transforming the global economy within a decade. It’s a task that would normally take generations. But a range of data and digital technologies are sweeping the planet with the potential to promote major structural transformations that will enhance environmental sustainability, climate action, nature protection and pollution prevention.
A new age
UNEP is contributing to that charge through a new programme on Digital Transformation and by co-championing the Coalition for Digital Environmental Sustainability as part of the Secretary-General’s Digital Cooperation Roadmap.
UNEP studies show that for 68 per cent of the environment-related Sustainable Development Goal indicators, there is not enough data to assess progress. The digital initiatives leverage technology to halt the decline of the planet and accelerate sustainable finance, products, services, and lifestyles.
GEMS Air was among the first of those programmes. Run by UNEP and Swiss technology company IQAir, it is the largest air pollution network in the world, covering some 5,000 cities. In 2020, over 50 million users accessed the platform, and its data is being streamed into digital billboards to alert people to air quality risks in real time. In the future, the programme aims to extend this capability directly into mobile phone health applications.
Building on lessons learned from GEMS Air, UNEP has developed three other lighthouse digital platforms to showcase the power of data and digital technologies, including cloud computing, earth observation and artificial intelligence.
One is the Freshwater Ecosystem Explorer, which provides a detailed look at the state of lakes and rivers in every country on Earth.
The fruit of a partnership between UNEP, the European Commission’s Joint Research Centre and Google Earth Engine, it provides free and open data on permanent and seasonal surface waters, reservoirs, wetlands and mangroves.
“It is presented in a policy-friendly way so that citizens and governments can easily assess what is actually happening to the world’s freshwater resources,” says Stuart Crane, a UNEP freshwater expert. “That helps countries track their progress towards the achievement of Sustainable Development Goal Target 6.6.”
Data can be visualized using geospatial maps with accompanying informational graphics and downloaded at national, sub-national and river basin scales. Data are updated annually and depict long-term trends as well as annual and monthly records on freshwater coverage.
Combating climate change
UNEP is also using data-driven decision making to drive deep reductions in methane emissions through the International Methane Emissions Observatory (IMEO). Methane is a potent greenhouse gas, responsible for at least a quarter of today’s global warming.
The observatory is designed to shine a light on the origins of methane emissions by collecting data from various sources, including satellites, ground-based sensors, corporate reporting and scientific studies.
The Global Methane Assessment published by UNEP and the Climate and Clean Air Coalition (CCAC) found that cutting human-caused methane by 45 per cent this decade would avoid nearly 0.3°C of global warming by the 2040s, and help prevent 255,000 premature deaths, 775,000 asthma-related hospital visits, and 26 million tonnes of crop losses globally.
“The International Methane Emissions Observatory supports partners and institutions working on methane emissions reduction to scale-up action to the levels needed to avoid the worst impacts of climate change,” says Manfredi Caltagirone, a UNEP methane emissions expert.
Through the Oil and Gas Methane Partnership 2.0, the methane observatory works with petroleum companies to improve the accuracy and transparency of methane emissions reporting. Current member companies report assets covering over 30 per cent of oil and gas production globally. It also works with the scientific community to fund studies that provide robust, publicly available data.
UNEP is also backing the United Nations Biodiversity Lab 2.0, a free, open-source platform that features data and more than 400 maps highlighting the extent of nature, the effects of climate change, and the scale of human development. Such spatial data help decision-makers put nature at the heart of sustainable development by allowing them to visualize the natural systems that hold back natural disasters, store planet-warming gases such as carbon dioxide, and provide food and water to billions.
More than 61 countries have accessed data on the UN Biodiversity Lab as part of their national reporting to the Convention on Biological Diversity, an international accord designed to safeguard wildlife and nature. Version 2.0 of the lab was launched in October 2021 as a partnership between UNDP, UNEP’s World Conservation Monitoring Centre, the Convention on Biodiversity Secretariat and Impact Observatory.
All of UNEP’s digital platforms are being federated into UNEP’s World Environment Situation Room, a digital ecosystem of data and analytics allowing users to monitor progress against key environmental Sustainable Development Goals and multi-lateral agreements at the global, regional and national levels.
“The technical ability to measure global environmental change—almost in real time—is essential for effective decision making,” says Jensen.
“It will have game-changing implications if this data can be streamed into the algorithms and platforms of the digital economy, where it can prompt users to make the personal changes so necessary to preserving the natural world and achieving net zero.”
Housing needs, the Internet and cyberspace at the forefront in the UK and Italy
Modern construction methods and smart technology can revolutionise the building process and the way we live.
Population growth and demographic changes have led to a global housing shortage. According to research carried out by Heriot-Watt University for the National Housing Federation and the homelessness charity Crisis, the UK will face a shortage of four million housing units by the end of 2031. This means that approximately 340,000 new housing units will need to be built each year. These houses will also need to meet the demands of home automation and increasingly strict environmental constraints.
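The annual figure is simple arithmetic: spreading a four-million-unit shortfall over the years remaining until 2031. The start year of the build window is an assumption; the article's 340,000 suggests a slightly shorter one:

```python
# Back-of-the-envelope check on the article's build-rate figure.
shortfall = 4_000_000      # projected housing-unit shortfall by 2031
years = 2031 - 2019        # assumed build window (start year is a guess)
annual_rate = shortfall / years
print(round(annual_rate))  # -> 333333, in line with "approximately 340,000"
```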
Traditional building technology is unlikely to meet this demand: it is relatively expensive and too slow, weighed down by the necessary procedures and by compliance with rules and regulations. The quality and capabilities of traditional construction methods are also limited. The only solution is modular production based on the principles of factory automation, using wireless, battery-free controls and sensors that integrate seamlessly with home automation.
Modular buildings are based on a combination of construction methods known as Modern Methods of Construction (MMC). These include the use of panelling systems and components, such as roof and floor boxes, precast concrete foundation components, prefabricated wiring, mechanical engineering composites and other innovative technologies.
With the opening of several factories, the UK has started to use MMC to build prefabricated, fully equipped houses in modular form, which can be loaded onto trucks for transport across the country. On-site assembly enables a house to be completed in days rather than months, reducing costs significantly. Modular buildings have also become popular elsewhere in Europe. In Italy, a pioneering company is the RI Group of Trepuzzi (Lecce), which also operates in logistics and services and builds health care facilities, field hospitals and public offices that are cost-effective and quick to construct.
The impact of modular construction is expected to be significant: factories producing up to five thousand houses per year could become the leading builders in the sector.
The construction standards of these new-technology houses are higher than those of traditional houses. Thanks to better insulation, the electricity bill could be only half that of a traditional house.
Modular houses come with kitchens and bathrooms, and are equipped with power and lighting via modular power cabling and wireless controls, in addition to the increasingly important network and telecommunications infrastructure.
Structural and modular wiring are derived from commercial and industrial electrical installations to keep on-site electrical work efficient and minimal. As technology changes, this standard installation remains adaptable and offers a high degree of flexibility.
Experience in industrial and commercial construction shows that traditional fixtures are labour-intensive, rather rigid and still expensive. In contrast, prefabricated modular cabling and the insulation displacement connection (IDC) system, combined with wireless controllers and sensors, can be fully installed on site at low cost. These are proven technologies that are now moving from commercial to domestic use.
With CAD support for modular cabling, all power cables are laid in the ceiling or wall space. Wireless energy-harvesting equipment simplifies installation, as no switch or duct installation is required. The first-fix electrical work through the wall takes less time because there is no need to coordinate the position of switches with the wall fixings. Dependencies between on-site installation activities are also reduced: sensors, switches and wireless energy-harvesting controls can be installed anywhere in the building, even in hard-to-reach areas.
After installation, the principle of energy harvesting comes into play: switches and sensors are powered by their surrounding environment, so there are no batteries to replace and little maintenance to perform. This flexibility and reliability also allow the system to be expanded at any time.
Modular construction technology can adapt to various types of houses and meet the needs of modern life through flexible shapes and varied exterior finishes. This is a far cry from the old prefabricated houses “granted” in Italy to earthquake victims who have waited years for a decent, civilised home.
By providing a range of traditional and modern exterior decorative panels, the roofline can also be customised to suit local customs and architecture.
Through the combination of innovative product technology and good design, the aim of the smart home is to provide security and comfort. The usual requirement is to place light switches and dimmers (or potentiometers) in the most convenient place; powered by the kinetic energy harvested by the switch itself, they can be placed anywhere.
They require no wiring, sending wireless signals to receivers inside or near the lights or on DIN-rail mounts (DIN: the German Institute for Standardisation). Nor do they need batteries, which saves all the inconvenience and environmental risk involved in replacing them.
As this type of equipment reaches a wide range of applications, lighting and home entertainment are likely to adopt battery-free products. Besides controlling brightness and colour, self-powered switches can also be used to control sound systems or blinds. A key smart-home application is a master switch that turns devices off or on as residents leave or return home.
Energy harvesting technology also supports other sensor-based applications. For example, self-powered sensors can be connected wirelessly to an intruder alarm, and light-powered sensors installed on windows allow lighting and heating to be turned off when no one is at home.
Another energy source is the temperature difference between a heating radiator and the surrounding environment. This form of energy harvesting enables a self-powered heating valve to regulate heating via a room temperature controller according to set conditions.
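The control principle behind such a self-powered valve can be sketched in a few lines. This is an illustrative example only, not any vendor's actual API: the function names, the 0.5 °C dead band and the set-point are assumptions chosen to show how a room temperature controller might drive the valve with simple hysteresis.

```python
def valve_command(current_temp: float, set_point: float,
                  valve_open: bool, hysteresis: float = 0.5) -> bool:
    """Return True if the radiator valve should be open (heating on)."""
    if current_temp < set_point - hysteresis:
        return True          # room too cold: open the valve
    if current_temp > set_point + hysteresis:
        return False         # room warm enough: close the valve
    return valve_open        # inside the dead band: keep the current state

# Example: with a 21 °C set-point, a reading of 19.8 °C opens the valve.
assert valve_command(19.8, 21.0, valve_open=False) is True
```

The dead band prevents the valve from chattering open and closed around the set-point, which matters for a device running on harvested energy.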
From factories to offices, from multifunctional buildings to smart homes, wireless energy harvesting technology has been tested in approximately one million buildings worldwide. Most sensors, switches and other self-powered energy-harvesting devices can communicate over distances of up to 30 metres inside a building and comply with the EnOcean international wireless standard, which transmits short, encrypted telegrams in sub-1 GHz frequency bands.
There are also some self-powered devices that integrate EnOcean energy harvesting technology and can communicate directly with the lights via the well-known Bluetooth or Zigbee standards (Zigbee is a wireless communication standard based on the IEEE 802.15.4 specification, maintained by the Zigbee Alliance). This makes it possible to use green, battery-free switches and solar sensors to flexibly control other applications, such as LED lights or speakers.
Now that wireless energy-harvesting sensors can gather data in the home, aggregating that information and performing useful analysis on it is the next big step. The data travel through the Internet of Things (IoT), the line of technological development in which, through the Internet, potentially every everyday object acquires its own identity in cyberspace. As mentioned above, the IoT is based on the idea of “smart” items which are interconnected to exchange the information they possess, collect and/or process.
Artificial Intelligence (AI) is also used to keep track of living patterns and activities in modular homes. Energy analysis is one application where AI can already help homeowners further reduce their energy consumption.
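The kind of energy analysis described above can be sketched simply: aggregate the power readings streamed by the home's sensors by hour, then flag hours whose consumption is statistically unusual. Everything here is a hypothetical illustration; the function names, data shape and the two-standard-deviation threshold are assumptions, not part of any real product.

```python
from collections import defaultdict
from statistics import mean, stdev

def hourly_totals(readings):
    """readings: iterable of (hour, watts) sensor samples -> total per hour."""
    totals = defaultdict(float)
    for hour, watts in readings:
        totals[hour] += watts
    return dict(totals)

def flag_unusual(totals, z_threshold=2.0):
    """Return hours whose total lies more than z_threshold standard
    deviations above the mean hourly consumption."""
    values = list(totals.values())
    mu, sigma = mean(values), stdev(values)
    return [h for h, v in totals.items()
            if sigma > 0 and (v - mu) / sigma > z_threshold]
```

A real system would use a richer model of living patterns, but even this simple outlier check shows how aggregated sensor data can point a homeowner at the hours that drive the bill.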
Looking to the future, the combination of the IoT and AI will bring many benefits. Geographical data, weather and climate information, as well as activity, water and energy consumption data and other factors will be very useful for planners, building organisations, builders and landlords.
Perceptive architecture represents the next generation of sustainable building systems. Smart buildings will soon be able to integrate IoT devices on their own, generate large amounts of information and use it to optimise the building. This adds a whole new dimension to services and to the business and home economics model.
This is particularly relevant for an ageing population, as these smart technologies can radically change the lifestyles of elderly people and their families, bringing transformative benefits in terms of health and well-being.
The key elements of such a home include smart, non-invasive, safe and secure connections with friends, family members, general practitioners, nurses and other health care professionals involved in residents’ care. Battery-free sensors connected to the IoT will help prevent accidents at home, such as those caused by kitchen utensils or overflowing toilets, and support residents’ interactions with healthcare professionals.