I must preface that I am not a certified or self-trained expert in computer networking, the Internet, or Information-Technology (IT). The following views are mine and have been arrived at by listening to/reading up on the issue of net neutrality from partisan and non-partisan sources. Well-informed and fact-based views from experts on the subject are most welcome and highly sought.
The Trump administration placed net neutrality on the chopping block, and Ajit Pai did the honors by repealing it. The issue created a large furor in the world of the Internet and social media, with divergent explanations floated by both sides.
Conservatives and right-wingers supported the repeal, stating that the government shouldn’t impose itself on service providers and get to have a say in their operations. Folks on the left claimed that the Internet is no longer free and that the loss of net neutrality will usher in tiered tariffs and throttling/blocking of web content at the whim of the Internet service providers (ISPs).
It’s increasingly difficult to take a purely scientific approach towards technical issues in a culture where the pettiest things are used to smear the opposition and play partisan political games. With much effort, I have attempted to put aside politics and look merely into the nerdy details of this abstruse concept of net neutrality.
The premise of net neutrality hinges on the claim that the Internet/Web (* a nuanced, yet significant, distinction between the two will be discussed briefly later) is a public utility and hence should be made available and accessible to everyone equally, just like electricity, cooking gas, and water. Corporations are profit-driven and heartless; as a result, the government should get involved in the markets and make sure that everyone gets these utilities and nobody is left in the lurch.
So, is the Internet a public utility?
The science of economics describes two characteristics a service must have to qualify as a public good, the concept that underlies a public utility: non-excludability (people cannot be denied the product regardless of whether they have paid) and non-rivalry (consumption by one doesn’t reduce availability for others).
The Internet certainly doesn’t meet the non-excludability criterion, in that people who don’t pay for the service don’t get to use it. Major cities across the US have set up public Wi-Fi in a bid to provide Internet to all, but such “access-for-all” isn’t standard across the vast majority of the nation.
Thankfully, the Internet doesn’t fail the non-rivalry criterion. A huge influx of new users might transiently overwhelm existing capacity, but additional hardware can be added to accommodate growing demand. Thus, for all practical purposes, the Internet meets the non-rivalry criterion.
In summary, the Internet isn’t a public utility, at least not now.
But I would like to offer some additional observations to make my case well-rounded and cogent.
The Internet was conceived in the 1960s as an effort on the part of the US federal government to transfer data over resilient communication networks run by computers. What started as a nascent and clunky project involving huge machines and laughable transfer speeds evolved into a means of global networking, telephony, and information transfer at incredibly fast speeds. This evolution was largely spearheaded by researchers at several government agencies in different parts of the world. In the 1990s, the Internet was opened up to private players for commercial use. Thus, the Internet was built and developed using taxpayer money. Also of note is that the Internet is a decentralized space that no one has hegemony over.
Now, over to the Web. Though thrown around carelessly and interchangeably with “the Internet,” the Web is actually distinct from it. The Web is an application developed by Sir Tim Berners-Lee, during his time at CERN – a multi-government-funded organization – to access documents, pictures, videos, and other files on the Internet that are marked up in a standardized way. It’s one of several ways to access content on the Internet and communicate with one another. By corollary, the Web, too, was crafted by an individual using the public’s (taxpayer) money. It’s this little, yet extremely important, corner of the Internet that this brouhaha is all about.
ISPs function as middlemen connecting end users to the Internet, mainly through the World Wide Web (the Web, or WWW). They created neither the Internet nor the Web, and they do not maintain them.
Effectively, private corporations are helping us access a digital space that was created using the public’s money. Moreover, the creators of this space – whether governmental agencies or individuals – in all their largesse decided to open it up for commercial use and allow people to freely (not to be conflated with ‘for free’) use it.
Over the years, the Web has grown from an information archive and emailing medium into a source of employment, a means of starting and running a business, a tool to reach people across the world, a place to broadcast yourself and your work, and much more. While the Web doesn’t qualify as a public utility, it does serve as one of the few ways by which people in first-world countries can augment the socioeconomic momentum of the Industrial Revolution using digital technology, and by which people in third-world countries can change their destinies by creating an app, engaging in commerce across borders, or educating themselves for free.
Repealing net neutrality gives ISPs a kind of hegemony, not over the Web or the Internet, but over what we consume from this public-utility-hopeful. While larger corporations can find a way around it by paying the large sums ISPs might demand for a certain degree of visibility on their services, it is nearly impossible for an entrepreneur, a blogger, or an independent journalist to pay the same sum for the same degree of visibility.
“Take your business over to Facebook or some other social media outlet and you won’t be discriminated against,” one might argue. Not quite true! Social media feeds are tailored and tend to show you what you have already seen. It will be difficult to market your business on platforms that are slowly devolving into echo chambers. Nor can one be certain that social media giants are unbiased in how they deliver content, as with Facebook, which was accused of manipulating its ‘trending’ feature to suit its political leanings.
The gravity of the problem is further compounded when one factors in the regional monopolies that ISPs enjoy in the US. Competition is scarce because of the cost-intensive nature of running cables under the streets and setting up hardware. Overbuilders (ISPs using existing hardware and cables to provide an alternative) can increase competition, but financial feasibility and ROI of such ventures are pretty dim. In this regard, the Web certainly functions like a public utility and requires some sort of accountability on part of the ISP.
There is also a technical angle to the importance of net neutrality, which is lucidly explained here.
Repeal of net neutrality should leave everyone disconcerted, especially small business owners, entrepreneurs, innovators, and the most vulnerable: alternative news media outlets, particularly those with unsavory views, many of which tend to be on the political right. Cheering along to your own demise because your guy did it is the gold standard of intellectual indolence and buffoonery.
I would like to reiterate that I am not a certified or self-trained expert in matters of the Internet, computing, or networking, and would welcome fact-based feedback on this subject.
Having said that, I can tell you two things with certainty: 1. Capitalize the first letter of Internet and Web and place the definite article the before these words when referencing them; and 2. We use the Internet to get onto the Web to do stuff.
Towards A Better World: Our Senses and How Artificial Intelligence is Replicating Them
Our five senses help us perceive the world around us. The sense of touch, for example, can bring loved ones closer but, on a darker note, can also frustrate amputees. What bothers them particularly about their prosthetic arms is the missing feedback: is what they are touching hot or cold, liquid or solid, a rose or its thorn? It is an aspect so universal for the able-bodied that it is not given a second thought.
That absence has not escaped Artificial Intelligence (AI) researchers, who are trying to replicate these senses (Engineering and Technology, August 2023). They have been busy developing artificial hands with softer fingers and embedded sensors. How long will it be before the problem is solved?
Well, a US company, Atom Limb, expects to release a mind-controlled prosthetic limb in 2024. In it, movement sensors in the hand section of the prosthesis send electronic signals to the wearer’s stump, where the neurons once connected to the amputated hand are still in place and capable of transmitting to the brain.
Notice how we know at once when there is something crawling on our skin. In May 2022, researchers at Stanford University’s Bao Research Group announced the invention of artificial skin that is durable, paper thin and stretchable. This has the future potential of being wired into the wearer’s nervous system to give a real touch capability — namely, sensing temperature, pressure, vibration and location. Thus when the finger moves from the handle to the cup itself, you sense the change in temperature and distance.
Another sense, hearing, or rather the lack thereof, is not infrequently a source of humor, possibly because sufferers are able to compensate through other means. Beethoven suffered from Paget’s disease, which caused skull-bone enlargement that pressed on the eighth cranial nerve, the nerve associated with auditory function. The loss was gradual, from age 28 to 44, by which time he was quite deaf. While he could still hear a little, he would strap an ear trumpet to his head so he could conduct the orchestra with his hands. He also carried a notebook and pencil, both to jot down musical brainstorms and to converse with friends.
Hence the somewhat morbid joke of someone seeing Beethoven sitting on his grave furiously erasing some sheet of music. “Maestro! Maestro! What are you doing,” the person asks, to which he gets the reply, “I am decomposing.”
Hearing loss, when it is congenital, is no joke, however. It can inhibit language learning and speech. Thus the words ‘deaf and dumb’ are often placed together, with ‘dumb’ of late being replaced by the kinder ‘mute’.
Here again technology comes to the rescue. Cochlear implants have been around for quite a while. The first implant procedure, in 1957, is credited to Stanford University. A single-channel electrode was used but was found to be of limited utility for detecting speech. It took a further 20 years to arrive at the modern multi-channel type.
Hearing aids now are small enough to be barely visible. They work for most people and only those with profound hearing loss consider the implant option.
Our sense of sight helps us navigate the world around us and enjoy its beauty. For some it may be taken away gradually through age-related macular degeneration (AMD), a form of retinal deterioration that affects the sight of some 200 million people in the world. As the photoreceptors in the central retina degenerate, the ability to read or even recognize people is impaired.
The good news is that a prosthetic replacement is now being developed to replace the lost photoreceptors with photovoltaic pixels. These convert light into electricity which stimulates the neurons in the retina. While the present version leaves the recipient somewhat shortsighted, a newer one currently being tested in rats will restore 20/20 vision.
For the future, there is Science Eye, a device employing optogenetics. It uses gene therapy to restore optic nerve cells while an ultra-dense micro-LED display panel is inserted directly over the retina.
There are others in the field, including Cortigent, which is making headway with a system that does not require genetically modifying retinal cells because it is a direct cortical (brain-layer) stimulator. Cortigent is in the process of designing a study to get its stimulator implant approved, having already spent five years studying the safety and reliability of its devices.
Then there are our senses of smell and taste, to some extent linked. There is a good reason food seems bland and tasteless when a person has a bad cold — the sense of smell is absent. Thus when chefs talk about flavor, they imply both taste and smell.
Taste receptors in the mouth sense sweet, sour, salt, bitter and savory — the latter also known as umami. But try sucking a lemon flavored candy while pinching your nose. You will taste the sweetness, but not the lemon flavor. The tongue is, of course, also sensitive to cold and heat.
A promising approach to treatment for loss of smell is to train the olfactory nerve through inhaling a set of odors (originally rose, lemon, clove and eucalyptus) twice daily for three months. It was found to help the nerve to regenerate.
Taste has been with humans forever. Long before scientists and their experiments, humans knew to avoid plants that tasted bitter — it signified something harmful. Yet there are people unfortunate enough to be without this sense.
Having all the senses is so commonplace that we rarely ponder their absence. So let the next gustatory and olfactory experience, or the music we hear, or the walk we take in a park where we can also smell the flowers, be all the more meaningful for valuing our senses. Harnessing them, and adding that subconscious sense of perception, enhances our understanding of the world as it is; we need only imagination to observe the world as it could be … and to be ready to take the first step on the journey to a better one, a world at peace.
Development of Metaverse in China: Strategies, Potential, and Challenges Part I
In this era of rapid digitalization, the metaverse has become a hot topic among developed countries. While many nations focus on the entertainment aspect of the metaverse, China appears to have a different perspective. By adopting a more industry-oriented approach, can the Bamboo Curtain country lead the next metaverse revolution?
Why Does China Opt for an Industrial Approach?
There are several reasons why an industrial approach might be China’s key to success in the metaverse:
The metaverse, with its virtual-simulation technology, has immense potential to revolutionize industries such as manufacturing, urban planning, and healthcare. In manufacturing, it can facilitate product-design optimization, in-depth employee training, and production-process optimization through virtual prototypes and real-time simulations. It enables instant collaboration between designers and engineers, reducing the time and costs associated with physical prototypes.
Regarding urban planning, the metaverse can be applied for the visualization of city layouts and infrastructure in a 3D virtual environment, allowing urban planners to make better-informed decisions about urban development. Moreover, it enables public participation in urban development projects, offering citizens the chance to explore and provide input on proposed designs, and promoting sustainable development through the environmental impact analysis of urban designs.
In the healthcare sector, the metaverse can be used for medical training, patient rehabilitation, and remote consultations. Medical students can practice medical procedures in a risk-free virtual environment, while patients can undergo intensive virtual medical consultations and rehabilitation therapy. This technology can enhance the skills and confidence of prospective doctors and expedite patient recovery processes.
Overall, the metaverse offers innovative and interactive solutions that can address the specific needs of various industrial sectors, allowing enhanced learning, better design, optimized processes, and innovative solutions, which will ultimately contribute to the progress of these industries.
Metaverse technology, with its capabilities in virtual and augmented reality, opens new doors in industrial efficiency and innovation. It enables industries to develop prototypes in virtual environments, accelerating product development cycles and reducing costs associated with physical models. For instance, in the automotive industry, designers and engineers can collaborate in a 3D environment to test and modify car designs in real-time, allowing a faster response to market needs.
Additionally, the metaverse plays a crucial role in employee training and development. In the manufacturing sector, virtual simulations can be utilized for operational machine and production process training, reducing risks and enhancing employee skills while saving training time and costs. This also contributes to increased safety and reduction of incidents in the workplace.
The metaverse also supports industrial process optimization through real-time simulations and data analysis. Companies can visualize and optimize workflows, plant layouts, and production schedules to enhance productivity and reduce operational costs. This enables quicker and more accurate identification and resolution of inefficiencies and obstacles.
In conclusion, the implementation of the metaverse in industries promises a revolution in product design, employee training, and daily operations. The ability to integrate and optimize various operational aspects in a virtual environment offers opportunities for sustainable innovation and heightened competitiveness in the modern industrial era.
History shows that the Chinese government often provides full support to strategic sectors. With financial support and progressive regulations, the metaverse industry in China has the potential to grow rapidly. China is actively exploring the metaverse, with Beijing planning to create a ‘Digital Identity System’ for the metaverse and Web 3.0, following in Shanghai’s footsteps. Though known for being cautious in adopting advanced technologies, China sees significant potential in the metaverse and is striving to establish a regulatory framework to develop virtual reality and the metaverse as part of the digital economy, as detailed in the Virtual Reality Development Action Plan released in November 2022.

The proposed digital identity system aims to control user anonymity and identify individual characteristics in the metaverse, allowing regulated and controlled use of the technology. Discussions of the regulatory proposal are underway at the International Telecommunication Union (ITU), with the involvement of technology experts and Chinese telecom operators such as China Mobile. This demonstrates China’s commitment to developing the metaverse in a safe and orderly manner, in line with the high interest of its citizens: 78% of Chinese citizens have expressed interest in the metaverse.
Differentiation from Competitors:
In the midst of tight global competition in the metaverse, China’s focus on industrial applications could be a key differentiator. While many other countries focus more on entertainment, China can lead in industrial applications of the metaverse. This focus is part of its strategy to become a global leader in technology and innovation, aligning with the country’s ambition to build a competitive advantage in high technology and strengthen its trading position on the international stage.

With a broad and diverse industrial sector, from manufacturing to healthcare, integrating the metaverse allows for significant innovation and economic growth across various fields, enabling the development of specific, value-added solutions for industrial needs. By focusing on industrial applications, China can also address challenges and risks associated with the metaverse, such as privacy and data-security issues. The development of regulatory frameworks and technical standards for the industrial metaverse will help ensure that the technology is developed and used safely, responsibly, and in line with national priorities and sustainable development goals.
Domestic Technology Development
China has been making substantial investments in cutting-edge technologies such as AI, 5G, and semiconductors, integrating them with the metaverse to build a strong and competitive ecosystem. The adoption of 5G is key to implementing the metaverse; a book on the metaverse by Chinese experts that I once read states that mass adoption of the metaverse will occur when 60% of the population has adopted 5G. China therefore appears to be focusing on developing 5G connectivity, expecting the industrial sector to be the technology’s first adopter thanks to its higher purchasing power and its more specific, limited scope. In the B2B context, this is considered a realistic step before introducing the technology to end consumers. According to the GSMA’s The Mobile Economy China 2023 report, 5G penetration in China stood at 45% of the population in 2023 and is expected to reach 70% by 2027. From this data, one can hypothesize that metaverse utilization in China will make significant progress around 2027. In the coming years, then, we can watch how the integration of AI, 5G, semiconductors, and the metaverse forms a more synergistic and competitive technology ecosystem, with the industrial sector playing a key role in early adoption before the metaverse is truly accepted by general consumers in China.
With an approach focused on the industry, China has the opportunity to lead the metaverse revolution in the future. However, as with any innovation, there will be challenges to face. With government support, investment in R&D, and a clear vision, China is on the right track to leverage the full potential of the metaverse for industrial interests and national economic growth.
Artificial Intelligence and Advances in Chemistry (II)
As previously seen, chemical representation types have developed several sub-types over recent years. Unfortunately, however, there is no clear answer as to which representation is the most efficient for a particular problem. For example, matrix representations are often the first choice for attribute prediction but, in recent years, graphs have also emerged as strong alternatives. It is also important to note that we can combine several types of representations depending on the problem.
So how (and which) representations can be used to explore chemical space? We have already said that string representations are suitable for generative modelling. Initially, graph representations were not easy to handle with generative models, but more recently their combination with the Variational Autoencoder (VAE) has made them a very attractive option.
In machine learning, a variational autoencoder is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It belongs to the families of probabilistic graphical models and variational Bayesian methods (i.e. a family of methods for the approximation of intractable integrals).
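To make the idea concrete, here is a minimal numpy sketch of the objective a VAE minimises: a reconstruction term plus the closed-form KL divergence between a Gaussian encoder distribution and a standard-normal prior. The numbers below are purely illustrative, not a trained model.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO for a Gaussian encoder q(z|x) = N(mu, diag(exp(log_var)))
    against a standard-normal prior p(z) = N(0, I).

    - reconstruction term: mean squared error between input and reconstruction
    - KL term: closed form for KL(N(mu, sigma^2) || N(0, 1))
    """
    recon = np.mean((x - x_recon) ** 2)
    kl = 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0)
    return recon + kl

# Toy numbers: a perfect reconstruction with a latent code exactly at the prior
x = np.array([1.0, 2.0, 3.0])
mu = np.zeros(2)       # latent mean at the prior mean
log_var = np.zeros(2)  # latent variance 1 -> KL term vanishes
print(vae_loss(x, x, mu, log_var))  # 0.0: no reconstruction error, no KL
```

During training, the two terms pull in opposite directions: the reconstruction term rewards faithful decoding, while the KL term keeps the latent space smooth and close to the prior, which is what makes interpolation between molecules meaningful.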
VAEs have proved particularly useful since they provide a more machine-readable, continuous representation. A study used VAEs to show that both string and graph representations can be encoded into a space where molecules are no longer discrete but become continuous, real-valued vectors, and can be decoded back into molecule representations. The Euclidean distance between different vectors then corresponds to chemical similarity. An additional model can be placed between the encoder and the decoder to predict the property to be attained at any point in the space.
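The latent-space idea can be illustrated with a toy stand-in for a trained encoder: below, molecules (as SMILES strings) are embedded as simple character-count vectors, and Euclidean distance in that continuous space serves as a crude similarity proxy. A real VAE would learn the embedding instead; the alphabet and molecules here are chosen purely for illustration.

```python
import numpy as np

def embed(smiles, alphabet="CNOc1=()#"):
    """Toy continuous embedding: character counts over a small alphabet.
    A trained VAE encoder would produce a learned latent vector instead."""
    return np.array([smiles.count(ch) for ch in alphabet], dtype=float)

def distance(a, b):
    """Euclidean distance in the embedding space, used as a similarity proxy."""
    return float(np.linalg.norm(embed(a) - embed(b)))

ethanol  = "CCO"
propanol = "CCCO"
benzene  = "c1ccccc1"

# The two alcohols sit closer together than either does to benzene
print(distance(ethanol, propanol) < distance(ethanol, benzene))  # True
```

The payoff of a genuinely continuous space is that one can move smoothly between points and decode each intermediate vector back into a candidate molecule, which is exactly what a count vector cannot do and a VAE decoder can.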
But while generating molecules per se is a simple task – we can take any generative model and apply it to the representation we desire – generating structures that are chemically valid and display the properties we desire is a much more challenging issue.
The initial approaches to this goal involve training models on existing data sets and then applying transfer learning. The model is fine-tuned on a calibration data set so that it generates structures oriented towards specific properties, which can then be refined further using various algorithms. Many examples of this use string or graph representations. However, difficulties arise with chemical validity, or with desired properties that are not successfully obtained. Furthermore, relying on data sets limits the search space and introduces potentially undesirable biases.
An attempt at improvement is to use a Markov Decision Process (MDP) to ensure the validity of chemical structures and to optimise the MDP itself to achieve the desired properties through deep Q-learning (a model-free reinforcement-learning algorithm that estimates the value of an action in a particular state). In mathematics, an MDP is a discrete-time stochastic control process: it provides a framework for modelling decision-making in situations where outcomes are partly random and partly under the control of a decision-maker. MDPs are useful for studying optimisation problems solved by means of dynamic programming and reinforcement learning. They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The MDP is named after the Russian mathematician Andrej Andreevič Markov (1856-1922).
A particular advantage of this model is that it enables users to visualise the degree of preference for different actions (1 being the most preferred, 0 the least preferred) and to take steps that maximise the quantitative estimate of drug-likeness relative to the starting molecule.
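As a toy illustration of the MDP framing, the sketch below runs tabular Q-learning on a deliberately simple "molecule-building" process: the state is the length of a growing atom chain, the actions are "add an atom" or "stop", and a reward is paid only for stopping at a target length. Published systems use deep Q-networks over real molecular graphs and chemistry-aware rewards; every detail here is a simplifying assumption.

```python
import random

# Toy MDP: build an atom chain one atom at a time.
# State  = current chain length (0..MAX_LEN)
# Action = 0: add an atom, 1: stop (episode ends)
# Reward = 1.0 only if we stop exactly at TARGET length, else 0.0
MAX_LEN, TARGET = 5, 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(MAX_LEN + 1) for a in (0, 1)}

random.seed(0)
for _ in range(2000):
    s = 0
    while True:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: Q[(s, act)])
        if a == 1 or s == MAX_LEN:          # stop (or forced stop at max length)
            r = 1.0 if s == TARGET else 0.0
            Q[(s, a)] += ALPHA * (r - Q[(s, a)])
            break
        # add an atom: no immediate reward, bootstrap from the next state
        s2 = s + 1
        target = GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Greedy policy learned from Q: grow until the target length, then stop
policy = [max((0, 1), key=lambda act: Q[(s, act)]) for s in range(MAX_LEN + 1)]
print(policy[:4])  # [0, 0, 0, 1] -> add, add, add, stop
```

The same mechanics carry over to the real setting: validity is guaranteed by only offering chemically legal actions at each state, and the reward is swapped for a property score such as the drug-likeness estimate mentioned above.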
Although still in its infancy, the use of Artificial Intelligence to explore the chemical space is already showing great promise. It provides a new paradigm for exploring chemical space and a new way to test theories and hypotheses. Although not as accurate as experimental research, computationally-based methods will remain an active research area for the foreseeable future and will increasingly be part of any research group’s toolkit.
So far we have seen how Artificial Intelligence can help discover new chemicals more quickly by exploiting generative algorithms to search the chemical space. Although this is one of the most noteworthy use cases, there are also others. Artificial Intelligence is being applied to many other problems in chemistry, including:
1. Automated laboratory work. Machine learning techniques can be used to speed up synthesis workflows. One approach uses self-driving laboratories to automate routine tasks, optimise resource expenditure and save time. A relatively new but noteworthy example is the Ada robotic platform, which automates the synthesis, processing and characterisation of materials. Ada’s tools provide predictions and models that automate repetitive processes, using machine learning and AI technologies to collect, understand and process data, so that resources can be dedicated to higher value-added activities.
Ada is basically a laboratory that discovers and develops new organic thin-film materials without any human supervision. Its productivity is making most recent graduates uncomfortable. The entire thin-film fabrication cycle, from the mixing of chemical precursors, through deposition and thermal annealing, to the final electrical and optical characterisation, takes only twenty minutes. An additional aid is a mobile chemical robot that can operate tools and perform measurements, carrying out 688 experiments over eight days.
2. Chemical reaction prediction. Classification models can be used to predict the type of reaction that will occur or, simplifying the problem, to predict whether a given chemical reaction will occur at all.
3. Chemical data mining. Chemistry, like many other disciplines, has an extensive scientific literature for the study of trends and correlations. A notable example is the data mining of the vast amounts of information provided by the Human Genome Project to identify trends in genomic data.
4. Finally, although the new data-driven trend is developing rapidly and has had a great impact, it also entails many new challenges, including the gap between computation and experiment. Although computational methods aim to help achieve the experiment goals, the results of the former are not always transferable to the latter. For example, when using machine learning to find candidate molecules, we have to bear in mind that molecules are rarely unique in their synthetic pathways, and it is often difficult to know whether an unexplored chemical reaction will work in practice. Even if it works, there are problems with the yield, purity and isolation of the compound under study.
5. The gap between computational and experimental work widens further because computational methods use metrics that are not always experimentally transferable, such as QED (the quantitative estimate of drug-likeness), whose experimental verification may not be feasible. There is also the need for better databases, yet the problem of a lack of benchmarks arises. Since the chemical space is essentially infinite, one hopes for a sample large enough to support generalisation. Nevertheless, most of today’s databases are designed for different purposes and often use different file formats. Some have no validation procedures for submissions or are not designed for AI tasks. Most available databases also have a limited scope of chemical data: they contain only certain types of molecules. Furthermore, most tasks involving Artificial Intelligence for chemical prediction have no reference platform, making comparisons between different studies impracticable.
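As a toy illustration of the reaction-prediction task mentioned in point 2 above, the sketch below trains a from-scratch logistic-regression classifier to predict whether a reaction "occurs" from two invented descriptors. Real systems learn from curated reaction databases and far richer molecular representations; the data here is synthetic by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a reaction "occurs" (label 1) roughly when the two
# made-up descriptors (e.g. normalized temperature, reactivity index) sum past 1
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):                       # plain full-batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Training accuracy on the toy data; high because the data is separable
acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(round(acc, 2))
```

The interesting engineering work in practice lies almost entirely in the inputs: encoding reactants and conditions into features the classifier can use, which is where the molecular representations discussed earlier come back in.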
One of the main reasons for the success of AlphaFold – which, as already seen, is an AI programme developed by DeepMind (Alphabet/Google) to predict the 3D structure of proteins – lies in the fact that all of the above was provided through the Critical Assessment of protein Structure Prediction (CASP), a community benchmark for inferring a protein’s 3D structure from its amino acid sequence, e.g. predicting its secondary and tertiary structure from its primary structure. This evaluation demonstrates the need for organised efforts to streamline, simplify and improve other tasks involving chemical prediction.
In conclusion, as we continue to advance in the digital age, new algorithms and more powerful hardware will continue to lift the veil on previously intractable problems. The integration of Artificial Intelligence into chemical discovery is still in its infancy, but it is already a commonplace to hear the term “data-driven discovery”.
Many companies, whether pharmaceutical giants or newly founded start-ups, have adopted many of the above technologies and brought greater automation, efficiency and reproducibility to chemistry. Artificial Intelligence enables us to conduct science on an unprecedented scale and in recent years this has generated many initiatives and attracted funding that will continue to lead us further into an era of autonomous scientific discovery. (2. continued).