Artificial Intelligence: Low-Intensity Threats (continued and concluded)

Scientists have drawn up a list of 20 threats posed by artificial intelligence, ranked in order of the severity of the harm they can cause. And it is not only the researchers from the University of London, who compiled the list, who are worried. The tone is alarmist everywhere, and warnings are pouring in from all sides. Some industry experts go so far as to compare the risks posed by AI to the dangers of nuclear war. That is how seriously these threats are being taken.

In our previous editions, we published two articles covering the most serious and dangerous threats and the moderately serious ones. Before going further, it is worth briefly recalling those medium-severity threats, which are less serious than the ones covered in the edition that opened this series.

Artificial intelligence can give new capabilities to military robots and weapons. AI can also lend itself to scams. Another threat in this medium-severity category is data poisoning: deliberately modifying data or introducing false data, or planting hidden trackers, in order to gain control of the personal data of an individual or a company. We also mentioned learning-based cyberattacks, used to launch targeted, large-scale attacks, and autonomous attack drones, which are also on the threat list and can be hijacked, with AI assistance, for military use.

Also on this list of medium-severity threats, remember, is denial of access to online services. There is also the tricking of facial recognition, for example with fake ID photos, again with the aim of taking possession of someone's personal data. And finally, market manipulation, which can artificially lower or raise the price of a security, or even trigger a stock market crash.

We now come to the last category on the list of 20 threats identified by the London scientists: those classified as low-intensity threats.

At the top of this last category is bias exploitation. It consists of taking advantage of the biases built into algorithms, for example on YouTube to influence viewers, or on Google to refine the ranking of products for sale or to push competitors out of sight.

Burglar robots. We mentioned them at the very beginning. They sound frightening but are not that dangerous. The technique consists of slipping small autonomous robots through letterboxes or windows to retrieve keys and open doors. The damage is fairly limited, and such cases occur only on a small scale.

Evading AI detection. Here, artificial intelligence is used to thwart the sorting and collection of data in order to erase evidence or conceal criminal material, such as pornography.

Fake reviews written by AI. AI is used to produce fake reviews on sites such as Amazon, Leclerc or TripAdvisor, either to damage or to promote a product.

AI-assisted tracking. This involves using learning systems to track a person’s location and activity.

Finally, forgery. This means creating fake content, such as paintings or music, to sell under a false attribution of authorship. The potential for harm remains minimal, insofar as well-known paintings and pieces of music are few in number.

Even if this part of the list does not sound alarmist, some do not hesitate to speak of the extinction of humanity because of artificial intelligence. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This apocalyptic sentence, which puts the risks of AI development on the same level as pandemics and nuclear war, was co-signed by 350 AI industry leaders and scientists.

Adding to the credibility of this concern, the signatories include executives from Google, Microsoft and OpenAI, even though they are themselves involved in developing these AIs. This message, published by the Center for AI Safety, echoes the call for a moratorium on AI development signed by more than 1,000 digital experts, including Elon Musk.