Risks of Artificial Intelligence

Elon Musk, the man behind Tesla and SpaceX, has already warned us: artificial intelligence (AI) is likely to become the greatest threat to society. The physicist Stephen Hawking and other scientists have issued similar warnings. But the idea of a monster of our own making that wants to destroy humanity raises many questions.

When you think of AI, the first thing that pops into your head might be the Hollywood films in which evil robots play the leading role and want to take over the world. Well, science fiction is slowly becoming reality. Intelligent computer systems are getting better and better at mimicking human abilities such as listening, speaking, moving and seeing. They also learn to discover patterns and rules in large amounts of data. In some areas these systems are rapidly taking over, and that of course carries risks.


Risks of Artificial Intelligence:

1. Decrease in privacy

Technology is gaining eyes to watch us with. Cameras can be equipped with facial recognition software, and our gender, age, ethnicity and mood can be estimated by smart software. Analysis of faces, voices, behaviour and gestures leads to increasingly sharp profiles: with smart cameras, a profile of us can be created in no time. Smart systems can sometimes determine our mood better than our partner or family members can. That is not a thing of the future; it already exists, and much of it is readily available as open source.

A number of these options are already a reality in China. Some police officers wear glasses with facial recognition technology, linked to a database containing photos of thousands of 'suspects'. And don't forget: in China you can become a suspect quite quickly by making certain political statements in public. China also has a social credit system, a point system in which you are assessed on the basis of your behaviour. People with higher scores receive privileges or other rewards, and the country is full of cameras running image or facial recognition software. The risk of governments using artificial intelligence is that we are likely to see more totalitarian states that use this technology to control their people.
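The matching step behind such surveillance systems can be sketched in a few lines: each face is reduced to a numeric embedding vector, and a probe face captured by a camera is compared against a database of embeddings by distance. A minimal toy sketch in Python follows; the names, vectors and threshold are invented for illustration, as real systems compute embeddings with a deep neural network:

```python
import numpy as np

# Toy "suspect database": each face is represented by an embedding vector.
# Real systems derive these vectors from a deep neural network; the numbers
# below are invented purely for illustration.
database = {
    "suspect_A": np.array([0.1, 0.9, 0.3]),
    "suspect_B": np.array([0.8, 0.2, 0.5]),
}

def match_face(probe, database, threshold=0.3):
    """Return the closest database identity if within threshold, else None."""
    best_name, best_dist = None, float("inf")
    for name, emb in database.items():
        dist = np.linalg.norm(probe - emb)  # Euclidean distance between embeddings
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A camera frame yields a probe embedding very close to suspect_A's.
probe = np.array([0.12, 0.88, 0.31])
print(match_face(probe, database))  # matches suspect_A
```

The threshold is what turns a similarity score into a yes/no identification; set it too loosely and innocent passers-by start matching 'suspects', which is part of why this technology is so risky in the wrong hands.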

2. Impact on the job market

AI will definitely put pressure on the labour market in the coming years. Due to the rapid increase in the quality of artificial intelligence, smart systems will become much better at specific tasks. Recognising patterns in large amounts of data, providing specific insights and performing cognitive tasks will be taken over by smart AI systems. Professionals must keep a close eye on the development of artificial intelligence, because these systems can look, listen, speak, analyse, read and create content better and better. There are certainly people whose jobs are in the danger zone and who will have to adapt quickly, but the majority of the population will end up collaborating with artificially intelligent systems.

And don't forget: there will also be many new jobs, even though they are harder to imagine than the jobs that are disappearing. Social inequality will still increase in the coming years, because a job is more than just the salary at the end of the month: it is a day's work, purpose, identity, status and a role in society.

3. Autonomous weapons

Elon Musk warned the United Nations about autonomous weapons driven by artificial intelligence. Together with 115 other experts, he pointed out the potential threat of autonomous war equipment. That makes sense: these are powerful resources that can cause a great deal of damage. And it is not only military equipment that is dangerous; because the technology is becoming easier, cheaper and more user-friendly, anyone can get hold of it, including people with bad intentions.

For a thousand dollars you can already buy a very good drone with a camera. A whizzkid can install software on it so that the drone flies autonomously, and artificially intelligent facial recognition software already exists with which the drone's camera can recognise a face and follow that person.

4. Fake News

Smart systems are capable of creating content: generating faces, composing texts, producing tweets, manipulating images, cloning voices and targeting advertising. AI systems can transform winter into summer and day into night, and can create lifelike faces of people who never existed. Open-source "DeepFake" software can paste photos of faces onto moving video, so that you appear on video doing something you never did. Celebrities are already suffering from this, because malicious parties can easily produce pornographic videos starring them. Once this technology becomes a bit more user-friendly, extorting anyone will be child's play.

You can take a photo of anyone and paste it into porn. Artificially intelligent systems that create fake content also pose a risk of manipulation and influence by companies and governments. Content can be produced at such speed and scale that opinions are swayed and fake news is pushed into the world with great force, specifically aimed at people who are susceptible to manipulation, framing and influence. These practices are already a reality, as we saw with Cambridge Analytica, the company that managed to extract data from 87 million Facebook profiles of Americans and used this data in a campaign to bring President Trump to power. With artificial intelligence, companies and governments with bad intentions have a powerful tool in their hands.

5. Hacking algorithms

Artificially intelligent systems are becoming increasingly smart and will be able to spread malware and ransomware quickly and on a large scale. They are also getting better at penetrating systems and cracking encryption and security, as recently demonstrated with CAPTCHA keys. Especially as the power of artificial intelligence grows further, we will have to look critically at our current encryption methods. Ransomware-as-a-service is constantly improving thanks to artificial intelligence.

Other computer viruses are also getting smarter through trial and error. In hospitals, for example, more and more equipment is connected to the internet. What if the digital systems there are hacked with ransomware, software that blocks entire computer systems until a ransom is paid? You don't even want to think about someone causing, or threatening to cause, a massive outage of pacemakers.



6. Data poisoning

Similar to the falsification of healthcare lab results or CT scans, hackers could use machine-learning algorithms to wage data-poisoning attacks on automated biotech supply chains. As bio-experiments are increasingly run by software, malware could corrupt engineering instructions, leading to the contamination of vital stocks of antibiotics, vaccines and expensive cell therapies.
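The mechanics of data poisoning can be illustrated at toy scale: an attacker who can slip fabricated samples into a training set silently degrades the model trained on it, without touching the model itself. Below is a minimal Python sketch using a simple nearest-centroid classifier on synthetic data; the dataset and attack are invented for illustration, as real attacks target production training pipelines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two well-separated classes.
X = np.vstack([rng.normal(0, 0.5, (50, 2)),    # class 0, clustered near (0, 0)
               rng.normal(4, 0.5, (50, 2))])   # class 1, clustered near (4, 4)
y = np.array([0] * 50 + [1] * 50)

def train_centroids(X, y):
    """A nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def accuracy(model, X, y):
    preds = [min(model, key=lambda c: np.linalg.norm(x - model[c])) for x in X]
    return float(np.mean(np.array(preds) == y))

clean_model = train_centroids(X, y)

# Poisoning: the attacker injects 30 fabricated samples far from both
# clusters, all labelled class 1, dragging the learned class-1 centroid
# away from the genuine class-1 data.
X_poisoned = np.vstack([X, np.full((30, 2), -10.0)])
y_poisoned = np.concatenate([y, np.ones(30, dtype=int)])
poisoned_model = train_centroids(X_poisoned, y_poisoned)

print(accuracy(clean_model, X, y))     # near 1.0 on the clean model
print(accuracy(poisoned_model, X, y))  # sharply degraded after poisoning
```

Note that the training code itself is untouched: the damage comes entirely from the corrupted data, which is exactly what makes these attacks hard to spot in an automated pipeline.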



7. Genetic engineering and biowarfare

Cloud labs let you control up to 50 types of bio-experiments from anywhere in the world, sitting at your computer. Hackers could exploit such automated workflows to modify the genetic makeup of E. coli bacteria and turn them into a multi-drug-resistant bio-agent. As a next step, hackers could harness off-the-shelf drones equipped with aerosols to spread the resistant bacteria into water systems or onto farms. Farmers already use drones to spray insecticides on crops.

Such a combination of data poisoning, weaponised bio-manufacturing and manipulation of strategic information would carry drastic economic costs and potentially lethal outcomes for populations.

We have mentioned just 7 risks here, but we could have listed many more. Some questions we may have to ask ourselves are:

Will we control AI, or will it control us? Will AI replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence?

Please join the conversation by commenting on this blog post.

P.S. Download the Labs-Report-AI-gone-awry.pdf from Malwarebytes.
