The huge problem with AI cybersecurity that could leave you defenceless


So, I think we all know that artificial intelligence is going to be the next big thing. We want computers that can answer our questions, predict our needs and protect themselves. As tech companies scramble to meet the demand, we edge towards a more convenient existence.

But is it really safer?


The Black Hat conference in Las Vegas has just wrapped up for 2018, and AI was something that had everyone talking. Most of the conversation was positive, but there were a few concerns being thrown back and forth by some of the biggest names.

Why is AI in cyber security so problematic? Well, it isn't; the problem is the tech giants designing the programs.

The main issue is that these companies are rushing their products to market before they are ready. They are training their programs on a limited set of examples of 'clean' and 'corrupt' code. AI needs examples to learn from, so if there is only one type of corrupt code in the training data, that is the only type of corrupt code it will be able to identify.
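To see why limited training examples matter, here is a deliberately naive sketch (the patterns and function names are invented for illustration, and no real product works this crudely): a "detector" that has only ever seen one kind of malicious pattern will wave through anything it wasn't shown.

```python
# Toy illustration: a detector trained on a single example of
# "corrupt" code. Everything here is hypothetical.

KNOWN_MALICIOUS = {"eval(base64_decode("}  # the one pattern it learned


def is_flagged(code: str) -> bool:
    """Flag code only if it matches a pattern seen during training."""
    return any(pattern in code for pattern in KNOWN_MALICIOUS)


# Matches the training example, so it gets caught:
print(is_flagged("eval(base64_decode('...'))"))  # True

# A functionally similar but novel attack slips straight through:
print(is_flagged("exec(bytes.fromhex('...'))"))  # False
```

Real AI-based detectors generalise far better than a string match, but the underlying principle is the same: the model can only recognise threats that resemble what it was trained on.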

You can see the issue here. As soon as security software is released, hackers begin working on ways to bypass it. Sometimes these hackers work for the company releasing the software and are simply trying to identify flaws and fix them. More often, though, hackers are trying to crack it to get access to private information.

When your computers are guarded with AI that can only identify one type of malware, they don’t stay very secure for long.

Another issue arises when programmers fail to thoroughly vet the clean code they are feeding their AI. If anomalous data points in that code go unnoticed, the resulting software could leave users open to attack.

AI can be harnessed for so many great things, and we are on the brink of a revolution. But with any new technology comes the potential for it to be harnessed for crime and used to make the world a more difficult place to live in.

We can only hope that the companies developing AI software are weighing these risks and working to protect us.

Want to read more from RLR Distribution? Check out the "How Sexism in the Tech Industry is Dangerous to Us All" post.