“We’ve seen a lot of the ransomware attacks and what it’s done. And in particular, it has impacted business resiliency. It’s no longer the case of encrypted computer and, you know, Reimage, and carry on. It’s impacting massive amounts of business and costing hundreds of millions of dollars.”
George Kurtz, CEO at CrowdStrike Holdings, Inc.
Using intelligent algorithms for cyberattack prevention has become one of the most significant AI innovations of recent years. Forward-thinking enterprises leverage artificial intelligence to detect suspicious traffic within corporate IT networks, identify malicious software, spot infected links in employees’ emails, and even model cyberattack scenarios based on vulnerability assessments.
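To make the "suspicious traffic" use case concrete, here is a deliberately simplified sketch: instead of a trained model, a statistical baseline stands in for the detector, and all traffic figures are hypothetical. Real products use far richer features and learned models, but the core idea of "flag what deviates from normal" is the same.

```python
# Minimal sketch of anomaly-based traffic detection.
# Assumptions: the baseline numbers and threshold below are hypothetical,
# and a z-score baseline stands in for a trained AI model.
import statistics

# Hypothetical requests-per-minute from a host during normal operation.
baseline = [98, 103, 97, 105, 99, 101, 102, 96, 104, 100]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_suspicious(requests_per_minute, threshold=3.0):
    # Flag traffic more than `threshold` standard deviations above normal.
    return (requests_per_minute - mu) / sigma > threshold

print(is_suspicious(101), is_suspicious(450))  # normal vs. a traffic spike
```

A spike to 450 requests per minute sits far outside the fitted baseline and gets flagged, while 101 does not.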
The trouble is, the other side (i.e., cybercriminals) can work that magic too. For example, hackers can now tamper with the data used for AI model training, a technique known as data poisoning, to mount adversarial attacks. Other AI breakthroughs in the cybercrime field include inference attacks, which allow attackers to recover sensitive data by reverse-engineering AI systems, and advanced social engineering techniques driven by behavior analytics. Additionally, hackers may employ AI to pinpoint security vulnerabilities in corporate IT systems.
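The adversarial-attack idea can be shown in a few lines. The sketch below uses a toy linear "malware classifier" with hypothetical, hand-picked weights (not any real product's model): because the attacker can read the model's gradient, a small, targeted nudge to the input is enough to flip the verdict from malicious to benign.

```python
# Illustrative sketch of an adversarial (evasion) attack.
# Assumptions: the weights, bias, and feature values are hypothetical;
# a linear model stands in for a real malware classifier.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights over made-up features
# (e.g., file entropy, suspicious API-call rate, packer score).
w = np.array([2.0, -1.0, 0.5])
b = -0.2

def predict_malicious(x):
    return sigmoid(w @ x + b) > 0.5

# A sample the model currently flags as malicious.
x = np.array([1.0, 0.3, 0.8])

# Fast-gradient-style evasion: step against the gradient of the score.
# For a linear model, d(score)/dx is simply the weight vector.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict_malicious(x), predict_malicious(x_adv))  # flagged vs. evaded
```

The perturbed sample differs from the original by at most 0.6 per feature, yet the classifier's decision flips, which is exactly what makes adversarial attacks so troubling for security tooling.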
To address these looming threats, companies looking to deploy AI innovations should closely monitor all the data presented to their AI models, vet the real-world data used for model training, and build elements of randomness into their artificial intelligence models.
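The "elements of randomness" defense can be sketched as well. One common form is randomized smoothing: rather than trusting a single forward pass, the model votes over many noise-perturbed copies of the input, so a small adversarial nudge is far less likely to flip the final decision. The scoring function below is a hypothetical stand-in, not a real product's model.

```python
# Illustrative sketch of randomized smoothing as a defense.
# Assumptions: base_score is a hypothetical linear stand-in for a trained
# model, and the noise level and sample count are illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=42)

def base_score(x, w=np.array([2.0, -1.0, 0.5]), b=-0.2):
    # Hypothetical scoring function: positive score means "malicious".
    return float(w @ x + b)

def smoothed_predict(x, n_samples=500, sigma=0.3):
    # Majority vote over many Gaussian-noised copies of the input.
    votes = 0
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        votes += base_score(noisy) > 0.0
    return votes / n_samples > 0.5

x = np.array([1.0, 0.3, 0.8])
print(smoothed_predict(x))  # stable "malicious" verdict despite the noise
```

The trade-off is extra compute per prediction, but an attacker must now shift the model's decision across most of the noisy neighborhood of the input, not just at one point.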
Below you will find several statistics shedding light on artificial intelligence trends in the cybersecurity domain: