Generative AI’s Impact on Cybersecurity – Q&A With an Expert

In the ever-evolving landscape of cybersecurity, the integration of generative AI has become a pivotal point of discussion. To delve deeper into this groundbreaking technology and its impact on cybersecurity, we turn to renowned cybersecurity expert Jeremiah Fowler. In this exclusive Q&A session with vpnMentor, Fowler sheds light on the critical role that generative AI plays in safeguarding digital environments against evolving threats.

[…]

Not long ago, it was far easier to identify a phishing attempt, but now that they have AI at their disposal, criminals can personalize their social engineering attempts using realistic identities, well-written content, or even deepfake audio and video. And, as AI models become more intelligent, it will become even harder to distinguish human-generated from AI-generated content, making it more difficult for potential victims to detect a scheme.

[…]

There are numerous examples of generative AI being used in recent cyberattacks. The Voice of SecOps report released by Deep Instinct found that 75% of security professionals surveyed saw an increase in cyberattacks in 2023, and that 85% of respondents attributed that rise to bad actors using generative AI.

[…]

Currently, several malicious generative AI solutions are available on the Dark Web. Two examples of malicious AI tools designed for cybercriminals to create and automate fraudulent activities are FraudGPT and WormGPT. These tools can be used by criminals to easily conduct realistic phishing attacks, carry out scams, or generate malicious code. FraudGPT specializes in generating deceptive content, while WormGPT focuses on creating malware and automating hacking attempts.

These tools are extremely dangerous and pose a very serious risk because they allow unskilled criminals with little or no technical knowledge to launch highly sophisticated cyberattacks. With a few command prompts, perpetrators can easily increase the scale, effectiveness, and success rate of their cybercrimes.

[…]

According to the 2023 Microsoft Digital Defense Report, researchers identified several cases where state actors attempted to access and use Microsoft’s AI technology for malicious purposes. These actors were associated with various countries, including Russia, China, Iran, and North Korea. Ironically, each of these countries has strict regulations governing cyberspace, and it would be highly unlikely for large-scale attacks to be carried out there without some level of government oversight. The report noted that malicious actors used generative AI models for a wide range of activities, such as spear-phishing, hacking, drafting phishing emails, researching satellite and radar technologies, and targeting U.S. defense contractors.
Hybrid disinformation campaigns — where state actors or civilian groups combine humans and AI to create division and conflict — have also become a serious risk. There is no better example of this than the Russian troll farms. […]

Earlier this year, fake X (formerly Twitter) accounts — which were actually Russian bots pretending to be real people from the U.S. — were programmed to post pro-Trump content generated by ChatGPT. The scheme came to light in June 2024, when the pre-programmed posts began displaying ChatGPT error messages because the accounts’ API credits had gone unpaid.

 

This screenshot shows a translated tweet from X indicating that a bot using ChatGPT was out of credits.

A few months later, the U.S. Department of Justice announced that Russian state media had been paying American far-right social media influencers as much as $10 million to echo narratives and messages from the Kremlin in yet another hybrid disinformation campaign.

[…]

The trepidation regarding AI’s role in creating security threats is very real, but some time-tested advice is still valid — keeping software updated, applying patches where needed, and deploying endpoint security on all connected devices can go a long way. However, as AI becomes more advanced, it will likely make it easier for criminals to identify and exploit more complex vulnerabilities. So, I highly recommend implementing network segmentation too — by isolating individual segments, organizations can limit the spread of malware and prevent unauthorized access from reaching the entire network.

Ultimately, the most important thing is to have continuous monitoring and investigate all suspicious activity.
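As a purely illustrative sketch of what that kind of continuous monitoring can look like in practice, the following Python script scans a syslog-style authentication log for repeated failed logins from the same source address and flags them for investigation. The log path, threshold, and log format are assumptions and would need to be adapted to each environment.

```python
import re
from collections import Counter

# Assumed log location and alert threshold -- adjust for your environment.
AUTH_LOG = "/var/log/auth.log"
FAILED_LOGIN_THRESHOLD = 5

# Matches syslog-style "Failed password ... from <ip>" lines (sshd format).
FAILED_LOGIN_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def find_suspicious_sources(log_path: str) -> dict[str, int]:
    """Count failed login attempts per source IP and return those over the threshold."""
    failures = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN_RE.search(line)
            if match:
                failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD}

if __name__ == "__main__":
    for ip, count in find_suspicious_sources(AUTH_LOG).items():
        # In practice this would feed a SIEM or ticketing system rather than print.
        print(f"Investigate {ip}: {count} failed login attempts")
```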

[…]

One recent example of self-evolving malware that uses AI to constantly rewrite its code is “BlackMamba”, a proof-of-concept, AI-enhanced malware created by researchers at HYAS Labs to test how far the technique could go. BlackMamba was able to evade detection by even the most sophisticated cybersecurity products, including an industry-leading EDR (Endpoint Detection and Response) solution.

Generative AI is also being used to enhance evasion techniques and generate malicious content. For example, Microsoft researchers were able to get nearly every major AI model to bypass its own restrictions on creating harmful or illegal content. In June 2024, Microsoft published details about what it named “Skeleton Key” — a multi-step jailbreak that eventually gets the AI model to provide prohibited content. Additionally, AI-powered tools can bypass traditional cybersecurity defenses (like CAPTCHA) that are intended to filter out bot traffic so that, theoretically, only humans can access accounts or content.

Criminals are also using generative AI to enhance their phishing and social engineering scams.

[…]

The most well-known case to date happened in Hong Kong in early 2024. Criminals used deepfake technology to create a video showing a company’s CEO requesting that the CFO transfer $24.6 million. Since nothing in the video suggested it was not authentic, the CFO unknowingly transferred the money to the criminals.

[…]

Although AI cannot — and should not — fully replace the human role in the incident response process, it can assist by automating detection, triage, containment, and recovery tasks. Any tools or actions that help reduce response times will also limit the damage caused by cyber incidents. Organizations should integrate these technologies into their security operations and be prepared for AI-enhanced cyberthreats because it is no longer a matter of “if it happens” but “when it happens”.
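As a rough illustration of how AI-assisted tooling can automate triage and containment decisions while leaving final judgment to a human analyst, the following Python sketch scores incoming alerts and recommends an action. The categories, weights, and threshold are hypothetical placeholders, not values from any real product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    category: str      # e.g. "phishing", "ransomware", "port_scan"
    confidence: float  # detector confidence, 0.0 - 1.0

# Hypothetical severity weights; a real deployment would tune these per organization.
SEVERITY = {"ransomware": 0.9, "phishing": 0.6, "port_scan": 0.3}
CONTAIN_THRESHOLD = 0.7

def triage(alert: Alert) -> str:
    """Return an automated action: contain immediately, escalate to an analyst, or just log."""
    score = SEVERITY.get(alert.category, 0.5) * alert.confidence
    if score >= CONTAIN_THRESHOLD:
        return f"contain: isolate {alert.host} from the network and open an incident"
    if score >= 0.4:
        return f"escalate: queue {alert.host} for analyst review"
    return f"log: record {alert.host} alert for trend analysis"

if __name__ == "__main__":
    print(triage(Alert(host="finance-laptop-12", category="ransomware", confidence=0.95)))
    print(triage(Alert(host="dev-vm-03", category="port_scan", confidence=0.5)))
```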

Generative AI can help cybersecurity by creating realistic risk scenarios for both training and penetration testing.
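For example, a security team could prompt a general-purpose language model to draft fictional tabletop-exercise scenarios for awareness training. The sketch below assumes the OpenAI Python SDK (v1.x) and an API key in the OPENAI_API_KEY environment variable; any comparable model or API could be substituted, and the model name shown is only an example.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK v1.x

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def generate_tabletop_scenario(industry: str, threat: str) -> str:
    """Ask the model for a realistic but fictional incident scenario for awareness training."""
    prompt = (
        f"Write a short, fictional tabletop-exercise scenario for a {industry} company "
        f"facing a {threat} incident. Include a timeline and three discussion questions. "
        "Do not include any real names, domains, or working exploit details."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever model is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_tabletop_scenario("regional hospital", "AI-generated spear-phishing"))
```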

[…]

What are the future risks of AI providers having vulnerabilities or data exposures?

Researchers at Wiz found two non-password-protected databases, belonging to DeepSeek, that contained just under 1 million records. AI models generate a massive amount of data, and that data needs to be stored somewhere, so it makes sense that there would be a database full of learning content, monitoring and error logs, and chat responses. Theoretically, this data should have been segregated from the administrative production environment or protected with additional access controls to prevent an unauthorized intrusion. The vulnerability allowed researchers to access administrative and operational data, and the fact that anyone with an Internet connection could potentially have manipulated commands or code scripts should be a major concern to the DeepSeek organization and its users. Additionally, exposing secret keys or other internal access credentials is an open invitation for disaster and what I would consider a worst-case scenario. This is a prime example of how important it will be for AI developers to secure and protect their users’ data and the internal backend code of their products.
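One practical takeaway from incidents like this is to keep credentials out of code and configuration that could end up exposed. As a simple illustration (not tied to any specific vendor), the following Python sketch scans a source tree for strings that look like hard-coded API keys or private keys; the regular expressions are intentionally naive, and real projects would use a dedicated secret-scanning tool.

```python
import re
from pathlib import Path

# Naive patterns for common credential formats -- illustrative only, not exhaustive.
SECRET_PATTERNS = {
    "generic api key": re.compile(r"(?i)(api[_-]?key|secret[_-]?key)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "aws access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(root: str) -> list[tuple[str, str]]:
    """Walk a source tree and report files that appear to contain hard-coded credentials."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings

if __name__ == "__main__":
    for file_path, label in scan_for_secrets("."):
        print(f"Possible {label} in {file_path} -- move it to a secrets manager")
```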

[…]

Source: Generative AI’s Impact on Cybersecurity – Q&A With an Expert

Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

 robin@edgarbv.com  https://www.edgarbv.com