
Generative AI: Transforming Cyber Attacks from Malware to Deepfakes


During a recent cybersecurity conference, experts described how generative AI is sharpening hacking methods while stressing that it has not produced fundamentally new attack techniques. AI is speeding up malware production and the creation of malicious open-source utilities, and deepfake threats, though real, remain limited in impact. Overall, the fundamental tactics of hackers are largely unchanged.

At a cybersecurity conference in National Harbor, Maryland, experts discussed how artificial intelligence (AI) is reshaping the way hackers operate. Peter Firstbrook, a distinguished VP analyst at Gartner, shared insights about the dual role of generative AI in enhancing both social engineering and attack automation. However, he cautioned that while AI is changing the landscape, it hasn’t created radically new attack methods.

Firstbrook said AI’s real impact lies in easing the creation of malware: it significantly reduces the time even novice hackers need to develop tools capable of stealing sensitive data. “There’s no question that AI code assistants are a killer app for Gen AI,” he noted, highlighting the productivity leaps that hackers can achieve.

One concerning example: HP researchers recently reported that hackers used generative AI to help create a remote access Trojan. Firstbrook emphasized that attackers are evidently leveraging generative AI to churn out new malware. “It would be difficult to believe that the attackers are not going to take advantage of [this],” he said, reinforcing the point that attackers adapt quickly.

Firstbrook also noted a troubling trend of attackers using AI to produce malicious open-source utilities that pose as legitimate ones, endangering developers who unknowingly integrate harmful code into their applications. “If a developer is not careful and they download the wrong utility, their code could be backdoored before it even hits production,” Firstbrook warned.
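To make that risk concrete, here is a minimal sketch of one naive check a team might run over its dependency list: flagging package names that nearly, but not exactly, match a trusted allowlist, a classic typosquatting signal. The package names and allowlist are hypothetical, and this is an illustration of the idea rather than a substitute for real supply-chain controls such as lockfile auditing and signed packages.

```python
# Minimal sketch (hypothetical allowlist and package names): flag dependencies
# whose names closely resemble, but do not match, trusted package names.
import difflib

TRUSTED = {"requests", "numpy", "cryptography", "flask"}  # hypothetical allowlist


def audit(dependencies: list[str]) -> list[str]:
    """Return dependency names that look like typosquats of trusted packages."""
    suspicious = []
    for name in dependencies:
        if name in TRUSTED:
            continue
        # A near-miss on a trusted name is the typosquatting signal we look for.
        near = difflib.get_close_matches(name, TRUSTED, n=1, cutoff=0.8)
        if near:
            suspicious.append(f"{name} (resembles {near[0]})")
    return suspicious


if __name__ == "__main__":
    print(audit(["reqeusts", "numpy", "flaskk"]))
    # -> ['reqeusts (resembles requests)', 'flaskk (resembles flask)']
```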

Such tactics were possible before AI, but the speed at which malicious packages can now flood code repositories like GitHub makes security far harder to maintain. “It’s a cat-and-mouse game,” Firstbrook said, emphasizing how generative AI accelerates the attackers’ pace.

As for deepfakes, while they appear to be an emerging threat in phishing schemes, the current impact remains relatively contained. A recent Gartner survey found that 28% of organizations faced a deepfake audio attack, while 21% saw a deepfake video attack. Yet, only 5% experienced actual breaches resulting in financial or intellectual property theft. Firstbrook called this a “big new area” nonetheless.

Concerns also linger about AI’s capacity to scale attacks: automation lets attackers attempt far more intrusions, and higher volume can translate into greater profitability. He mused, “If I’m a salesperson, it typically takes 100 inquiries to get a ‘yes.’ So, if they can automate their attacks, they can move a lot quicker.”

However, Firstbrook reassured that not all fears surrounding generative AI have manifested just yet. He pointed out that no entirely new attack techniques have emerged from this technology so far. “So far, that has not happened, but that’s on the cusp of what we’re worried about,” he remarked.

Analysts reference the MITRE ATT&CK framework, which catalogs adversary tactics and techniques as they evolve. Despite the advancements, Firstbrook noted, “We only get one or two brand-new attack techniques every year,” stressing that while the tools and tactics may change, the systematic nature of attacks remains consistent.
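For readers who want to see what that catalog looks like in practice, the rough sketch below downloads MITRE’s public enterprise ATT&CK STIX bundle and counts how many technique entries (“attack-pattern” objects) were first created in each year. The URL and field names reflect the publicly available MITRE CTI repository as understood here and should be treated as assumptions to verify; note also that catalog additions include sub-techniques and refinements, not only the rare fundamentally new techniques Firstbrook describes.

```python
# Rough sketch: tally ATT&CK technique entries by the year they were created,
# using MITRE's public CTI STIX bundle. URL and field names are assumptions
# based on the public repository; verify against current MITRE documentation.
import json
import urllib.request
from collections import Counter

ATTACK_URL = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)


def techniques_created_per_year() -> Counter:
    with urllib.request.urlopen(ATTACK_URL) as resp:
        bundle = json.load(resp)
    years = Counter()
    for obj in bundle.get("objects", []):
        if obj.get("type") == "attack-pattern" and not obj.get("revoked", False):
            # "created" timestamps look like "2017-05-31T21:30:19.735Z"
            years[obj.get("created", "")[:4]] += 1
    return years


if __name__ == "__main__":
    for year, count in sorted(techniques_created_per_year().items()):
        print(year, count)
```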

In essence, generative AI is undoubtedly enhancing the capabilities of hackers, from fine-tuning malware to crafting deceptive code, but caution is warranted before overstating its impact on the creation of new attack techniques. As cybersecurity professionals navigate this fast-evolving landscape, it’s clear that the game has changed but the fundamentals remain. Keeping a vigilant eye on developments in AI and their implications for cybersecurity will be crucial going forward.

Original Source: www.cybersecuritydive.com

Amina Hassan is a dedicated journalist specializing in global affairs and human rights. Born in Nairobi, Kenya, she moved to the United States for her education and graduated from Yale University with a focus on International Relations followed by Journalism. Amina has reported from conflict zones and contributed enlightening pieces to several major news outlets, garnering a reputation for her fearless reporting and commitment to amplifying marginalized voices.
