Seeing isn't believing: The rise of AI-powered deception and how to guard against it

By Erich Kron, Security Awareness Advocate, KnowBe4.

• Wednesday, 11th September 2024

The era of artificial intelligence, and most recently generative AI, is revolutionizing both business and social interactions. While AI holds both positive and perilous potential, it is being embraced, by and large, by businesses of all types and sizes. As these systems grow more sophisticated, the line between truth and fiction is blurring, creating a landscape of AI-enabled fakery that poses serious challenges even as it presents opportunities.

From deepfake videos that can impact stock prices to AI-generated misinformation campaigns that can sway elections, the risks are real. But how exactly does AI power deception and, most importantly, what can business leaders do about it?

Examples of AI-Powered Deception

AI-powered deception can take a variety of forms, from the simple to the complex.

At a very basic level, we’re all likely familiar with fake social media accounts—or bots—designed to spread disinformation across platforms. AI-powered bots can mimic human behavior, making them difficult to detect, and can sway opinion through deception and misinformation.

Take it a step further, and AI-generated text can be disseminated not only on social media but through the news media and other purportedly legitimate sites. Current large language models (LLMs) can generate human-like text on virtually any topic, making it possible to create fake news articles, blog posts, and social media content at scale.
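To appreciate how low the barrier is, consider the following minimal sketch. It assumes Python and the open-source Hugging Face transformers library with the small, freely available GPT-2 model (neither is named in this article); today’s larger models are far more fluent, so this if anything understates the problem:

```python
# A minimal sketch of bulk text generation, assuming the Hugging Face
# "transformers" library and the small, freely available GPT-2 model.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output repeatable
generator = pipeline("text-generation", model="gpt2")

# One short prompt yields several distinct, fluent continuations; looped
# over many prompts, the same few lines could churn out thousands of
# plausible-sounding "articles" per hour at negligible cost.
outputs = generator(
    "Local officials announced today that",
    max_new_tokens=60,
    num_return_sequences=3,
    do_sample=True,
)
for i, out in enumerate(outputs, 1):
    print(f"--- Variant {i} ---\n{out['generated_text']}\n")
```

The point is not the quality of any single output but the economics: fluent text at effectively zero marginal cost is what makes disinformation at scale viable.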

For instance, in 2023, the Republican National Committee released an ad containing AI-generated images depicting a dystopian future. While clearly fictional, the ad still raised concerns about the use of synthetic media in political advertising. Fake AI-generated news articles have also circulated online, often indistinguishable from real reporting to the average reader. And there have been multiple reports of scammers using AI voice cloning to impersonate family members and trick people into sending money.

From a less nefarious, although still troublesome, standpoint, users of generative AI tools like ChatGPT are likely familiar with the tendency of these tools to “hallucinate” or make things up, leading to the proliferation of false or inaccurate information if users don’t verify the facts.


Deepfakes and synthetic media are highly realistic videos or images created using AI algorithms, which can depict individuals saying or doing things they never actually said or did. The implications, clearly, are sobering. With the help of AI-powered tools, even individuals with limited technical skills can fabricate convincing videos of public figures or private individuals, which can then be used to spread disinformation or harass and intimidate targets. For instance, in March 2022, a deepfake video of President Zelenskyy calling on Ukrainian troops to surrender was widely distributed, although quickly debunked.

And it can get even more frightening. Personalized propaganda can be used to manipulate individuals by analyzing user data and behavior, and microtargeting them through AI algorithms that deliver tailored disinformation based on their personal interests, fears, and biases.

Implications of AI-Powered Deception

The implications are clear. As people are exposed to a constant stream of manipulated content and conflicting narratives, they may become increasingly uncertain about who and what to believe. On a broader scale, the potential for AI to be used in sophisticated state-sponsored disinformation campaigns poses a significant threat to national security and the stability of global geopolitics.

So, what can organizations and individuals do to combat these risks? Quite a lot, as it turns out.

Steps to Minimize Risk

• Develop critical thinking and media literacy skills. Educate yourself and others about the biases and errors AI can introduce. Be critical of the information you are given, and make sure you are not, however inadvertently, spreading misinformation yourself. In a world of deepfakes, a questioning mindset is your first line of defense: make a habit of doubting before believing, and always seek to verify information through multiple credible sources.

• Always double-check the information provided by AI tools, especially for critical tasks or decisions. If you hear or read something that seems “incredible,” check it out. Use, and encourage staff to use, the SIFT method: Stop, Investigate the source, Find better coverage, Trace claims to their original context.

• Be aware of emotional manipulation tactics. Your ability to understand and manage your own emotions is an important guard against potential manipulation. Practice mindfulness, self-reflection, and emotional regulation to keep your feelings from being used against you.

• Use AI-detection tools and fact-checking resources. Tools and resources are emerging to guard against the spread of misinformation; while AI-detection tools are not yet infallible, they can be a useful aid (see the sketch after this list for one simple detection heuristic). Popular fact-checking services known for their commitment to unbiased reporting include FactCheck.org, a nonpartisan nonprofit dedicated to reducing deception and confusion in U.S. politics; PolitiFact, a Pulitzer Prize-winning site that rates statements by politicians and public figures on its “Truth-O-Meter” scale, from “True” to “Pants on Fire”; and Snopes, which has been debunking urban legends, hoaxes, and misinformation since 1994.


• Implement organizational policies, and cybersecurity awareness training, that cover the use of AI and set out the steps staff should take to ensure accuracy and limit the spread of misinformation.
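On the detection point above: one heuristic that some AI-text detectors build on is perplexity, a measure of how predictable a passage is to a language model, since machine-generated prose tends to score as more predictable than human writing. The sketch below is illustrative only and assumes Python, the Hugging Face transformers library, and the small GPT-2 model (none of which are named in this article); production detectors combine many signals and still make mistakes:

```python
# A minimal sketch of perplexity-based AI-text screening, assuming the
# Hugging Face "transformers" library and the small GPT-2 model.
# Unusually low perplexity (the model finds the text highly predictable)
# is one weak signal that a passage may be machine-generated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable `text` is to GPT-2 (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the input IDs as labels makes the model report its own
        # next-token prediction loss over the passage.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

sample = "The quarterly results exceeded analyst expectations across all segments."
print(f"Perplexity: {perplexity(sample):.1f}")
```

Treat any single score as a weak signal at best, and pair automated checks with the human verification habits described above.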

Finally, develop a healthy “inner skeptic,” and work with staff to do the same. Approach information generated by AI, or any claim that seems outrageous or questionable, with due caution, especially when it seems too good (or bad) to be true.

In an AI age, seeing (or hearing) is no longer believing. Today’s business executives must be increasingly savvy as they seek to understand both the perils and positive potential of AI.
