2024 Political Peril: The Growing Threat of AI Misleading Voters

Computer engineers and technology-minded political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video, and audio convincing enough to deceive voters and perhaps sway an election.

The synthetic images that did emerge were usually crude, unconvincing, and costly to produce, especially when other kinds of misinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called "deepfakes" always seemed a year or two away.

No more.

Modern generative AI tools can now produce cloned human voices and hyper-realistic images, video, and audio quickly and cheaply. When coupled with powerful social media algorithms, this fake and digitally created content can spread rapidly and target highly specific audiences, lowering the bar for dirty campaign tricks.


The ramifications for the campaigns and elections in 2024 are significant and unsettling: In addition to producing targeted campaign emails, messages, or videos quickly, generative AI has the potential to be used to deceive voters, pose as candidates, and sabotage elections at a scale and pace that have never before been achieved.

"We're not prepared for this," said A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. "To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale and distribute it on social platforms, it's going to have a major impact."

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media that misleads voters, smears a politician, or even incites violence.


To name a few: videos of candidates giving speeches or interviews they never gave; automated robocall messages in a politician's cloned voice directing people to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing bigoted views; and fake images designed to look like local news reports falsely claiming a candidate had dropped out of the race.

"What if Elon Musk personally calls you and tells you to vote for a certain candidate?" said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. "A lot of people would listen. But it's not him."

Former president Donald Trump, who is vying for the presidency in 2024, has distributed AI-produced content to his social media followers. An AI voice-cloning technique was used to generate an altered video of CNN anchor Anderson Cooper that Trump uploaded on his Truth Social platform on Friday, distorting Cooper's response to Trump's town hall on CNN last week.


Another glimpse of this digitally altered future came in a dystopian political ad released by the Republican National Committee last month. Published after President Joe Biden announced he would run for re-election, the online ad opens with a strange, slightly warped image of Biden and the line, "What if the weakest president we've ever had was re-elected?"

A series of AI-generated images follows: armed soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants spread panic; Taiwan under attack; the U.S. economy collapsing; boarded-up storefronts everywhere.

The RNC's description of the advertisement states, "An AI-generated look into the country's potential future if Joe Biden is re-elected in 2024."

The RNC acknowledged its use of AI, but others, from unscrupulous political campaigns to hostile foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups seeking to meddle with American democracy will deploy AI and synthetic media to erode trust.

"What happens when a cybercriminal or a nation-state impersonates someone on a global scale? What is the impact? Do we have any recourse?" Stoyanov said. "We're going to see a significant rise in misinformation coming from foreign sources."


Ahead of the 2024 election, AI-generated political misinformation has already gone viral online, including doctored videos of Biden appearing to disparage transgender people and images of children supposedly learning satanism in libraries.

AI images purporting to show Donald Trump's mug shot also fooled some social media users, even though the former president did not have one taken when he was booked and arraigned in a Manhattan criminal court on charges of falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge how they were made.

Rep. Yvette Clarke, D-N.Y., has introduced legislation in the House that would require candidates to label campaign ads created with AI. She has also sponsored a bill that would require anyone creating synthetic images to add a watermark indicating as much.

Some states have provided their own suggestions for tackling deepfake issues.

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create video or audio that incites violence and turns Americans against one another.

"It's important that we keep pace with the technology," Clarke said. "We've got to set up some guardrails. People can be deceived in a split second. People are busy with their lives and they don't have the time to check every piece of information. AI being weaponized in a political season could be extremely disruptive."

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them "a deception" with "no place in legitimate, ethical campaigns."

Other forms of AI have long been a feature of political campaigning, using data and algorithms to automate tasks such as finding donors or targeting voters on social media. Campaign strategists and tech entrepreneurs hope the latest advances will deliver some benefits in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT "every single day" and encourages his staff to do the same, as long as any content drafted with the tool is reviewed by human eyes afterward.

Nellis' newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send, and evaluate the effectiveness of fundraising emails, all time-consuming tasks for campaigns.
