In the midst of a high-stakes election held during a blistering heat wave, a storm of confusing deepfakes is sweeping across India. The variety seems endless: AI-powered voice mimicry, ventriloquism, and trick editing. Some are crude, some are humorous, and some are so obviously false that no one could be expected to mistake them for real.
The overall effect is disorienting, adding to a social media landscape already inundated with misinformation. The volume of online detritus is far too large for any election commission to track, let alone debunk.
A scrappy assortment of fact-checking outfits has emerged to fill the void. As the wheels of the law turn slowly and unevenly, the task of tracking down deepfakes has fallen to hundreds of government workers and private fact-checking groups based in India.
“We have to be prepared,” said Surya Sen, a forestry official in Karnataka state who has been reassigned during the election to manage a team of 70 people searching for misleading AI-generated content. “Social media is a battleground this year.” When Mr. Sen’s team finds content they believe is illegal, they tell social media platforms to remove it, publicize the hoax, or even ask for criminal charges to be filed.
Celebrities have become common fodder for politically pointed stunts, including Ranveer Singh, a Hindi film star.
During a videotaped interview with an Indian news agency on the Ganges River in Varanasi, Singh praised the country's powerful prime minister, Narendra Modi, for celebrating “our rich cultural heritage.” But that is not what viewers heard when an altered version of the video, with a voice that sounded like Mr. Singh’s and near-perfect lip syncing, circulated on social media.
“We call these lip-sync deepfakes,” said Pamposh Raina, who heads the Deepfakes Analysis Unit, an Indian media collective that has opened a tip line on WhatsApp where people can submit suspicious videos and audio to be examined. She said Mr. Singh’s video was a typical example of authentic footage edited with an AI-cloned voice. The actor filed a complaint with the Cyber Crime Unit of the Mumbai Police.
In this election, no party has a monopoly on misleading content. Another doctored clip began with authentic footage showing Rahul Gandhi, Modi’s most prominent opponent, participating in the mundane ritual of being sworn in as a candidate. An AI-generated audio track was then added, in which Gandhi appeared to announce his resignation from his party.
In reality, Gandhi did not resign. The clip also inserted a personal jab, making Gandhi appear to say that he “could no longer pretend to be a Hindu.” The ruling Bharatiya Janata Party presents itself as a defender of the Hindu faith and its opponents as traitors or impostors.
Sometimes political deepfakes veer into the supernatural. Dead politicians have a way of coming back to life through strange AI-generated images that support their descendants’ real-life campaigns.
In a video that appeared a few days before voting began in April, a resurrected H. Vasanthakumar, who died of Covid-19 in 2020, spoke obliquely of his own death and blessed his son Vijay, who is running for his father’s old parliamentary seat in the southern state of Tamil Nadu. The appearance followed the example of two other late titans of Tamil politics, Muthuvel Karunanidhi and Jayalalithaa Jayaram.
Modi’s government has been crafting laws intended to protect Indians from deepfakes and other misleading content. A 2021 “IT Rules” law holds online platforms responsible, unlike in the United States, for all types of objectionable content, including impersonations intended to cause insult. The Internet Freedom Foundation, an Indian digital rights group that has argued these powers are far too broad, is tracking 17 legal challenges to the law.
But the prime minister himself seems receptive to some AI-generated content. A pair of videos produced with artificial intelligence tools show two of India’s most important politicians, Modi and Mamata Banerjee, one of his staunchest opponents, emulating a viral YouTube video of the American rapper Lil Yachty making “the HARDEST walk out EVER.”
Modi shared the video on X and said that such creativity was “a pleasure.” Election officials like Mr. Sen in Karnataka called it political satire: “A rock star Modi is fine and not a violation. People know this is false.”
Police in West Bengal, where Banerjee is chief minister, issued notices to some people for posting “offensive, malicious and inciting” content.
In its hunt for deepfakes, Sen said, his team in Karnataka, which works for an opposition-controlled state government, vigilantly combs social media platforms like Instagram and X, searching for keywords and repeatedly refreshing the accounts of popular influencers.
The Deepfakes Analysis Unit has 12 media fact-checking partners, including a pair close to Modi’s national government. Raina said her unit also works with outside forensic laboratories, including one at the University of California, Berkeley. The unit uses AI detection software such as TrueMedia, which scans media files and assesses whether they should be trusted.
Some tech-savvy engineers are perfecting artificial intelligence forensic software to identify which part of a video was manipulated, down to individual pixels.
Pratik Sinha, founder of Alt News, the most venerable of India’s independent fact-checking sites, said the possibilities of deepfakes have yet to be fully exploited. Someday, he said, videos could show politicians not only saying things they didn’t say but also doing things they didn’t do.
Dr. Hany Farid has taught digital forensics at Berkeley for 25 years and collaborates with the Deepfakes Analysis Unit on some cases. He said that while “we are detecting bad deepfakes,” if more sophisticated deepfakes were to enter the picture, they could go undetected.
In India, as elsewhere, the arms race between deepfakers and fact-checkers continues, with both sides battling it out. Dr. Farid described this as “the first year, I would say, that we really started to see the impact of AI in interesting and more nefarious ways.”