Artificial intelligence developers are close to releasing tools that will make it trivial to produce highly realistic synthetic videos.
A prominent expert in AI has stated that in confidential experiments, they have reached a point where distinguishing between artificial and authentic videos is no longer possible, a milestone that was not expected to be reached so soon.
This capability is expected to become widely accessible, including to hostile foreign actors, potentially as early as the beginning of 2024. Compounding the problem, major social media platforms have cut their content moderation workforces and loosened their misinformation policies.
As the 2024 presidential campaign intensifies, more people will have access to more sophisticated means of producing and spreading false information across platforms with diminished oversight, and the confusion witnessed in 2020 is likely to pale in comparison.
A high-ranking ex-official in national security has indicated that leaders such as Russia's Vladimir Putin view these AI tools as an effortless, economical, and scalable strategy to sow discord among Americans. According to U.S. intelligence, Russia previously attempted to influence the 2020 election to favor the re-election of the former President. There is concern among top officials in the U.S. and Europe that Putin may attempt to interfere in the 2024 election to support a candidate who is inclined to reduce U.S. assistance to Ukraine.
By the year 2025, it is estimated that over 90% of online material may be produced by AI, according to some projections.
This could lead to what is termed "model collapse," where AI systems degrade because they are increasingly trained on content generated by other AIs rather than by humans. Against this backdrop, the White House and certain congressional leaders are pushing for regulations to distinguish authentic from synthetically generated videos. The leading proposal is mandatory watermarking to clearly identify AI-created videos.
However, attempts to develop such technology have been met with challenges, as effective watermarking solutions have not been fully realized yet. Furthermore, with a politically divided Congress, especially in the period before a presidential election, the introduction of robust regulations is not anticipated.
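To see why "clearly identify AI-created videos" is harder than it sounds, consider a naive provenance scheme that cryptographically tags the exact bytes of a file. The sketch below is purely hypothetical (the key, label, and function names are illustrative, not any proposed standard): because the tag is bound to the raw bytes, any ordinary re-encode, crop, or edit silently invalidates it, which is why researchers pursue robust watermarks embedded in the content itself rather than attached to the file.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration; real provenance schemes
# (e.g. C2PA-style manifests) use asymmetric signatures, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def label_content(content: bytes) -> bytes:
    """Bind the label 'ai-generated' to the exact bytes of the content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, b"ai-generated:" + digest, hashlib.sha256).digest()

def verify_label(content: bytes, tag: bytes) -> bool:
    """Check that the tag matches the content exactly as labeled."""
    return hmac.compare_digest(label_content(content), tag)

video = b"\x00\x01\x02stand-in-for-video-bytes"
tag = label_content(video)

print(verify_label(video, tag))             # untouched content verifies: True
print(verify_label(video + b"\x00", tag))   # one changed byte breaks it: False
```

The fragility is the point: a label attached to file bytes disappears the moment anyone re-uploads or re-compresses the video, so effective watermarking has to survive transformations of the content itself, and that is the part that has not been fully solved.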
Determining which content is "AI-generated" is also becoming increasingly difficult as AI is integrated more deeply into the tools used for creating and editing media.
Reid Hoffman, the co-founder of LinkedIn and a vocal advocate for AI, expresses concern over this issue. He notes the potential for negative consequences when AI-generated content is combined with the amplification power of social media platforms.
Hoffman argues against open-source models, which are freely available for anyone to use, viewing them as substantial threats due to the lack of oversight. He advocates for and is involved with closed models, such as OpenAI's ChatGPT, which have mechanisms for self-regulation.
The administration and certain specialists have raised alarms that open-source AI models, like the one developed by Meta, may be exploited for nefarious purposes.
The stakes are high enough that some AI developers say they are accelerating the rollout of more advanced AI technologies precisely so the public has ample time to understand and adjust to the potential repercussions well before the election.
A high-ranking official from the administration expressed that the primary concern is the potential for this technology, along with other AI tools, to mislead voters, defraud consumers on a large scale, and orchestrate cyberattacks.
Another alarming misuse is the creation of non-consensual synthetic pornography, a deepfake analogue of revenge porn, which has been one of the most prevalent forms of abuse in the early stages of AI misuse.
The reality is that there is a limited scope for what the government or any individual can do to prevent the onslaught of counterfeit content.
Self-protection is key:
Stay vigilant. Being informed, such as by reading this discussion, is a good step. Maintain a high level of scrutiny for online content before taking any action.
Inform others. It’s important to educate those around you, particularly younger generations, about these emerging challenges. Discuss the issue openly and don’t rely solely on external entities for protection.
Curate your digital environment. Consider managing your social media presence by being selective about the apps you use and the accounts you follow. Disconnect from sources known to disseminate falsehoods.
Exercise discernment when sharing. Avoid engaging with or spreading content unless you’re confident of its authenticity. If something strikes you as suspect, it probably is.
Get involved. The concept of AI may be daunting, but it’s an integral part of the future. By educating yourself, you can better navigate the AI landscape and leverage it positively in your life.