Tech News Summary:
- Meta, the parent company of Facebook, Instagram, and Threads, is implementing new measures to tag and disclose AI-generated content on its platforms in response to concerns about potential misuse.
- Meta will begin tagging digitally created or modified images, videos, and audio on its platforms and requiring users to disclose if their content is generated by AI. They may also add prominent labels for content that poses a high risk of misleading the public.
- Meta is collaborating with industry partners to develop common standards for identifying AI content, including invisible markers embedded in AI-generated media. The company is also preparing for broader industry discussions on authenticating both synthetic and non-synthetic content, and working to stay ahead of potential misuse of emerging technologies such as AI.
Meta, the parent company of Facebook and Instagram, has announced new election safeguards aimed at combating misinformation and interference in political conversations online. As part of the effort, Meta will begin tagging posts and threads created or shared by AI systems.
The company said Tuesday it would start labeling posts and stories as “public interest information” if they meet certain criteria, such as being shared by politicians, candidates running for office, or public figures with more than 100,000 followers, Axios reported. In addition, threads in the comments section of public posts, which are designed to make it easier to follow conversations, will be monitored closely for election-related content.
The move comes amid widespread concerns about the impact of AI-generated content on social media, particularly in the context of political campaigns. The European Union’s proposed Digital Markets Act (DMA) prescribes “clear labelling of automated agents” on social media platforms.
Meta has been under pressure to take action in the lead-up to the 2022 midterms, with politicians and experts alike voicing concerns about the potential for AI-generated propaganda to influence voters.
In a statement, Meta said that the new measures are part of its ongoing efforts to prevent election interference and promote transparency on its platforms. The company also highlighted its partnerships with organizations and fact-checkers to identify and counter misinformation.
“We’re designing these features to help ensure people have more information about the posts they’re seeing in their feeds and to provide more context and agency over their experience,” the statement said.
The announcement comes as social media companies face increasing scrutiny over their handling of political content and misinformation, with lawmakers and regulators pressing Meta to curb the spread of harmful content and disinformation on its platforms.
The move is part of Meta’s broader effort to restore trust in its platforms and ensure that users have accurate information about political conversations online. It remains to be seen how effective the new safeguards will prove in practice, but the announcement is a step toward combating election-related misinformation.