AI-Powered Social Media Manipulation App Impact Helps Zealots Flood Posts With AI-Written Text That Looks Real

Impact, an app that describes itself as “AI-powered infrastructure for shaping and managing narratives in the modern world,” is testing a way to organize and activate supporters on social media in order to promote certain political messages. The app aims to summon groups of supporters who will flood social media with AI-written talking points designed to game social media algorithms.
In video demos and an overview document provided to people interested in using a prototype of the app, both viewed by 404 Media, Impact shows how it can send push notifications to groups of supporters directing them to a specific social media post and provide them with AI-generated text they can copy and paste in order to flood the replies with counterarguments.
[…]
The app also shows another way AI-generated content could continue to flood the internet and distort reality, in the same way it has distorted Google search results, books sold on Amazon, and ghost kitchen menus.
[…]
One demo video viewed by 404 Media shows one of the people who created the app, Sean Thielen, logged in as “Stop Anti-Semitism,” a fake organization with a Star of David icon (no affiliation to the real organization with the same name), filling out a “New Action Request” form. Thielen decides which users to send the action to and what they want them to do, like “reply to this Tweet with a message of support and encouragement” or “Reply to this post calling out the author for sharing misinformation.” The user can also provide a link to direct supporters to, and provide talking points, like “This post is dishonest and does not reflect actual figures and realities,” “The President’s record on the economy speaks for itself,” and “Inflation has decreased [sic] by XX% in the past six months.” The form also includes an “Additional context” box where the user can type additional detail to help the AI target the right supporters, like “Independent young voters on Twitter.” In this case, the demo shows how Impact could direct a group of supporters to a factual tweet about the International Court of Justice opinion critical of Israel’s occupation of the Palestinian territories and flood the replies with AI-generated responses criticizing the court and Hamas and supporting Israel.
[…]
Becca Lewis, a postdoctoral scholar at the Stanford Department of Communication, said that when discussing bot farms and computational propaganda, researchers often use the term “authenticity” to delineate between a post shared by an average human user and a post shared by a bot or by someone who is paid to post it. Impact, she said, appears to use “authentic” to refer to posts that seem like they came from real people or accurately reflect what those people think, even if they didn’t write the posts themselves.
“But when you conflate those two usages, it becomes dubious, because it’s suggesting that these are posts coming from real humans, when, in fact, it’s maybe getting posted by a real human, but it’s not written by a real human,” Lewis told me. “It’s written and generated by an AI system. The lines start to get really blurry, and that’s where I think ethical questions do come to the foreground. I think that it would be wise for anyone looking to work with them to maybe ask for expanded definitions around what they mean by ‘authentic’ here.”
[…]
The “Impact platform” has two sides. There’s an app for “supporters (participants),” and a separate app for “coordinators/campaigners/stakeholders/broadcasters (initiatives),” according to the overview document.
Supporters download the app and provide “onboarding data” which “is used by Impact’s AI to (1) Target and (2) Personalize the action requests” that are sent to them. Supporters connect to initiatives by entering a provided code, and these action requests are sent as push notifications, the document explains.
“Initiatives,” on the other hand, “have access to an advanced, AI-assisted dashboard for managing supporters and actions.”
[…]
“I think astroturfing is a great way of phrasing it, and brigading as well,” Lewis said. “It also shows it’s going to continue to siphon off who has the ability to use these types of tools by who is able to pay for them. The people with the ability to actually generate this seemingly organic content are ironically the people with the most money. So I can see the discourse shifting towards the people with the money to shift it in a specific direction.”

Source: AI-Powered Social Media Manipulation App Promises to ‘Shape Reality’

This is basically a tool which can really only be used for evil.

Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

 robin@edgarbv.com  https://www.edgarbv.com