Remember the discourse surrounding Taylor Swift and The Life of a Showgirl? Remember how people said it was “trad wife” propaganda, full of MAGA — or even Nazi — dogwhistles? From using the word “savage” in a song to selling a necklace with lightning bolts, certain corners of the internet felt like they were on fire with discourse. There’s just one problem: It looks like a lot of that chatter was part of a targeted attack.
As first reported by Rolling Stone, the research group Gueda has published a new report examining 24,679 posts from 18,213 users across 14 platforms. They found that 28% of the conversation was driven by just 3.77% of accounts. Notably, that small percentage of users “exhibited non-typical behavior amplifiers,” i.e., they behaved more like bots than real users.
That group was heavily involved in pushing content suggesting that Taylor had become more MAGA- or Nazi-aligned, which would then trigger organic conversations among real people. Between October 13 and 14, Gueda estimates that “73.9% of the day’s narrative” was “conspiracy posts.” This was the same window in which Taylor’s merch store released a necklace with lightning bolts on it to celebrate the song “Opalite.”
“The false narrative that Taylor Swift was using Nazi symbolism did not remain confined to fringe conspiratorial spaces; it successfully pulled typical users into comparisons between Swift and Kanye West,” the report reads. “This demonstrates how a strategically seeded falsehood can convert into widespread authentic discourse, reshaping public perception even when most users do not believe the originating claim.”
The report suggests that these left-coded “critiques” were actually part of a coordinated attack. It’s not clear, however, who might be behind it. The report did note some overlap between accounts pushing the Swift ‘Nazi’ narrative and those active in a separate astroturf campaign attacking Blake Lively.
Gueda founder and CEO Keith Presley told BuzzFeed, “While tracking the online activity around the article, I noticed something we didn’t directly address in the research: how Swifties can engage without unintentionally amplifying narratives pushed by inauthentic accounts. The key is to avoid feeding the algorithm. Limit interactions with illicit content by following a simple rule: observe but don’t interact, counter but don’t reply, redirect but don’t tag. This approach allows you to manage harmful narratives strategically without boosting them in the algorithmic ecosystem.”
The report then generated its own online discourse among fans, though some insisted the conversation was still genuine.
BuzzFeed has reached out to a representative for Taylor for comment. You can read the full report here.