
OPINION: The Invisible Threat: Why Inorganic Activity on Social Media Is a Growing Risk for Public Companies and Their Investors

In November 2022, a fake tweet claiming Eli Lilly would begin giving away insulin for free went viral on Twitter. The tweet wasn’t from the company’s official account, but it was convincing enough, and amplified quickly enough, that it sent Eli Lilly’s stock tumbling and triggered chaos across the pharmaceutical sector. Much of that virality was not organic; it was driven, at least in part, by inorganic users and coordinated inauthentic behavior – accounts that don’t represent real people but exert real influence.


Most investors and executives still view inorganic behavior on social media as a fringe concern – something more aligned with politics or spam than with corporate performance. That mindset is outdated, and potentially dangerous. Social media manipulation is becoming a systemic financial risk, yet it remains virtually invisible on boardroom agendas and investor calls.


Inorganic behavior is no longer just about inflating follower counts or selling fake sunglasses. It is now used to manufacture outrage, amplify false narratives, and apply reputational pressure at moments of maximum vulnerability. Combined with AI-generated content, these tactics become a high-speed tool of disruption – one that can cost companies millions in minutes.


Take another example: in May 2023, a fake AI-generated image of an explosion near the Pentagon circulated online. Inorganic accounts helped push the image to virality before it was debunked, causing a brief but significant dip in U.S. equity markets – triggered by nothing more than a well-crafted visual and coordinated amplification.


More recently, a boycott campaign against Canadian grocery giant Loblaws gained traction online, with nearly 20% of the tweets promoting the boycott traced back to fake accounts. What looked like a grassroots uprising was in part artificially inflated, yet the reputational damage was real.


So, why aren’t more companies paying attention?


One reason is that most communications and social media teams are optimized for reach, engagement, and content production rather than adversarial analysis. They measure what people are saying, but not how those messages spread or who is spreading them. As a result, manufactured campaigns can take hold before anyone inside the company realizes the activity isn’t organic – and by then, the damage is done.
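
To make that distinction concrete, here is a minimal sketch of the kind of “who is spreading it” check most listening dashboards skip. It assumes post data is available as simple (author, text) pairs; the sample records and the top-five threshold are illustrative, not a real detection product.

```python
from collections import Counter

# Synthetic example: one narrative pushed heavily by a handful of accounts,
# plus a long tail of ordinary users mentioning the same company.
posts = [(f"acct_{i % 4:02d}", "Company X is hiding losses!") for i in range(80)]
posts += [(f"user_{i:03d}", "Anyone else seeing this about Company X?") for i in range(20)]

def amplification_concentration(posts, top_n=5):
    """Share of total volume produced by the top_n most active accounts.
    Organic conversation tends to be long-tailed; a few accounts driving
    most of the volume is a basic (though not conclusive) coordination signal.
    """
    counts = Counter(author for author, _ in posts)
    total = sum(counts.values())
    top = sum(n for _, n in counts.most_common(top_n))
    return top / total if total else 0.0

print(f"Top accounts produced {amplification_concentration(posts):.0%} of posts")
# Here a handful of accounts generate roughly 81% of the volume.
```

A sentiment dashboard would report 100 mentions of “Company X”; the concentration check shows that most of them come from four accounts. Real systems layer on many more signals (account age, posting cadence, follower overlap), but the shift in question – from “what is being said” to “who is saying it” – is the point.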


Another issue is that the risk of synthetic amplification falls through the cracks of corporate governance. Boards focus on regulatory compliance, cyber risk, and financial disclosures, but few include social amplification audits in their regular risk management reviews. Fewer still ask whether a sudden online crisis is being inflated by coordinated inauthentic activity.


This oversight is costly. False or manipulated narratives, when paired with synthetic amplification, can create the illusion of consensus. That synthetic consensus can sway journalists, investors, and policymakers, with real-world consequences ranging from stock price declines to customer churn to executive resignations.


What’s more, the tools to manipulate public opinion are more accessible than ever. AI makes it easy to generate realistic content such as screenshots, fake voices, and even synthetic personas. Inorganic user networks are available as a service, giving deceptive content scale and momentum. These tools are now used to pressure companies in highly targeted ways, often timed around earnings reports, layoffs, or controversial policy decisions.


For shareholders, this represents a new kind of market vulnerability. And for public companies, it’s a governance gap that needs closing.


What can companies do?


First, stop assuming that all social media engagement is organic. It isn’t. Companies should keep monitoring sentiment, but they should also examine the structure of online narratives to reveal who is amplifying what, and whether that amplification is coordinated or authentic.


Second, companies must invest in narrative risk monitoring tools that go beyond traditional brand listening: systems that detect coordinated amplification patterns, inorganic user networks, and synthetic engagement, ideally in real time. This kind of visibility helps leaders know whether they’re facing a genuine reputational crisis or an artificial one.
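
As a hedged illustration of what “coordinated amplification patterns” can mean in practice, the sketch below flags messages posted verbatim by many distinct accounts within a short window – classic copy-paste amplification. The record format, five-minute window, and three-account threshold are all assumptions made for the example; production systems tune these values and combine them with network- and account-level features.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_bursts(posts, window=timedelta(minutes=5), min_accounts=3):
    """Flag texts posted by >= min_accounts distinct authors inside `window`.
    posts: iterable of (author, timestamp, text) tuples.
    """
    by_text = defaultdict(list)
    for author, ts, text in posts:
        by_text[text.strip().lower()].append((ts, author))  # crude normalization

    flagged = []
    for text, items in by_text.items():
        items.sort(key=lambda p: p[0])
        left = 0
        for right in range(len(items)):
            # Shrink from the left until the span fits inside `window`.
            while items[right][0] - items[left][0] > window:
                left += 1
            authors = {a for _, a in items[left:right + 1]}
            if len(authors) >= min_accounts:
                flagged.append((text, sorted(authors)))
                break  # one qualifying burst is enough to flag this text
    return flagged

# Synthetic demo: three accounts push the same line within 90 seconds.
base = datetime(2024, 5, 1, 9, 30)
posts = [
    ("acct_01", base, "Boycott $XYZ now"),
    ("acct_02", base + timedelta(seconds=40), "Boycott $XYZ now"),
    ("acct_03", base + timedelta(seconds=90), "Boycott $XYZ now"),
    ("user_77", base + timedelta(hours=3), "Tried the new XYZ store, not bad"),
]
print(coordinated_bursts(posts))
# [('boycott $xyz now', ['acct_01', 'acct_02', 'acct_03'])]
```

On its own, a check like this catches only the crudest campaigns. The value lies in running such checks continuously, so that the first question in a sudden online flare-up – organic or manufactured? – can be answered with data rather than instinct.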


Third, leadership teams should conduct regular risk assessments around synthetic media and influence operations. Just as cybersecurity became a board-level concern over the last decade, information manipulation must now become top-of-mind. That includes tabletop exercises, escalation protocols, and scenario planning for when – not if – this kind of manipulation hits.


Finally, we need to reframe the conversation. This isn’t just a PR issue. It’s a financial one. The gap between perceived and actual sentiment can move markets. When a fake tweet can erase billions in market cap in minutes, the threat is no longer theoretical.


Reputation risk has entered a new era where the biggest voices aren’t necessarily human, and the loudest outrage may not even be real. For investors and executives alike, it’s time to bring synthetic amplification out of the shadows and into the boardroom. Because if companies want to protect their brands, their customers, and their shareholders, they can’t afford to fight ghosts blindfolded.


Keith Presley is the CEO and co-founder of GUDEA, a data intelligence company focused on identifying coordinated inauthentic behavior, synthetic amplification, and narrative manipulation across digital platforms. He brings more than a decade of experience at the intersection of technology, campaigns, and public discourse, including senior leadership roles in national and statewide political organizations and service in the U.S. Navy Reserve. Presley works with enterprises, investors, and public-sector leaders to understand how manufactured online activity distorts perception, influences markets, and creates real-world risk.
