Social media platforms play a central role in the circulation of information in contemporary societies. Their scale, speed, and reliance on user interaction influence how content is produced, prioritised, and received. The terms below describe mechanisms, behaviours, and practices commonly associated with the spread of misleading or false information in online environments.
Algorithm
An algorithm is the set of rules a platform uses to rank, recommend, and surface content. In misinformation contexts, algorithms often privilege engagement over accuracy, elevating emotionally charged or divisive material. This can give misleading claims disproportionate reach, reinforce echo chambers, and allow false narratives to spread widely without requiring coordination or intent from individual users.
Social media algorithms are cited as driving the viral spread of misinformation online by prioritising engagement over accuracy, prompting UK lawmakers to urge stronger action to curb algorithmic amplification of false content - UK Government
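The engagement-over-accuracy dynamic is easiest to see in miniature. The Python sketch below is a toy illustration only: the Post fields, the weights, and the scoring formula are assumptions made for demonstration, not any platform's actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    accuracy: float  # hypothetical fact-check score, 0.0-1.0

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted most heavily because they push
    # content to new audiences; accuracy plays no role in the score.
    return post.likes + 3 * post.shares + 2 * post.comments

posts = [
    Post("Measured policy analysis", likes=120, shares=5, comments=10, accuracy=0.9),
    Post("Outrageous false claim!!", likes=80, shares=90, comments=200, accuracy=0.1),
]

# Ranking purely by engagement puts the false-but-divisive post first.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):>5.0f}  {p.text}")
```

Because the score never consults accuracy, the divisive post outranks the accurate one by construction, which is the structural bias the entry describes.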
Amplification
Amplification refers to the artificial or strategic boosting of content to increase its visibility and perceived importance. In misinformation campaigns, amplification can involve repeated posting, coordinated sharing, or leveraging high-reach accounts to make fringe claims appear mainstream. The effect relies less on persuasion than on repetition, exploiting familiarity and social proof to shape perception.
Once the inaccurate or false information is identified, decide whether it is worth debunking publicly. That depends on where the information sits on the Trumpet of Amplification, which shows the path disinformation often takes online - BBC
Astroturfing
Astroturfing is the practice of disguising coordinated or sponsored messaging as spontaneous grassroots activity. In social media environments, it involves fake accounts, scripted talking points, or paid participants posing as ordinary users. The aim is to create the illusion of broad public support or outrage, distorting perceptions of consensus and manipulating debate.
Coverage of climate and government inquiries notes “astroturfing” campaigns where organised submissions masquerade as grassroots opposition to science and policy, undermining genuine public debate - Renew Economy
Bot
A bot is an automated account programmed to post, like, share, or follow content at scale. While some bots perform benign functions, misinformation-focused bots are used to flood platforms with specific narratives, boost engagement metrics, or harass opponents. Their speed and volume can overwhelm organic discussion and give false impressions of popularity or urgency.
Analyses of social media bots outline their role in automating and amplifying misinformation, with bots mimicking human activity to spread controversial or false content on health and politics - Oxford Academic
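Rather than sketching a bot itself, the example below illustrates the detection side: a minimal rate-based heuristic of the kind the bot literature describes, flagging accounts that post at inhuman speed. The window length and threshold are illustrative assumptions, not a production detector.

```python
from datetime import datetime, timedelta

def looks_automated(post_times: list[datetime],
                    window: timedelta = timedelta(hours=1),
                    max_posts: int = 60) -> bool:
    """Flag an account that ever posts more than `max_posts` times
    inside any `window`-long span (threshold is an assumption)."""
    times = sorted(post_times)
    start = 0
    for end, t in enumerate(times):
        # Slide the window start forward until it spans at most `window`.
        while t - times[start] > window:
            start += 1
        if end - start + 1 > max_posts:
            return True
    return False
```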
Botnet
A botnet is a network of coordinated bots controlled by a single operator or organisation. In misinformation campaigns, botnets are used to amplify narratives simultaneously across platforms, trending topics, or comment sections. Their coordinated timing and scale are designed to manufacture consensus, crowd out dissent, and exploit platform algorithms sensitive to rapid engagement spikes.
A Kremlin-linked botnet is reported distributing fake articles defending political decisions, illustrating coordinated automated networks spreading misinformation to shape public opinion - The Insider
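The coordinated-timing signal can also be made concrete. This sketch buckets posts into short time windows and flags URLs shared by many distinct accounts almost simultaneously; the bucket size and account threshold are assumptions chosen for illustration.

```python
from collections import defaultdict

def coordinated_bursts(posts: list[tuple[str, str, float]],
                       bucket_seconds: int = 10,
                       min_accounts: int = 20) -> list[tuple[str, int]]:
    """posts: (account_id, url, unix_timestamp) tuples.
    Return (url, bucket) pairs where at least `min_accounts` distinct
    accounts shared the same URL within one time bucket."""
    buckets: dict[tuple[str, int], set[str]] = defaultdict(set)
    for account, url, ts in posts:
        buckets[(url, int(ts) // bucket_seconds)].add(account)
    return [(url, b) for (url, b), accounts in buckets.items()
            if len(accounts) >= min_accounts]
```

Organic sharing tends to spread out over minutes and hours; a botnet's near-lockstep timing concentrates into a handful of buckets, which is exactly the engagement spike the entry says platform algorithms are sensitive to.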
Brigading
Brigading occurs when groups of users coordinate to target a person, post, or platform space, often arriving en masse from another community. In misinformation contexts, brigading can be used to drown out corrective information, intimidate critics, or manipulate visibility through mass reporting or engagement. The tactic relies on numbers rather than argument.
Platforms are cracking down on brigading networks, where coordinated groups mass-harass and manipulate visibility of content, including false or misleading posts - Informa
Catfishing
Catfishing involves creating false online identities to deceive others about who is speaking. Within misinformation ecosystems, catfished personas may pose as experts, locals, or sympathetic community members to lend credibility to false claims. By exploiting trust and social cues, these fabricated identities can influence discussion and lower scepticism toward misleading narratives.
Investigations highlight how disinformation infrastructure has been repurposed to facilitate catfishing scams, blending identity deception with wider online manipulation tactics - FactCheckHub
Clickfarm
A clickfarm is a paid operation where individuals are hired to generate likes, shares, comments, or views at scale. In misinformation campaigns, clickfarms inflate engagement metrics to signal popularity and legitimacy. This artificial activity can mislead algorithms and users alike, pushing false or manipulative content into wider circulation.
Reporting on historic and current online manipulation shows clickfarms - large setups of mobile devices generating fake engagement - being used to artificially boost the visibility and credibility of content, often linked to misinformation spread - Prospect
Coordinated Inauthentic Behaviour
Coordinated inauthentic behaviour describes organised efforts to manipulate online discourse using deceptive accounts or tactics while concealing coordination. This includes networks of fake or repurposed accounts posting in sync. The goal is to influence political or social debate by fabricating grassroots momentum and masking the true source of narratives.
Platforms reported removing networks engaged in coordinated inauthentic behaviour — fake or duplicated accounts used to spread political mis- and disinformation linked to Russian, Iranian, and Chinese influence campaigns ahead of major elections. This tactic hides real sources while amplifying misleading narratives - TaylorWessing
Dogpiling
Dogpiling refers to the rapid accumulation of hostile responses against a single user or viewpoint. In misinformation dynamics, dogpiling discourages correction by raising the social cost of dissent. Targets may withdraw or self-censor, allowing false claims to circulate unchallenged while conformity and silence are rewarded within the group.
Social media platforms are introducing anti-toxicity features to limit “dogpiling,” where groups collectively attack or overwhelm users - a form of coordinated harassment that often accompanies misinformation debates and deters corrective voices - The Verge
Doxing
Doxing is the publication of private or identifying information about an individual without consent. In misinformation campaigns, it is used to intimidate journalists, researchers, or critics who challenge false narratives. The threat of exposure and the fear of personal risk suppress correction and debate.
Online harassment, including doxing - publishing private information about critics - has been widely reported as part of harassment campaigns that accompany misinformation and smear attacks against journalists, experts, and public figures - Media Defence
Echo Chamber
An echo chamber is an information environment in which users are primarily exposed to views that mirror their own. Platform algorithms and self-selection reinforce this insulation, limiting exposure to challenge. In misinformation contexts, echo chambers allow false claims to circulate unchecked, with repetition and social reinforcement substituting for evidence or verification.
Parliamentary inquiries have highlighted how foreign bots and influence operations exploit echo chambers - closed networks of like-minded users - to spread divisive misinformation and widen social divides online - UK Government
Engagement Bait/Engagement Farming/Weaponised Virality
Engagement bait refers to content designed to provoke reactions - likes, shares, comments - rather than to inform. In misinformation ecosystems, this often takes the form of outrage, fear, or sensationalism. By exploiting emotional triggers, engagement bait boosts visibility through algorithms, allowing misleading or false claims to spread widely regardless of accuracy. Engagement farming is the systematic production of such content to build reach or revenue, while weaponised virality describes the deliberate exploitation of these dynamics to push a chosen narrative to mass audiences.
AI is accelerating the production of engagement bait on social media by enabling emotionally charged, misleading, or sensational content to be generated at scale. Algorithmic systems then amplify this material based on clicks and reactions rather than accuracy, increasing the reach and persistence of disinformation across platforms - Kosovo Two Point Zero
Filter Bubble
A filter bubble arises when algorithms personalise content based on past behaviour, narrowing the range of information a user sees. Over time, this can exclude corrective or contradictory material. In misinformation dynamics, filter bubbles reinforce existing beliefs and make false narratives appear more credible by limiting exposure to alternative perspectives.
Researchers warn that filter bubbles - personalised content feeds tailored to user behaviour - isolate users from diverse viewpoints, making them more susceptible to misinformation that aligns with their pre-existing preferences - Taylor & Francis
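The feedback loop behind a filter bubble can be shown with a toy simulation: the recommender favours topics the user has engaged with, and each engagement sharpens that preference until one topic dominates the feed. The topics, weights, and step count below are all illustrative assumptions.

```python
import random

random.seed(1)
topics = ["politics-left", "politics-right", "science", "sport"]
weights = {t: 1.0 for t in topics}   # start with no personalisation

for step in range(500):
    # Recommend in proportion to the learned preference weights.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # Assume the user engages with whatever is shown; the system
    # reinforces that topic, narrowing future recommendations.
    weights[shown] += 0.5

total = sum(weights.values())
for t in topics:
    print(f"{t:15s} {weights[t] / total:.0%} of future feed")
```

Early chance engagements compound: whichever topic wins a few initial recommendations attracts ever more of them, the rich-get-richer narrowing the entry describes.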
Flooding
Flooding involves overwhelming an information space with large volumes of content to crowd out other voices. In misinformation campaigns, flooding can bury corrections, confuse audiences, and exhaust moderators. The tactic relies on quantity rather than quality, reducing clarity and making it harder for users to identify reliable information.
Researchers argue that the abundance of online information can still advantage those in power. While sensitive material is often technically accessible, organized actors shape what citizens consume by raising the costs of finding it. Through information flooding and friction, they overwhelm users with noise or make content harder to access, dampening political impact without relying primarily on fear - Margaret Roberts
Hashtag Hijacking
Hashtag hijacking is the deliberate use of popular or unrelated hashtags to insert misleading narratives into broader conversations. In misinformation contexts, this tactic exploits trending topics to reach new audiences and distort discussion. It allows coordinated actors to piggyback on visibility generated by legitimate events or movements.
Hashtag hijacking is a form of cyber content attack that arises in modern social media microblogging platforms. It occurs when a hashtag is used for a different purpose than the one originally intended, such as tagging messages with undesirable content and surfacing this content to a target audience - HS Talks
Influence Operation
An influence operation is a coordinated effort to shape perceptions, attitudes, or behaviour through information manipulation. On social media, this may involve fake accounts, selective amplification, and narrative framing. Influence operations are often long-term, seeking to normalise certain views rather than persuade through single messages.
States - such as Russia and China - have taken to Facebook, Twitter and YouTube to create and amplify conspiratorial content designed to undermine trust in health officials and government administrators, which could ultimately worsen the impact of the [COVID-19] virus in Western societies - Centre for International Governance Innovation
Inorganic Reach
Inorganic reach refers to visibility generated through artificial means rather than genuine user interest. This includes bots, paid engagement, or coordinated sharing. In misinformation environments, inorganic reach is used to inflate the apparent popularity of claims, misleading users and algorithms into treating fringe narratives as widely supported.
Alongside [its] positive aspects, the rise of social media has also brought misinformation, which can spread like wildfire across these platforms. … We found varying levels of inorganic activity across the channels, with some of those exhibiting high levels being suspended by YouTube - International Conference on the Web and Social Media
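One crude, illustrative way to surface inorganic reach is to compare how far a post travelled against the poster's organic audience. The ratio and the example figures below are assumptions for demonstration, not an established platform metric.

```python
def reach_ratio(impressions: int, followers: int) -> float:
    """Impressions per follower; unusually high values can hint at
    reach that was bought or coordinated rather than earned."""
    return impressions / max(followers, 1)

# An account with 200 followers whose post gains 500,000 impressions
# has a ratio of 2,500 - a pattern worth closer inspection.
print(reach_ratio(500_000, 200))
```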
Information Laundering
Information laundering is the process by which false or dubious claims are moved through successive layers of sources to gain credibility. A narrative may begin on fringe platforms, be echoed by influencers, and eventually appear in mainstream discourse stripped of its origins. Each step obscures accountability and increases legitimacy.
Just as ill-gotten money needs to be moved from an illegitimate source into an established financial institution, disinformation is most powerful when a façade of legitimacy is created through “information laundering” - German Marshall Fund
Keyboard Warrior
A keyboard warrior is an individual who aggressively engages in online conflict from a position of anonymity or distance. In misinformation dynamics, keyboard warriors often amplify false narratives, harass critics, and escalate polarisation. Their behaviour prioritises confrontation and visibility over accuracy or constructive debate.
Today, a faceless stranger can ruin your life in 280 characters - and never face a single consequence. The rise of the internet troll and the keyboard warrior has redefined how reputations are made, and unmade. But what they peddle isn’t justice - it’s destruction - Nation.Cymru
Meme Warfare
Meme warfare uses images, slogans, and humour to convey political or social messages rapidly. In misinformation contexts, memes simplify complex issues into emotionally charged cues that spread easily. Their ambiguity and humour make false or misleading claims harder to challenge, allowing narratives to circulate without scrutiny.
I believe memetic warfare could be effective in countering Daesh’s recruiting and propaganda efforts and in modern conflict in general, including operations other than war. Trolling, it might be said, is the social media equivalent of guerrilla warfare, and memes are its currency of propaganda. Daesh is conducting memetic warfare. The Kremlin is doing it. It’s inexpensive. The capabilities exist. Why aren’t we trying it? - Jeff Giesea
Narrative Capture
Narrative capture occurs when a particular framing of events becomes dominant, narrowing how an issue can be discussed. In misinformation environments, this often results from sustained repetition and selective amplification rather than evidence. Once captured, alternative interpretations are marginalised, and new information is filtered to fit the prevailing storyline rather than reassessed on its merits.
News is increasingly held hostage by those intent on narrative capture - the altering of a narrative in pursuit of a particular end, fundamentally changing the way information is consumed, processed and interpreted. As facts become a grey area, opinion becomes assumed fact, and rumour passed off as news, how does reporting need to change to stay relevant but impartial? - BBC
Online Disinhibition Effect
The online disinhibition effect describes how anonymity, distance, and lack of immediate consequences reduce self-restraint in digital spaces. In misinformation contexts, this can encourage impulsive sharing, harassment, and the spread of unverified claims. Reduced social cues weaken accountability, making extreme or misleading statements more likely to circulate.
While online, some people self-disclose or act out more frequently or intensely than they would in person… Rather than thinking of disinhibition as the revealing of an underlying "true self," we can conceptualize it as a shift to a constellation within self-structure, involving clusters of affect and cognition that differ from the in-person constellation - John Suler
Polarisation
Polarisation refers to the widening divide between opposing groups, with fewer shared reference points. Social media accelerates this by rewarding identity-driven content and conflict. In misinformation dynamics, polarisation increases receptivity to false claims that affirm group identity, while opposing information is dismissed as biased or malicious.
Digital spaces have also emerged as a fertile ground for fake news campaigns and hate speech, aggravating polarization and posing a threat to societal harmony. Despite the fact that this dark side is acknowledged in the literature, the complexity of polarization as a phenomenon coupled with the socio-technical nature of fake news necessitates a novel approach to unravel its intricacies - Vasist et al.
Rage Bait
Rage bait is content crafted to provoke anger or moral outrage in order to drive engagement. In misinformation ecosystems, it exploits emotional reactions to boost visibility, often at the expense of accuracy. Anger increases sharing and reduces scrutiny, allowing misleading narratives to spread rapidly.
An investigation from BBC social media investigations correspondent Marianna Spring found some users on X were being paid "thousands of dollars" by the social media site, for sharing content including misinformation, AI-generated images and unfounded conspiracy theories - BBC
Shadow Ban
A shadow ban is the covert reduction of a user’s content visibility without explicit notification. In misinformation debates, claims of shadow banning are often used to discredit moderation and reinforce persecution narratives. The concept itself can become a tool of information disorder by undermining trust in platforms and accountability mechanisms.
An investigation by The Markup found that the platform demoted images of the Israel–Hamas war, deleted captions without warning, and denied users the option to appeal - The Markup
Signal Boosting
Signal boosting involves deliberately amplifying a message to increase its reach. While it can support legitimate causes, in misinformation contexts it is used to elevate misleading narratives through coordinated sharing or high-profile accounts. The practice relies on visibility and repetition rather than validation of accuracy.
Artificial amplification (aka "signal boosting") of media content is a cause for concern because it can make it relatively easy to manipulate mass opinion, which in turn can have disastrous effects on the stability of democratic systems of governance - Monmouth University
Sockpuppet
A sockpuppet is a fake or secondary account controlled by a single individual to create the illusion of multiple voices. In misinformation campaigns, sockpuppets are used to agree with each other, harass critics, or seed narratives. This manufactured consensus distorts perceptions of support and debate.
Sockpuppets are usually created in large numbers by a single controlling person or group. They are typically used for block evasion, creating false majority opinions, vote stacking, and other similar pursuits. Sockpuppets are as old as the internet - Gen Digital
Swarming
Swarming refers to the rapid convergence of many accounts on a target or topic. In misinformation environments, swarming overwhelms critics, derails discussion, and suppresses correction through volume and intimidation. The tactic substitutes numerical pressure for argument or evidence.
A 2023 EU report revealed over 750 coordinated disinformation campaigns in Europe, targeting media outlets (e.g., Euronews, Reuters), and public trust during key elections in Ukraine, Spain, and Poland. Tactics included fake celebrity voices/images and Telegram‑based “swarming” manipulations - The Recursive
Troll/Troll Farm
A troll is an individual who deliberately provokes, disrupts, or derails online discussion. In misinformation contexts, trolls may spread false claims, bait opponents, or sow confusion. Their objective is often disruption rather than persuasion, exploiting emotional reactions to fracture discourse.
A troll farm is an organised group of individuals paid or directed to engage in coordinated online manipulation. In misinformation campaigns, troll farms create and amplify narratives at scale, harass opponents, and simulate public opinion. Their structured activity blurs the line between organic debate and organised influence.
UK-funded expert research has exposed how the Kremlin is using a troll factory to spread lies on social media and in comment sections of popular websites. The cyber soldiers are ruthlessly targeting politicians and audiences across a number of countries including the UK, South Africa and India - UK Government
User-Generated Misinformation
User-generated misinformation arises when ordinary users share false or misleading content without intent to deceive. It spreads through misunderstanding, haste, or trust in unreliable sources. While less coordinated than influence operations, its scale makes it a major driver of information disorder.
Platforms face multiple challenges and risks around user-generated misinformation. Moderating it is a major challenge, primarily because of the quantity of data in question and the speed at which it is generated, and it exposes platforms to legal liabilities, operational costs, and reputational risks - Cyber Peace
Viral
Viral describes content that spreads rapidly across networks through high levels of sharing. In misinformation dynamics, virality is often driven by emotion rather than accuracy. Once viral, false claims become difficult to correct, as reach outpaces verification and repetition reinforces belief.
Several unique features of social media encourage viral content with low oversight. Rapid publication and peer-to-peer sharing allow ordinary users to distribute information quickly to large audiences, so misinformation can be policed only after the fact (if at all) - American Psychological Association
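Virality is often approximated as a branching process: if each share triggers on average R further shares, cumulative reach grows geometrically while R exceeds 1, which is why corrections that start even a few sharing generations late rarely catch up. The figures in this sketch are illustrative assumptions.

```python
def cumulative_shares(r: float, generations: int) -> float:
    """Total shares after `generations` rounds, starting from one post,
    where each share triggers `r` further shares on average."""
    total, current = 1.0, 1.0
    for _ in range(generations):
        current *= r
        total += current
    return total

# With R = 2, ten sharing generations reach roughly 2,000 accounts;
# fifteen reach roughly 65,000.
for gen in (5, 10, 15):
    print(gen, round(cumulative_shares(2.0, gen)))
```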
VLOPs
Very Large Online Platforms (VLOPs) are digital services with massive user bases and systemic influence over information flows. Their scale means design choices, algorithms, and moderation policies have outsized effects on misinformation. Failures or biases at this level can amplify information disorder across societies.
Under the EU’s Digital Services Act, very large online platforms (VLOPs) or very large online search engines (VLOSEs) are required to conduct risk assessments and audits to identify and mitigate systemic risks on their platforms, including the spread of disinformation - Tech Policy

