What Are Social Bots and How Do They Manipulate Public Opinion?

Social media engagement metrics—such as likes, comments, shares, and trending topics—are often used as indicators of public opinion. However, a growing body of research suggests that a significant portion of online activity may be generated by automated accounts known as social bots. These bots can mimic human behavior, amplify specific narratives, and create the illusion of widespread public support. While not all bots are malicious, coordinated bot activity can distort online discussions, manipulate engagement metrics, and influence how information spreads across digital platforms. For businesses, this creates risks such as brand reputation manipulation, fake traffic generation, and declining trust in user-generated content. As bots become more sophisticated and capable of bypassing basic defenses like CAPTCHAs or IP blocking, behavior-based detection has become essential. By analyzing activity patterns, interaction speeds, and behavioral signals, organizations can identify automated traffic and protect the integrity of their platforms.
Mar 12, 2026

Today, many people rely on social media trends, comments, and engagement metrics to understand public opinion.

But an important question arises:

Are all the voices we see online actually human?

Recent research suggests that a significant portion of online activity may be generated by automated accounts known as social bots. These automated agents can simulate human behavior and influence online discussions at scale.

On modern social platforms, social bots can manipulate likes, comments, shares, and trending algorithms, making artificial engagement appear like genuine public support.

What Are Social Bots?

A social bot is an automated account that operates on social media platforms.

Academic research defines a social bot as:

💡 An automated account that performs content creation, distribution, and relationship formation on social media platforms.

These bots can automatically perform actions such as:

  • Posting content

  • Retweeting and sharing

  • Writing comments

  • Following users

  • Amplifying messages

Not all bots are malicious. Some organizations use bots for notifications, customer service, or information dissemination. However, malicious bots are designed to manipulate public perception.


How Social Bots Manipulate Public Opinion

1. Creating Fake Consensus

Social bots can post the same message repeatedly or artificially increase engagement around a particular opinion.

This creates the illusion that a large number of people support a specific viewpoint, even if that is not the case.

2. Amplifying Content

Research shows that bots are especially active during the early stages of information diffusion.

By rapidly sharing or retweeting content, bots can trick platform algorithms into identifying the content as popular.
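One simple way to quantify this front-loading is to measure what fraction of shares arrive within the first few minutes of a post going live. The sketch below is a hypothetical heuristic, not any platform's actual algorithm; the function name and the five-minute window are illustrative assumptions.

```python
def early_burst_ratio(share_times, post_time, window_seconds=300):
    """Fraction of shares that arrive within `window_seconds` of posting.

    An unusually front-loaded share curve (ratio close to 1.0) is one
    signal of coordinated amplification early in information diffusion.
    Hypothetical heuristic; the 300-second window is an assumption.
    """
    if not share_times:
        return 0.0
    early = sum(1 for t in share_times if t - post_time <= window_seconds)
    return early / len(share_times)
```

An organically popular post tends to accumulate shares over hours or days, so its early-burst ratio stays low; a coordinated bot campaign pushes it toward 1.0.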

3. Targeting Influential Users

Studies suggest that even a small number of bots can influence public opinion within social networks. Bots often target influential users, highly polarized communities, and trending hashtags; once humans begin interacting with the bot content, the message spreads organically.

Key Case Studies in Social Bot Opinion Manipulation

1. Team Jorge - An Organization Selling Social Bots for Opinion Manipulation

In 2023, an international investigation exposed an organization known as “Team Jorge,” which sold online influence operations as a commercial service. The group reportedly used proprietary software called Advanced Impact Media Solutions (AIMS) to run coordinated opinion-manipulation campaigns.

Through this system, they could create and manage over 30,000 fake social media accounts across platforms such as Twitter, Facebook, LinkedIn, and Telegram. The investigation revealed that the organization offered these services to influence political campaigns, elections, and even corporate competition.


2. 2025 South Korean Constitutional Court Case

Ahead of the impeachment ruling of Yoon Suk Yeol, allegations emerged that macro programs were used to mass-post messages on the public bulletin board of the Constitutional Court of Korea. In response, the cyber investigation unit of the Seoul Metropolitan Police Agency launched a preliminary investigation.

The controversy arose after around 270,000 posts opposing the impeachment were uploaded within a short period, raising suspicions that automated programs had been used. Authorities found that scripts enabling automatic post submissions were being shared within certain online communities, allowing users to register posts automatically with just a few clicks.


3. COVID-19 Disinformation Bots

During the pandemic, a large amount of misinformation related to vaccines spread widely online, and automated bots played a significant role in amplifying it. Studies found that these bots accelerated the spread of false information by promoting conspiracy-related content, repeatedly posting specific messages, and encouraging retweets from ordinary users.


Why Social Bots Matter for Businesses and Platforms

The issue of social bots is not limited to the political sphere. It also poses significant risks to businesses and digital platforms.

1. Brand Reputation Manipulation

Competitors or malicious actors can use automated bots to generate large volumes of negative reviews or comments, artificially shaping public perception of a brand.

2. Fake Traffic Generation

Automated accounts can create artificial website visits or generate fraudulent ad clicks, distorting traffic metrics and increasing operational or marketing costs.

3. Declining Consumer Trust

Users often rely on online reviews and comments as genuine customer opinions. As bot activity increases, the credibility of platforms and user-generated content can decline, ultimately eroding consumer trust.

Detecting and Preventing Social Bots

Social bots are designed to behave like real users, making them difficult to identify through written content or simple visual inspection. However, their activity patterns often differ clearly from those of genuine users. Modern social bots are also designed to bypass simple detection methods such as CAPTCHAs or basic IP blocking. As a result, a behavior-based detection approach has become essential for effectively identifying and blocking these automated accounts.

Key detection methods include the following:

1. Detection of Abnormal Activity Patterns

Social bots often generate interactions—such as page views or clicks—at speeds that are physically impossible for human users.
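As a sketch of this idea, the snippet below flags an account whose events exceed a plausible human rate inside a sliding time window. The class name and thresholds (20 events per 10 seconds) are illustrative assumptions, not values from any real detection product.

```python
from collections import deque

class RateAnomalyDetector:
    """Flag accounts whose event rate exceeds a plausible human ceiling.

    Illustrative sketch: the default limit of 20 events per 10-second
    window is an assumed threshold, not an industry standard.
    """
    def __init__(self, max_events=20, window_seconds=10.0):
        self.max_events = max_events
        self.window = window_seconds
        self.events = {}  # account_id -> deque of event timestamps

    def record(self, account_id, timestamp):
        """Record one event; return True if the account looks automated."""
        q = self.events.setdefault(account_id, deque())
        q.append(timestamp)
        # Drop events that have fallen outside the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_events
```

In practice such a detector would feed into a scoring pipeline rather than block on a single threshold, since bursts of legitimate activity (e.g. shared corporate IPs) can also trip naive rate limits.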

2. Content Pattern Analysis

Bots frequently repeat identical message templates in comments or posts, and their posting intervals tend to be unnaturally regular.
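Both signals, repeated templates and unnaturally regular timing, can be captured with simple statistics. The sketch below computes a duplicate ratio and the coefficient of variation of posting intervals; the function name and any cutoffs a reader might apply to its outputs are hypothetical.

```python
import statistics
from collections import Counter

def content_pattern_signals(posts):
    """Return (duplicate_ratio, interval_cv) for a post history.

    posts: list of (timestamp_seconds, text) tuples, at least two posts.
    A duplicate_ratio near 1.0 (one template repeated) and a low
    coefficient of variation (near-clockwork posting intervals) are
    both bot-like signals. Illustrative heuristic.
    """
    texts = [text for _, text in posts]
    duplicate_ratio = Counter(texts).most_common(1)[0][1] / len(texts)

    times = sorted(ts for ts, _ in posts)
    intervals = [b - a for a, b in zip(times, times[1:])]
    mean = statistics.mean(intervals)
    cv = statistics.pstdev(intervals) / mean if mean else 0.0
    return duplicate_ratio, cv
```

A human account posting varied text at irregular times scores low on the duplicate ratio and high on interval variation; a scripted account posting the same template every minute scores the opposite way.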

3. User Behavior Data Analysis

Unlike typical user behavior—such as natural mouse movements or scrolling patterns—bots leave behind mechanical, script-based behavioral traces.

Bot detection systems that analyze user behavioral data can identify these patterns and distinguish between genuine users and automated traffic.
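One concrete behavioral trace is the pointer path. A minimal sketch, assuming cursor coordinates are already being logged: scripted cursors tend to travel in near-perfect straight lines, so the ratio of direct distance to travelled distance sits close to 1.0, while human traces wander. The function name and the idea of thresholding this ratio are illustrative assumptions.

```python
import math

def path_linearity(points):
    """Straight-line distance divided by travelled distance for a
    pointer trace given as (x, y) tuples.

    A value near 1.0 means the cursor moved in an almost perfect line,
    which is typical of scripted automation; human traces usually score
    lower because of natural jitter and curvature. Illustrative metric.
    """
    if len(points) < 2:
        return 1.0
    travelled = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return direct / travelled if travelled else 1.0
```

Real behavioral-biometrics systems combine many such features (velocity curves, scroll cadence, touch pressure) rather than relying on a single ratio, but the principle is the same: script-driven input leaves measurably mechanical traces.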

As social media becomes a primary channel for public discourse, social bots pose a growing threat to digital trust.

Research shows that bots can significantly influence information flows, amplify narratives, and manipulate online perception.

For businesses and digital platforms, detecting and mitigating automated manipulation is no longer optional—it is essential for maintaining trust, fairness, and platform integrity.



STCLab Inc.