## Bluesky Unveils Landmark Transparency Report Amidst Explosive Growth and Evolving Safety Challenges
Bluesky, the rapidly growing social media platform, has published its first comprehensive transparency report. The document details the work of its Trust & Safety team and offers insight into initiatives such as age-assurance compliance, the fight against influence operations, and the performance of automated content-labeling systems.
In a year of significant expansion, the decentralized social network, positioned as an alternative to giants like X and Threads, grew its user base by nearly 60% in 2025, from 25.9 million to 41.2 million users. That figure covers accounts hosted on Bluesky’s own infrastructure as well as those running their own servers on the AT Protocol.
The community was highly active, generating 1.41 billion posts in 2025, or 61% of all content ever shared on Bluesky. Media-rich posts also rose: 235 million posts contained visual elements, accounting for 62% of all media ever uploaded to the network.
Beyond user activity, the report also records a more than sixfold increase in legal requests from entities including law enforcement, government regulators, and legal representatives: the platform processed 1,470 such requests in 2025, up from 238 in 2024. While Bluesky had previously issued moderation updates in 2023 and 2024, this latest report marks a shift toward a broader, more exhaustive overview, encompassing regulatory compliance and account verification alongside traditional moderation metrics.
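A quick back-of-envelope check of the growth figures above. The values are copied from the report as cited in this article; the computation and variable names are mine:

```python
# Figures cited in the report (see the text above).
users_2024, users_2025 = 25_900_000, 41_200_000
legal_2024, legal_2025 = 238, 1_470

# Percentage growth in users and the year-over-year multiple for legal requests.
user_growth = (users_2025 - users_2024) / users_2024 * 100
legal_multiple = legal_2025 / legal_2024

print(f"User growth: {user_growth:.1f}%")        # ≈ 59.1%, i.e. "nearly 60%"
print(f"Legal requests: {legal_multiple:.1f}x")  # ≈ 6.2x year over year
```

Note that 1,470 against a 2024 baseline of 238 works out to more than a sixfold multiple, not five.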
---
### The Surge in Moderation Reports: A Growing Community’s Challenges
User-submitted moderation reports continued to climb. After a 17-fold jump in 2024, reports rose a further 54% in 2025, from 6.48 million to 9.97 million. Crucially, Bluesky notes that this growth closely tracks its 57% user expansion over the same period, suggesting that report volume is largely proportional to the platform’s overall growth rather than a sign of more problematic content per user.
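The proportionality claim is easy to sanity-check by normalizing report volume per user. A minimal sketch, using only the totals cited above (the computation is mine, not the report’s):

```python
# Totals cited in the report for each year.
reports = {2024: 6_480_000, 2025: 9_970_000}
users   = {2024: 25_900_000, 2025: 41_200_000}

# Reports per user: if this ratio holds steady, report growth is
# proportional to user growth rather than accelerating per capita.
per_user = {year: reports[year] / users[year] for year in reports}

print(per_user)  # ~0.25 reports/user in 2024 vs ~0.24 in 2025
```

The per-user rate actually dipped slightly, which is consistent with Bluesky’s reading of the numbers.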
#### Decoding User-Generated Flags
Approximately 3% of Bluesky’s user base, translating to 1.24 million individuals, actively contributed to content moderation by submitting reports in 2025. The data reveals key areas of concern:
* **Misleading Content and Spam:** This category led the pack, constituting 43.73% of all reports. Within this, spam alone accounted for a massive 2.49 million reports, indicating a persistent challenge for the platform.
* **Harassment:** Making up 19.93% of reports, harassment encompasses a range of behaviors. Hate speech was the most frequently flagged offense with around 55,400 reports, followed by targeted harassment (approximately 42,520 reports), trolling (29,500 reports), and doxxing (about 3,170 reports). Bluesky emphasizes that a significant portion of “harassment” reports fell into a “gray area” of anti-social behavior, such as rude remarks, which didn’t strictly meet the criteria for more severe classifications like hate speech.
* **Sexual Content:** This category comprised 13.54% of reports, with the vast majority (1.52 million) pertaining to mislabeled adult content. This highlights the importance of proper metadata tagging, which empowers users to customize their content filters. More severe issues within this category included non-consensual intimate imagery (around 7,520 reports), abuse content (approximately 6,120), and deepfakes (over 2,000 reports).
* **Violence and Other Categories:** Reports concerning violence totaled 24,670, broken down into sub-categories like threats or incitement (about 10,170 reports), glorification of violence (6,630 reports), and extremist content (3,230 reports). A catch-all “other” category captured 22.14% of reports that didn’t fit into these or other smaller categories like child safety, site rule violations, or self-harm.
---
### Proactive Measures and Promising Trends
Beyond the crucial input from its users, Bluesky’s automated systems also played a significant role, flagging 2.54 million potential violations independently. The report highlights several areas where strategic interventions have yielded positive results.
#### Automated Defenses and Strategic Interventions
One notable success involved a 79% reduction in daily reports of anti-social behavior. This dramatic decline followed the implementation of a system designed to identify and de-emphasize toxic replies by placing them behind an additional click, a mechanism similar to features found on other major social platforms. Furthermore, the platform saw a consistent month-over-month decrease in user reports, with reports per 1,000 monthly active users falling by an impressive 50.9% between January and December 2025.
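The "reports per 1,000 monthly active users" figure is a normalized metric, not a raw count. A hypothetical illustration of how it is computed; the monthly inputs below are made up for the example, and only the resulting ~50.9% decline mirrors the shape of the reported trend:

```python
def reports_per_1k(report_count: int, mau: int) -> float:
    """Raw report volume normalized per 1,000 monthly active users."""
    return report_count / (mau / 1_000)

# Hypothetical January and December figures (not from the report).
jan = reports_per_1k(900_000, 30_000_000)
dec = reports_per_1k(607_000, 41_200_000)

# Percentage decline in the normalized rate across the year.
decline = (jan - dec) / jan * 100
print(f"{decline:.1f}% drop in reports per 1k MAU")
```

Normalizing this way separates genuine improvement from the dilution effect of a growing user base: raw report counts can rise even while each user sees and flags less problematic content.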
#### Combating Influence Operations and Legal Demands
In a clear demonstration of its aggressive stance against malicious actors, Bluesky successfully identified and removed 3,619 accounts suspected of engaging in influence operations, with the majority believed to originate from Russia. This robust enforcement reflects the company’s stated intent to take a firmer hand in moderation, a commitment evident throughout the report.
The more than sixfold jump in legal requests further underscores the growing scrutiny and operational complexity Bluesky faces as it continues to expand. This report serves as a vital benchmark, reflecting Bluesky’s evolving strategies to balance rapid growth with a steadfast dedication to platform integrity and user safety.

## Bluesky’s Escalating Moderation Efforts: A Deep Dive into 2025’s Transparency Report
Maintaining a healthy and safe online environment is a monumental task for any social platform, and Bluesky’s 2025 transparency report offers a detailed look at its efforts. The data reveals a significant scaling up of moderation actions, from sweeping content removals to strategic labeling, as Bluesky refines its approach to digital governance.
### Scaling Up Moderation: A Closer Look at Bluesky’s 2025 Efforts
The sheer volume of moderation activity in 2025 paints a clear picture: Bluesky is not shying away from decisive action. The platform took **2.44 million moderation actions**, encompassing both user accounts and individual pieces of content, a dramatic increase from the prior year that reflects a period of intense growth and corresponding challenges in content oversight.
#### The Big Picture: Millions of Moderation Actions
To put 2025’s numbers into perspective, the previous year saw the removal of 66,308 user accounts, **35,842 of which** were attributed to automated systems. This blend of human oversight and automation is central to Bluesky’s strategy.
#### Beyond Accounts: Record-Level Removals
Moderation isn’t solely focused on user profiles. The report indicates that human moderators addressed **6,334 individual content records**, while automated systems identified and removed an additional **282 problematic entries**. This multi-faceted approach targets both persistent offenders and individual instances of policy violations.
### Tackling Misconduct: Suspensions and Permanent Bans
Beyond content removals, Bluesky also employs a range of punitive measures to enforce compliance with its community guidelines.
#### Temporary Measures and Evasion Countermeasures
In 2025, the platform implemented **3,192 temporary account suspensions**, designed to provide a corrective measure for users engaging in minor infractions. More severely, Bluesky issued **14,659 permanent account removals** specifically targeting instances of ban evasion. This proactive stance against users attempting to circumvent prior penalties underscores the platform’s dedication to maintaining a fair and secure ecosystem. The primary drivers behind these permanent exclusions were behaviors such as **inauthentic activity, engagement in spam networks, and identity impersonation**.
### A Strategic Shift: Prioritizing Content Labeling
One of the most noteworthy trends highlighted in the 2025 report is Bluesky’s evolving philosophy towards content management. While account removals remain a critical tool, the platform appears to be increasingly emphasizing content labeling as a preferred method for managing sensitive or policy-violating material.
#### Labeling vs. Takedowns: A Growing Trend
Bluesky’s data strongly suggests a strategic pivot toward content identification and flagging. In 2025, the platform applied **16.49 million labels to content**, a **200% year-over-year increase**. This surge far outpaced the growth in account removals, which, while still significant, rose 104%, from 1.02 million to 2.08 million, over the same period. The majority of labels were applied to content categorized as **adult, suggestive, or explicit nudity**, indicating a focused effort to manage potentially sensitive visuals on the platform.
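The two growth rates above can be checked, and the stated 200% label increase also implies a 2024 baseline the report does not give directly. A sketch of that back-calculation; the implied 2024 label count is my inference from the stated percentage, not a figure from the report:

```python
# Account removals as cited in the report.
removals_2024, removals_2025 = 1_020_000, 2_080_000

# Labels: only the 2025 total and the 200% YoY increase are stated.
labels_2025 = 16_490_000
label_growth_pct = 200

# A 200% increase means the 2025 total is 3x the 2024 baseline,
# so the implied 2024 volume is a back-calculation, not a reported figure.
labels_2024 = labels_2025 / (1 + label_growth_pct / 100)
removal_growth = (removals_2025 - removals_2024) / removals_2024 * 100

print(f"Implied 2024 labels: {labels_2024 / 1e6:.1f}M")  # ≈ 5.5M
print(f"Removal growth: {removal_growth:.0f}%")          # ≈ 104%
```

Labeling roughly tripled while removals roughly doubled, which is the asymmetry behind the "labeling over takedowns" trend the report describes.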
Bluesky’s 2025 transparency report illustrates a platform deeply invested in refining its moderation strategies. The substantial increase in both automated and human-led actions, coupled with a clear preference for content labeling, demonstrates a dynamic and evolving approach to fostering a safer and more positive online community. As the digital landscape continues to evolve, Bluesky’s commitment to transparency and robust moderation will undoubtedly remain a cornerstone of its operations.

