Two years since its inception, the social network Bluesky is embarking on a significant revision of its Community Guidelines and related policies. The platform, which competes with social media giants like X and Threads as well as decentralized networks such as Mastodon, is actively seeking user feedback on the proposed changes. The updates aim to improve clarity and spell out user safety protocols and the appeals process in more detail.
Many of the adjustments to Bluesky's policies are driven by emerging global regulations, including the U.K.'s Online Safety Act (OSA), the EU's Digital Services Act (DSA), and the U.S. TAKE IT DOWN Act. These laws demand a more structured approach to online safety, and Bluesky has refined its policies to align with them. Notably, the platform has also updated its Terms of Service to comply with these laws; the updated terms now include provisions for age verification for users in certain regions.
For instance, the U.K.'s OSA requires platforms hosting adult content to implement age verification measures. As a result, Bluesky users in the U.K. will need to verify their age by scanning their face, uploading identification, or providing credit card details before they can access adult content on the platform.
Bluesky has also made the complaints and appeals process more comprehensive. One significant update is the introduction of an informal dispute resolution process: Bluesky commits to discussing disputes with users by phone before either side moves to formal proceedings. The company notes that "most disputes can be resolved informally," a sharp contrast to the often opaque procedures of larger networks like Facebook and Instagram, where users frequently face bans without clear explanations.
Moreover, Bluesky is allowing users to pursue certain claims of harm in court rather than through traditional arbitration. This is a notable shift, as many tech companies typically require arbitration precisely to keep disputes out of the court system.
The guidelines encompass a range of common-sense policies aimed at preventing violence and harm, including prohibitions on promoting self-harm, on animal abuse, and on sharing illegal or sexually exploitative content. Notably, the guidelines also carve out exceptions for journalism, parody, and satire, allowing journalists, for example, to report factually on sensitive topics such as violence and mental health.
Despite these efforts, Bluesky faces challenges in defining the nuances of terms like “threat,” “harm,” and “abuse.” The policy explicitly urges users to respect others by refraining from hate speech, harassment, or bullying. The guidelines specifically ban content that incites discrimination or hatred based on protected traits, including race, gender identity, and sexual orientation. This area has previously posed challenges for Bluesky, particularly in its interactions with marginalized communities.
Some critics have also charged that Bluesky's community leans left, prompting backlash over its perceived humorlessness and lack of diversity of thought. The original vision for Bluesky emphasized giving users tools to cultivate the community they want, including blocking, reporting, and moderation options tailored to individual values. Even so, many users still prefer that the platform take a more active role in moderation.
As Bluesky moves forward with its policy revisions and gathers user feedback, it is signaling a commitment to a safer and more respectful online community. By aligning its policies with global regulations and prioritizing user input, the platform aims to build a social network that encourages positive interaction among its users.