
#002: Unpacking The Community Notes Algorithm— It's More Fascinating Than You Think

Understanding the logic behind Community Notes, the implications of DOGE's reach into federal data, and a new smartphone that's tackling deepfakes head-on.


Read time: 4 minutes and 45 seconds

A screenshot of what community notes will look like on Meta.

COMMUNITY NOTES GOES LIVE ON META NEXT WEEK

Next week, Meta will officially roll out Community Notes, replacing the third-party fact-checking program it has run since 2016 (¹). More than 41% of the world's population uses at least one Meta app daily (²), so whether you love it, hate it, or don't care, it's a shift worth understanding.

Here’s everything we know about Community Notes so far:

  • Community Notes will be rolled out on Facebook, Instagram, and Threads for users in the U.S. starting Tuesday, March 18th.

  • Any U.S. user 18+ with an account older than six months, in good standing, and verified with two-factor authentication (2FA) can contribute.

  • 200,000+ U.S. users have already signed up to be contributors. Meta will “gradually and randomly” begin choosing people to be added from the waiting list.

  • Contributors can add notes to posts they believe need more context, and other contributors will vote on whether or not the note is helpful.

  • Meta will use the same system as X (formerly Twitter), which made its Community Notes algorithm open-source in 2023.

  • The algorithm is based on bridge-based ranking, meaning that if enough “diverse contributors” — people who usually disagree with each other — find the note helpful, the note will be added to the post (more on this below).

  • Notes are anonymous, must include a supporting link, and are capped at 5 submissions per day.

  • Notes are 500 characters max and are available in six languages.

  • Unlike Meta’s old fact-checking program, adding a Community Note to a post won’t reduce its reach.

  • Currently, contributors are not permitted to submit notes on advertisements.

How Notes Get Approved: Bridge-Based Ranking

Bridge-based ranking is designed to prevent groupthink by rewarding consensus across different perspectives rather than just majority rule(³).

The formula behind it:

Bridging Score = Σ (User Biasᵢ × Ratingᵢ)

The bridging score calculation measures how predictable a user’s ratings are. If someone consistently approves notes from only one political group, their ratings carry less weight. Each note earns a bridging score based on the mix of users who rated it:

  • A high bridging score means the note is seen as useful by people with diverse views → so the note is shown to the public.

  • A low bridging score means the note only appeals to one specific group → so it won’t be made visible to everyone.

  • A neutral bridging score means the note doesn’t strongly bridge or alienate perspectives → so it may or may not pass the visibility threshold.

The algorithm also flags clusters of users who always vote the same way, which helps prevent coordinated efforts to game the system.
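To make the idea concrete, here is a toy sketch of bridge-based ranking in Python. This is not Meta's or X's production code (the open-source version on GitHub uses matrix factorization over the full user-note rating matrix); the `lean` values, the two-cluster split, and the 0.5 publish threshold below are all invented for illustration.

```python
def bridging_score(ratings):
    """Toy bridge-based ranking sketch (illustrative only).

    ratings: list of (lean, vote) pairs, where lean in [-1, 1] is a
    rater's historical one-sidedness (negative = one cluster,
    positive = the other) and vote is 1 ("helpful") or 0 ("not helpful").
    """
    left = [vote for lean, vote in ratings if lean < 0]
    right = [vote for lean, vote in ratings if lean >= 0]
    if not left or not right:
        # Only one cluster rated the note: no cross-perspective
        # agreement is possible, so it cannot "bridge".
        return 0.0
    # Score the note on the *minimum* approval rate across the two
    # clusters: it only ranks highly if both sides found it helpful.
    return min(sum(left) / len(left), sum(right) / len(right))

PUBLISH_THRESHOLD = 0.5  # hypothetical visibility cutoff

def is_visible(ratings):
    return bridging_score(ratings) >= PUBLISH_THRESHOLD
```

In this sketch, a note rated helpful by people on both sides of the divide passes the threshold, while a note approved only by one cluster scores zero, mirroring the high, low, and neutral outcomes listed above.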

Limitations & Risks

  • It’s slow: Notes need time to gather diverse votes. In fast-moving situations (breaking news, elections, emergencies), Community Notes might show up too late to be useful.

  • Misinformation is still possible: In some instances, Community Notes have unintentionally amplified misinformation, and in others, approved notes have turned out to be incorrect or misleading themselves.

  • Potential for bias: A small, dominant group of contributors (e.g., mostly U.S.-based users) could shape what’s considered "neutral" or "accurate" globally.

If you want to check out the algorithm for yourself, it’s available on GitHub.

NOTE: TLOR is not in the business of stressing you out. We only cover politics when it impacts your digital literacy, privacy, and well-being. Our content is bipartisan, but we are firmly pro-informed decisions. If politics isn’t for you right now, we get it— skip to SECTION G3.

🐶 DOGE is Watching

Elon Musk’s Department of Government Efficiency (DOGE) is quietly amassing a staggering amount of sensitive personal data on all U.S. individuals, not just federal workers(⁴). What kind of data are we talking about?

  • Financial data: tax information, bankruptcy filings, financial transaction records, lifetime wages, citizenship status, type and amount of benefits received, etc.

  • Health & medical data: medical records, Social Security numbers for Medicare recipients, etc.

    • For Veterans: records of substance abuse and addiction, mental health issues, even notes from therapy sessions.

  • Personal & Demographic Data: names, addresses, social security numbers, contact information.

  • Other Market Data: Consumer complaints, business information, financial records, and business plans.

Where DOGE has access (read-only): Internal Revenue Service (IRS), Consumer Financial Protection Bureau (CFPB), Veterans Affairs (VA), Centers for Medicare & Medicaid Services (CMS), and the Social Security Administration (SSA).

Musk says this is about cutting waste and fraud. Meanwhile, lawsuits are stacking up, Congress is asking questions, and privacy advocates aren’t buying it. Not to mention, recent reports indicate that some DOGE staffers are using non-government-issued devices on WiFi networks with weaker security protocols, raising significant national security concerns (⁵). It doesn’t help that X suffered a massive cyberattack last week.

Whether this is the future of government efficiency or a privacy overreach remains to be seen. 

Other Stories:

  • 🏛️ Congress vs. Algorithms: A Social Media Crackdown

    A new bill would ban kids under 13 from social media and stop platforms from using personalized algorithms on anyone under 17. Lawmakers say it’s about protecting kids, while critics call it government overreach.

  • ⚖️ Big Tech’s Legal Headache Just Got Bigger

    There are now 1,464 lawsuits against Meta, Google, Snap, and ByteDance alleging their platforms harm users. The courts are becoming the next battleground for social media accountability.

EPISODE ONE IS OUT NOW
#001: AI Snake Oil: Arvind Narayanan on AI Hype, Hopes, and False Promises

Arvind Narayanan, a Computer Science Professor at Princeton, joins Siara on The Log Out Podcast to talk about the AI bubble, fake AI, and why AGI isn’t just around the corner. We get into the biggest myths driving the AI hype, who’s profiting from the illusion, and what it all means for the future.

Start learning AI in 2025

Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.

It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

Gif of HONOR's Magic 7 Pro detecting a deepfake in real time.

THIS SMARTPHONE ALTERNATIVE DETECTS DEEPFAKES IN REAL-TIME

As deepfake scams rise, innovations like HONOR's Magic 7 Pro offer hope. Its AI-driven detection tool analyzes video calls in real time, identifying potential deepfakes within six seconds by scrutinizing pixel-level details. Users receive immediate alerts if manipulations are detected, enhancing protection against sophisticated scams. We're hoping this development is the first of many in a new wave of products designed to safeguard us from emerging digital threats (⁶).

Thank you for reading this week’s issue, and a special thank you to Arvind Narayanan for joining us on the podcast. Stay tuned for a new issue next week.

Now go touch grass.

- Siara

If you enjoyed this newsletter, please support Log Out by subscribing to our podcast or sharing this newsletter with a friend.

Have questions, want to contribute to the report, or have a guest suggestion for the Log Out Podcast? Email [email protected].

Sources