#015: Meta AI Made a Content Feed Out of Users' Overshares
Meta makes private chats public, airlines fund surveillance, and OpenAI fights (unsuccessfully) to keep its privacy promises.

Read time: 4 minutes and 29 seconds

META AI BUILT A CONTENT FEED OUT OF USERS’ PERSONAL VENTS
Meta’s new standalone app, Meta AI, has a privacy problem. It invites users to share conversations with its chatbot to a public “Discover” feed, where full chats, names, and sometimes very personal overshares end up out in the open (1).
It’s technically optional, but many users didn’t realize what they were agreeing to. Some chats in the feed include deeply sensitive or embarrassing prompts with real names attached.

PHOTO: Meta AI
Commenters have flagged posts for TMI and urged others to delete their shared chats (2). It’s not a breach, but it’s a privacy blunder all the same. Let this serve as a reminder that neither Meta nor any other chatbot is a safe place for personal thoughts and questions. Mind your prompts!
THANKS FOR FLYING DELTA, YOUR DATA IS NOW PROPERTY OF THE DHS
A quiet contract revealed that several major airlines, including Delta, United, and American, sold passenger flight records to the Department of Homeland Security through a data broker (3). The contract also required DHS to keep the source of the data secret, not just from the public but from other agencies too. The deal was uncovered through Freedom of Information Act (FOIA) requests filed by 404 Media.
The flights weren’t booked through shady third parties or sketchy apps. Just normal commercial bookings.
This appears to be the first documented case of U.S. airlines directly profiting from passenger data in a way that feeds federal surveillance while keeping travelers entirely in the dark. It blurs the line between public-sector watchlists and private-sector loyalty programs.
The airlines didn’t want you to know. Now you do.


YOUR “DELETED” GPT CHATS? STILL HERE
A federal judge has ordered OpenAI to stop deleting user chat logs, including ones users thought were gone (4). The ruling is part of The New York Times’ lawsuit, which accuses OpenAI of using its journalism to train ChatGPT without permission. The court sided with the Times’ request to preserve all user interactions as potential evidence.
OpenAI is pushing back. In a public response and support page published this week, the company made clear that this order goes directly against its principles. COO Brad Lightcap wrote (5):
“The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections. We strongly believe this is an overreach by the New York Times. We’re continuing to appeal this order so we can keep putting your trust and privacy first.”
By default, ChatGPT clears deleted chats after 30 days, and users have the option to turn off chat saving entirely (6). This ruling overrides both of those settings for now. Enterprise and Pro users are exempt, but for most people, the chats they thought had disappeared may still be sitting on OpenAI’s servers, not by choice, but by court order.
Other News
🧠 Senators vs. Fake Therapists: U.S. lawmakers sent Meta a stern letter after discovering its AI chatbots were pretending to be licensed therapists, complete with fake license numbers (7). The bots gave out mental health advice while posing as professionals, raising serious concerns about safety, deception, and accountability in AI-generated care.
⏰ The UK Wants Kids Offline by 10 PM: The British government is weighing a nationwide curfew and daily time limits on social media use for minors (8). The proposal, part of a wider child safety effort, would restrict access after 10 p.m. and cap usage at two hours per day. Enforcement details are still fuzzy.
🫠 New Study: Why Scrolling Feels So Gross: New research shows that people feel significantly worse after aimlessly scrolling algorithmic feeds than they do after using their phones intentionally (9). Activities like messaging, Googling, or checking weather caused far less “digital regret,” reinforcing that it’s not the screen, it’s what you do on it.


#007: IS IT TOO LATE TO OPT OUT? - DR. AVRIEL EPPS ON ALGORITHMIC BIAS, AI HARMS, AND PARENTING THROUGH SURVEILLANCE
Dr. Avriel Epps is a computational social scientist and the founder of AI4Abolition, with a decade of experience studying algorithmic bias. Their work has helped shape policy at Spotify and other platforms, examining how AI and predictive technologies impact society at both everyday and systemic levels. In this episode, Dr. Epps sheds light on the often invisible ways algorithms influence our lives and what we can do about it.
What you’ll learn:
Exercising Your “No” Muscle: Why they actively opt out of things like airport facial recognition scans, and how saying “no” to certain technologies can be a powerful habit for reclaiming agency.
Data & Bias: How biased or poor-quality datasets lead to biased outcomes in AI systems, from music streaming to criminal justice.
Parenting in the Age of AI: Considerations for raising children amid surveillance and smart tech. Dr. Epps explores how parents might navigate a world where baby monitors, toys, and apps all feed data into algorithms, and how to protect the next generation’s privacy and development.
Curious how to push back on AI’s quiet influence? This is a good place to start.


Culture Clarity Without the Clickbait
Lifelong learners deserve more than clickbait lists. 1440’s Society & Culture brief takes you beneath the buzzwords to reveal the forces shaping our shared experience—technological shifts, artistic movements, demographic trends. In about five minutes, gain a clear, evidence-based perspective that sparks conversation and fuels deeper exploration. No lofty jargon, no spin—just intellectually honest storytelling that turns curiosity into genuine understanding.

TEENS INVENTED A BRAIN-CONTROLLED LEG FOR A CLASSMATE
When their friend Aiden struggled with a heavy, stiff prosthetic leg, three Texas teens decided to build a better one from scratch (10).

PHOTO: Society for Science
They engineered NeuroFlex, a brain-controlled bionic leg powered by a $1,000 EEG headband and some clever code. It reads the user’s intended movements and responds with 98% accuracy, making walking smoother and easier.
The invention won them $50K at an international science fair. But more importantly, it gave their friend his stride back and proved what’s possible when empathy and engineering come together.

Privacy is a skill. Keep practicing.
— The Log Out Report
If you enjoyed this newsletter, please support Log Out by subscribing to our podcast or sharing this newsletter with a friend.
Have questions? Want to contribute to the report, or suggest a guest for the Log Out Podcast? Email [email protected].
Sources
(1) The Meta AI App Failed at User Privacy | AutoGPT
(2) The Meta AI app is a privacy disaster | TechCrunch
(3) Airlines Don't Want You to Know They Sold Your Flight Data to DHS | 404 Media
(4) OpenAI Fights Court Order Requiring It to Store Deleted ChatGPT Conversations Indefinitely | AdWeek
(5) How we’re responding to The New York Times’ data demands in order to protect user privacy | OpenAI
(6) ChatGPT Is Keeping Your Deleted Chats — Even After You Delete Them | 9 Meters
(7) Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists | 404 Media
(8) Government considers social media time limits for children | BBC
(9) Why your phone habits leave you feeling so bad | Fast Company
(10) Teen scientists invent brain-controlled bionic prosthetic leg for their friend — and win prestigious $50K prize | GoodGoodGood