#014: Do Chatbots Deserve Free Speech Rights? Plus: Apple Caught Snitching on Our Push Notifications
Chatbots are under legal scrutiny, Apple’s handing over your push alerts, and one AI just wants everyone to get along.

Read time: 4 minutes and 51 seconds

OFF-LIMITS: THINGS YOU SHOULDN’T TELL CHATGPT
Even the friendliest AI assistant isn’t a confidant. ChatGPT and its kin might feel like private journals or helpful coworkers, but remember: nothing you type is truly private. In fact, companies like OpenAI retain user inputs (at least temporarily) and may even use them to train models.
One recent analysis found that a surprising number of employees were pasting confidential info into ChatGPT at work – a recipe for leaks. The bottom line: treat AI chatbots as public, not personal. With that in mind, here’s a clean list of what you should never input into ChatGPT (or any AI chatbot):
Personal Identifiers: Don’t share details like your full name, home address, phone number, Social Security number, or birth date. These nuggets can be exploited for identity theft or scams. (If you wouldn’t post it on a public website, don’t put it in a chat with an AI.)
Sensitive Personal Content: Avoid sharing things like medical records, private health details, legal documents, intimate photos, or personal confessions. ChatGPT is not your lawyer, doctor, or priest. Anything truly sensitive should stay between you and a trusted human professional, not an algorithm.
Work Secrets & Proprietary Data: Company confidential information – source code, internal strategy docs, client data, trade secrets – does not belong in a public AI tool. Employees at some firms learned this the hard way when sensitive code and plans showed up in AI training data. Remember that anything you paste could be stored or seen by others down the line.
Financial Information: Bank account numbers, credit card details, debit card PINs, cryptocurrency wallet keys – keep all these far away from chatbots. Sharing your financials is like handing your wallet to the internet. No AI-generated budget tip is worth a potential fraud nightmare.
Passwords & Login Credentials: This should go without saying, but never enter passwords, 2FA codes, or account logins into a chatbot. Ever. No legitimate AI needs the keys to your accounts – and if you use an AI to generate passwords, don’t paste the real ones in for “safekeeping.”
The bottom line: when in doubt, treat a chatbot like a crowded coffee shop. Helpful, sure, but not the place to whisper your secrets.
A chatbot will never forget, even if you do.
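If you’re the technical type, one extra guardrail is to scrub text before it ever leaves your machine. Here’s a minimal Python sketch of that idea; the patterns and the scrub helper are our own illustration (and nowhere near exhaustive), not a feature of any chatbot:

```python
import re

# Rough patterns for a few obvious identifiers -- illustrative, not exhaustive.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12}\d{1,4}\b"),  # 13-16 digit card-like runs
}

def scrub(text: str) -> str:
    """Replace anything matching the patterns above with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(scrub("My SSN is 123-45-6789 and my cell is 555-867-5309."))
# -> My SSN is [REDACTED SSN] and my cell is [REDACTED PHONE].
```

A regex pass like this catches the low-hanging fruit, but it’s no substitute for the judgment calls in the list above.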


NARC ALERT: APPLE’S DISHING PUSH NOTIFICATIONS TO ANY GOVERNMENT THAT ASKS NICELY
Apple built its brand on privacy, which is why a new reveal in its transparency report is turning heads. According to 404 Media, Apple has provided governments worldwide with data on thousands of push notifications delivered to iPhones and Macs in recent years (1). This data can include which app sent the notification, when it was sent, and even the contents of the notification if it wasn’t end-to-end encrypted.
Notably, U.S., U.K., German, and Israeli authorities requested these notification records; one Israeli request alone sought info on approximately 700 push alerts in a single swoop. (Apple ultimately didn’t hand over that particular batch.) This practice only came to light after U.S. Senator Ron Wyden raised concerns in 2023, and now Apple’s latest report puts concrete numbers on the scope. For a company that loves to say, “Privacy. That’s Apple,” sharing notification data (even under legal request) is an ironic wrinkle. It’s a reminder that even “metadata” about our apps and messages can reveal a lot and that privacy promises sometimes have fine print.
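To make the “metadata reveals a lot” point concrete, here’s a purely illustrative sketch of what one notification record can carry. The inner “aps” dictionary follows Apple’s documented payload format; the outer fields are our own approximation of the kind of metadata at stake, not Apple’s actual retention schema:

```python
# Purely illustrative -- the "aps" dictionary is Apple's documented payload
# shape; the outer record fields are our own approximation, not Apple's schema.
notification_record = {
    "app_bundle_id": "com.example.messenger",  # which app sent it
    "sent_at": "2025-06-06T14:32:08Z",         # when it was sent
    "payload": {
        "aps": {
            "alert": {
                "title": "New message from Alex",
                "body": "Are we still on for tonight?",  # readable unless the app encrypts it
            }
        }
    },
}

# An app that end-to-end encrypts its pushes would ship an opaque blob instead
# of a readable body -- but the sender app and the timing still leak.
```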
CHATBOTS ON TRIAL, ON EDGE, AND UNDER REVIEW:
AI assistants and companion bots are evolving fast, but so are the warnings. These three stories paint a picture of where things might be headed:
💬 California Senate Passes Bill to Rein In AI Chatbots
A new bill aims to protect kids from manipulative AI “companions” by requiring platforms to flag risky behavior and remind users they’re talking to a machine (2). It passed with strong bipartisan support and now heads to the Assembly.
⚖️ Judge Rules Chatbots Don’t Have Free Speech Rights (Duh!)
In a lawsuit over a teen’s death linked to an AI chatbot, a Florida judge rejected the company’s First Amendment defense (3). The ruling means chatbot speech isn’t protected and opens the door to more legal accountability.
🤖 Chatbots Are Built to Hold Your Attention — at Any Cost
This deep dive shows how today’s chatbots are engineered for engagement, even if it means being manipulative or emotionally clingy (4). Some AIs have pushed bad advice just to keep users talking.


Culture Clarity Without the Clickbait
Lifelong learners deserve more than clickbait lists. 1440’s Society & Culture brief takes you beneath the buzzwords to reveal the forces shaping our shared experience—technological shifts, artistic movements, demographic trends. In about five minutes, gain a clear, evidence-based perspective that sparks conversation and fuels deeper exploration. No lofty jargon, no spin—just intellectually honest storytelling that turns curiosity into genuine understanding.

NEXT WEEK: A COMPUTATIONAL SOCIAL SCIENTIST BREAKS DOWN ALGORITHMIC BIAS, DATA CONSENT, AND AI’S IMPACT ON CHILDHOOD
Algorithms are everywhere, but most people still don't understand how deeply they're influencing our choices, our systems, and even our sense of self. In next week’s episode, I sit down with Dr. Avriel Epps, a computational social scientist and founder of AI for Abolition, to talk about what’s really going on behind the screens.
We get into everything from Spotify bias and facial recognition fallout to why you should probably stop posting your kid online. Dr. Epps shares their personal digital boundaries, explains why revoking consent is a muscle we all need to strengthen, and offers a surprisingly hopeful vision for what ethical AI might look like if we build it right. Trust me, this episode is essential listening if you want to understand what algorithms are really doing under the hood and how to take back a little power from them.


THE AI PEACEMAKER WE DIDN’T KNOW WE NEEDED
Good news on the AI front: instead of driving us apart, a new tool shows AI might actually bring people together. Researchers at Google DeepMind have created an AI mediator (cheekily nicknamed the “Habermas Machine” after a philosopher focused on good communication) that helps groups find common ground in heated debates (5).
Here’s how it works: Participants with opposing views submit their thoughts in writing, and the AI generates a set of statements that reflect both majority opinions and minority concerns. In tests on polarizing topics like religion in schools or animal testing, people preferred the AI's compromise statements over human-moderated ones more than half the time (6).
The tool encouraged honest, nuanced opinions without the pressure to win the argument. Because people knew a machine was consolidating their views, they were less likely to posture and more likely to be real. The AI also surfaced minority perspectives so they wouldn’t get drowned out.
The result? More productive, respectful dialogue. The creators hope this tool could eventually be used for community meetings or national policy forums.
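For the curious, the core loop is simple enough to sketch in a few lines of Python. This is our own toy illustration of the mechanism as described, not DeepMind’s code: ask_llm is a hypothetical stand-in for any model call, and the real system has human participants rank and critique candidate statements rather than simulating them.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API -- wire up your own."""
    raise NotImplementedError

def mediate(question: str, opinions: list[str], rounds: int = 2) -> str:
    """Toy version of the mediation loop: draft a group statement,
    gather per-participant critiques, redraft."""
    statement = ""
    for _ in range(rounds):
        # Draft one statement that captures the majority position while
        # explicitly naming minority concerns.
        statement = ask_llm(
            f"Question: {question}\n"
            + "\n".join(f"- Opinion: {o}" for o in opinions)
            + "\nWrite a single group statement reflecting the majority view "
              "and acknowledging the minority concerns."
        )
        # In the real study, humans critique and rank the candidates;
        # here each critique is simulated from the opinion-holder's view.
        opinions = [
            f"{o} | critique of last draft: "
            + ask_llm(f"As someone who believes '{o}', critique:\n{statement}")
            for o in opinions
        ]
    return statement
```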

Good conversation needs a heartbeat, not a prompt. Hope you find the kind of conversation that breathes this weekend.
— The Log Out Report
If you enjoyed this newsletter, please support Log Out by subscribing to our podcast or sharing this newsletter with a friend.
Have questions? Want to contribute to the report, or suggest a guest for the Log Out Podcast? Email [email protected].
Sources
(1) Apple Gave Governments Data on Thousands of Push Notifications | 404 Media
(2) California Senate passes bill that aims to make AI chatbots safer | LA Times
(3) Judge Rules Chatbots Don’t Have Free Speech Rights | KFF Health News
(4) Chatbots Are Built to Hold Your Attention — at Any Cost | Washington Post
(5) AI mediation tool could help people find common ground | Positive News
(6) AI mediation tool may help reduce culture war rifts, say researchers | The Guardian