#010: The Waymo Experience: Sit Back, Relax, Get Recorded

If AI can record you, undress you, and monetize you... what protections do you have? Here's where we're at.

Read time: 3 minutes and 47 seconds

This photo was (ironically) generated using AI.

WAYMO’S DRIVERLESS TAXIS ARE LISTENING, AND THEY WON’T EVEN ASK YOU HOW YOUR DAY WAS

If you catch a ride in a Waymo robotaxi, be aware: the company is prepping a feature to use interior camera footage, which is inherently tied to your identity, to train its generative AI (1).

An unreleased privacy policy spotted by a researcher shows Waymo may even share rider data to personalize ads during the ride (2).

That means unintentional hot mic moments could be recorded and stored for up to a year unless you manually delete them. Waymo says riders will have an opportunity to opt out of having their data sold or used for AI training, but by default, your conversations could end up in Waymo’s training vaults.

AN INCREASINGLY POPULAR GROK AI USE-CASE: UNDRESSING VICTIMS

Over on X, Elon Musk’s chatbot Grok is being used for the on-demand digital “undressing” of women (3). Yes, users can reply to someone’s image with a prompt to “remove her clothes,” and Grok will whip up a fake image of the woman in lingerie or less, then post it in the thread.

Researchers flagged this disturbing behavior, which essentially creates AI-generated NSFW content from unsuspecting users’ pics. It’s non-consensual, invasive, and happening in public, raising serious questions about platform policies (or lack thereof) around AI and personal images.

THE “TAKE IT DOWN ACT”: TOO LITTLE, TOO LATE?

PHOTO: Andrew Harnik/Getty Images (4)

If the Grok news made you think, “Shouldn’t that be illegal?”, you’re not wrong, and as of April, it finally is.

In April 2025, Congress passed the Take It Down Act, targeting non-consensual intimate imagery (NCII), including AI-generated deepfakes (5).

Platforms now have 48 hours to remove flagged content or face penalties from the FTC, and individuals who post NCII face fines and up to two years in prison, with tougher penalties for cases involving minors. But the law is only as strong as its enforcement.

While some platforms claim to ban AI-generated NCII, critics say the rules lack real consequences, leaving victims to track down and report abusive content themselves.

Other News

  • ⚖️ OpenAI’s For-Profit U-Turn
    After pushback from civic leaders, OpenAI is keeping its nonprofit arm in control, walking back plans to spin off its for-profit business. The decision comes over a year after Elon Musk’s lawsuit claiming OpenAI had strayed from its mission.

  • 🎥 Google’s PR Push for Gen Alpha’s Love
    Google is backing a project called “100 Zeroes,” producing TV shows that aim to make Big Tech seem less villainous. Instead of releasing them on YouTube, Google plans to partner with established studios to reach broader, younger audiences.

  • 🎵 SoundCloud’s AI Grab: User Content = Training Data
    SoundCloud’s new policy lets AI models train on user-uploaded music, a move that’s raising red flags for artists. With no clear opt-out, some say it’s another example of platforms monetizing creators’ work without their consent.

  • 📵 Virginia’s 1-Hour Social Media Limit
    Virginia’s new law restricts kids under 16 to just one hour of social media per day, with age verification required. Critics argue it’s easy for teens to bypass and tough for platforms to enforce without overstepping privacy boundaries.

NEXT WEEK: SIARA SITS DOWN WITH THE INTERNET’S FRIENDSHIP COACH

Danielle Bayard Jackson, aka “The Friendship Expert,” knows the ins and outs of modern friendships. She’s spent years helping people navigate the messy middle between “hey, how are you?” and “are we even friends anymore?” In this episode, she’s unpacking how technology is complicating those connections.

Danielle on AI companion chatbots.

In the interview, Danielle and Siara get into:

  • The rise of AI companions and how they cannot, by definition, be empathetic

  • The unspoken rules of digital communication (and why they’re so confusing)

  • How “read receipts” and shared locations are messing with our trust

  • Why sharing friendship drama on TikTok isn’t wise

    Dropping at 6:00 AM ET on Thursday, May 14th.

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.

PHOTOGRAPH: BONO

BONO WANTS TO MAKE GIVING AS EASY AS SCROLLING

A new startup called Bono wants to make donating to charity as easy as doomscrolling. It just launched with $1.6 million in pre-seed funding to build a platform (named after “pro bono,” not the U2 singer) that lets you give to multiple vetted charities in one go (6).

Users can set up a donation in a few taps, and Bono automatically distributes the funds across your chosen causes. To keep generosity going, it provides a weekly impact report showing what your money accomplished. Bono is also teaming up with influencers, letting creators donate their sponsored content fees to the causes they promote.

It’s a modern mash-up of crowdfunding, transparency, and a bit of social media savvy, all aimed at making doing good feel good.

Algorithms process; humans perceive. Make space for your own thoughts this weekend. We’ll see you next time.

Now go touch grass.

- The Log Out Report

If you enjoyed this newsletter, please support Log Out by subscribing to our podcast or sharing this newsletter with a friend.

Have questions? Want to contribute to the report, or suggest a guest for the Log Out Podcast? Email [email protected].

Sources