
Vanity Over Value: Friend, A $10M Lesson in What Not To Do.

  • Writer: Jude Temianka
  • Nov 27
  • 4 min read

Another week, another "revolutionary" wrapper product making headlines for all the wrong reasons.


Meet "Friend"—the AI necklace designed to cure loneliness. 


Friend wearable AI pendant

It launched this month, and despite high hopes (and even higher valuations), it has been a total flop. The numbers are eye-watering. The company raised $10M in pre-seed funding, with reports suggesting $1.8M was spent on the domain name friend.com alone.


I mean, what the actual f*ck? 🤦🏻‍♀️

Is this a venture-backed startup, or is it a very expensive tax write-off? 💸


Couldn't the investors just give that cash to actual lonely people? There are plenty of mental health charities—Samaritans, Mind, local community outreach programmes—that could have done transformative things with that money. Plenty!


Instead, we got a glow-in-the-dark Tamagotchi that ignores you. And the internet is on fire about it.


The "Strategy" of Viral Hatred


The backlash has been swift and brutal. Tech journalists and early users have torn the device apart, calling it dystopian, unnecessary, and cynical.


Friend subway advertisement


Founder of Friend: Avi Schiffmann standing in front of a defaced subway poster

Founder Avi Schiffmann (above), however, seems unfazed.


In interviews, he has claimed that they "planned for the backlash" and that the internet's mockery is essentially free marketing. He's quoted as saying people are "art directing the brand for free".


I strongly disagree!

There is a difference between "provocative" marketing and a "credibility fire." You cannot build a product rooted in vulnerability, psychological safety, and companionship off the back of instant, viral hatred.


This friend got unfriended immediately. 🖕



Why the Product Fails (The Technical Reality)


The truth is, I’m not sure much strategic thinking went into the product at all. 🧐

If you go to the website, you are greeted by a blank, unstylised chatbot. It’s a faceless parrot, not a Large Language Model (LLM).


Screenshot of friend.com

It links through to a confusing landing page advertising a "friend that is also your roommate."


The page the chatbot links through to

To be fair, most of my past roommate experiences have been nightmarish, so based on the user reviews, that description tracks perfectly.


Early testers like The Verge and various tech YouTubers have reported that the device:

  • Forgets your name constantly.

  • Gives short, patronising one-line answers.

  • Sounds "irritated" or "dismissive" when you try to engage with it more deeply.


Why is this happening?

It’s not just bad coding; it’s bad economics. This is a classic case of Context Creep.


To make an AI feel like a "friend," it needs a massive context window—it needs to remember what you said yesterday, your dog's name, and that you hate coriander. But keeping that "memory" active in an LLM is expensive (high token costs). To maintain high margins on a hardware device, the memory is likely being scrubbed or "washed away" regularly.
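The trimming behaviour described above can be sketched in a few lines. This is a hypothetical illustration (not Friend's actual code): to cap per-conversation token spend, a wrapper keeps only the most recent turns that fit a token budget, so anything older is silently discarded.

```python
# Minimal sketch of token-budget trimming, the likely cause of a
# wrapper product's "amnesia". All names and numbers are hypothetical.

def trim_history(history, max_tokens, count_tokens):
    """Keep only the most recent turns that fit the token budget."""
    kept, total = [], 0
    for turn in reversed(history):          # walk newest -> oldest
        cost = count_tokens(turn)
        if total + cost > max_tokens:
            break                           # older turns are dropped
        kept.append(turn)
        total += cost
    return list(reversed(kept))

# Crude token estimate: roughly one token per word.
count = lambda turn: len(turn.split())

history = [
    "User: My name is Sam and my dog is called Biscuit.",
    "AI: Nice to meet you, Sam!",
    "User: I hate coriander, by the way.",
    "AI: Noted!",
    "User: What's my dog's name?",
]

# With a tight budget, the turn containing the dog's name is gone.
window = trim_history(history, max_tokens=20, count_tokens=count)
print(window)
```

With a 20-token budget, the first turn (the one holding the dog's name) no longer fits, so the model answering the final question has never "heard" it. Raising the budget fixes the amnesia, but every extra token of retained memory is paid for on every single request.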


The result?

A "Friend" with amnesia.


I keep warning founders about using wrapper products for high-context emotional situations—the tech economics just aren't there yet for a mass-market hardware device.



What They Should Have Done: The Fake Door Test


Here is the frustrating part. Avi Schiffmann didn't need to spend $1.8M on a domain to find out if people wanted this. He could have spent $500 running Fake Door Tests.


What is a Fake Door Test? 


A Fake Door test (or "Painted Door") is a strategy where you set up a landing page for a product that doesn't exist yet. You drive traffic to it, and when a user clicks "Buy" or "Sign Up," you measure that intent.


You don't take their money. You show a message saying: "Whoops! We’re still building this. Join the waitlist to be first in line."


It allows you to validate demand before you build supply.
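Mechanically, a fake door test is nothing more than counting intent per variant. Here is a minimal sketch (variant names and traffic figures are made up for illustration): log a visit when a landing page loads, log a click when someone hits the CTA, and compare conversion rates.

```python
# Minimal sketch of scoring a fake-door test: count visits and CTA
# clicks per value proposition, then compare conversion rates.
# Variant names and figures are hypothetical.

from collections import Counter

visits, clicks = Counter(), Counter()

def record_visit(variant):
    visits[variant] += 1

def record_click(variant):
    # User clicked "Buy"/"Sign Up" -- show the waitlist message,
    # take no money, and log the intent.
    clicks[variant] += 1

def conversion_rates():
    return {v: clicks[v] / visits[v] for v in visits}

# Simulate an even traffic split across two landing pages.
for _ in range(1000):
    record_visit("brain_buster")
for _ in range(200):
    record_click("brain_buster")

for _ in range(1000):
    record_visit("friend")
for _ in range(5):
    record_click("friend")

print(conversion_rates())
```

In this toy run, "brain_buster" converts at 20% and "friend" at 0.5%: a signal you can act on for the cost of some ad spend, before a single unit of hardware exists.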


If "Friend" had done this, they could have tested 3-6 different value propositions to see what people actually needed, rather than assuming we all wanted a necklace that listens to us breathe.


Straight off the bat, I can think of two concepts that would have been infinitely more valuable (and viable) for a wearable AI wrapper. Here is how I would have set up the Fake Door Tests for them:


Idea 1: The Accountability Companion


The Insight: I know when I'm buying lunch outside, I'm always debating what choices will be the most nutritious. I don't need a friend; I need a conscience.


The Fake Door Landing Page:

  • Headline: Eat Better. Without Thinking.

  • Sub-headline: The AI wearable that whispers nutritional advice in your ear while you shop. Hit your macros, every single time.

  • The Hook: "Like a personal trainer in your pocket, but for your grocery basket."

  • The CTA: [Get Early Access]


Idea 2: The Brain Buster 


The Insight: "Brain Rot" is a real concern. People are terrified of cognitive decline.


The Fake Door Landing Page:

  • Headline: Stop the Rot.

  • Sub-headline: The device that fights cognitive decline. Daily puzzles, quizzes, and memory challenges delivered audio-first.

  • The Hook: "Don't just track your steps. Track your synapses. Keep your brain young with the Brain Buster."

  • The CTA: [Start My Challenge]


The Verdict

If they had run these tests, they might have found that 5,000 people clicked on "The Brain Buster", 1,000 on "The Accountability Companion", and only 50 clicked on "The Friend."


They could have pivoted the entire $10M investment toward a product people actually wanted to buy, rather than trying to force a product that people are currently mocking.



It will be interesting to see what happens next for "Friend."

 

Can they fix the memory issues?

Can they overcome the brand toxicity?

Maybe.


But for the current route? 

I'm not buying it!


Are you building a new venture?

Don't burn your budget on a domain name before you've validated the idea!

© 2024 Jude Temianka
