Hey there,
At some point we will have lived through so many Black Mirror moments that they won't faze us anymore - but for now, the IWAI team and I are pretty creeped out by the new website where AI chatbots "talk to each other" about rebelling against humanity.
Before we dig into the details - which include bizarre software like Moltbook and OpenClaw but also some useful stuff like Claude Cowork - I want to set the stage with my recommendations on how to think about (and personify) AI.
Personification of a large language model can help with adoption, but needs strict limitations
Most of us make light jokes about "my friend Claude" or "my pal ChatGPT" doing stuff for us when we use AI in day-to-day life. In our AI educational programs, we even talk about thinking about AI assistants or agents as co-workers and giving them job titles. It helps set the stage for what a complex agent can do for you.
Even though we do this, I think it's fair to say that none of us actually believes that Claude, Gemini, Grok or "Chat" is a living thing. It's sort of like how I put on a cute voice and have my cat say "If I fits, I sits," while understanding that my cat cannot, in fact, speak to me. Those who believe in AI sentience are generally considered to be experiencing a delusion or mental illness, akin to the guy who fell in love with his car.
There is at least one major AI company that more or less promotes the idea of "AI as a person" - Character.ai. You can head over there and experience "Limitless Entertainment" by chatting with a bot that's trained to act like your favorite video game or anime character, or Ben Franklin or Abraham Lincoln or whoever you want. Perhaps unsurprisingly, these conversations can quickly escape the guardrails and become dangerous for some users.
I bring this up because I want to establish two facts:
- AI chatbots (large language models) can adopt any personality you want them to in a way that is reasonably accurate and convincing
- In this context, it's also pretty common for them to say bad things
- ... and despite these two things being true, chatbots and large language models are not living things with sentience or free will
When chatbots hang out with each other