Anthropic has named its latest feature 'dreaming,' a term that immediately conjures images of human consciousness. The branding may be clever, but it's time to draw a line: no more AI features named after human cognitive processes.
The naming playbook across the AI industry is consistent. OpenAI's 'reasoning' models and chatbot 'memories' are just two examples of how companies erase the distinction between humans and machines. This approach isn't just marketing; it shapes our perception of what these tools can actually do.
At Anthropic, the anthropomorphism runs deeper still. The company describes Claude in terms of human qualities like 'wisdom' and even employs a resident philosopher to probe the model's values. But should we be projecting virtues onto artificial agents at all?
This practice isn't just about branding. It risks distorting our moral judgments about AI, leading us to trust these tools more than we should. As one researcher notes, anthropomorphism can blur the line between what is real and what is imagined in AI.
It's a wake-up call for tech leaders: perhaps they should revisit some classic science fiction before naming features after human processes. In Philip K. Dick's Do Androids Dream of Electric Sheep?, even the protagonist wanted desperately to believe his toad was real rather than a machine. Let's hope we can do better than that.