At its recent Code with Claude developers’ conference in San Francisco, Anthropic unveiled a new feature for its Managed Agents called ‘dreaming’. This function allows agents to review and curate events from recent interactions, storing important information that can inform future tasks. Crucially, the feature is currently limited to the Claude Platform’s Managed Agents.
Dreaming is an organised process in which sessions and memories are reviewed so that key information isn’t lost over the course of long-term projects. The concept echoes human memory consolidation, and it parallels the compaction step many language models apply to condense lengthy conversations.
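To make the idea concrete, here is a minimal sketch of what such a curation pass might look like. This is purely illustrative: the `Event`, `salience`, and `dream` names are hypothetical and are not part of any Anthropic API; the announcement does not detail how dreaming is implemented.

```python
# Hypothetical sketch of a "dreaming"-style memory curation pass:
# review recent session events and keep only the most salient ones
# as durable memories. All names here are illustrative assumptions,
# not a real Anthropic API.

from dataclasses import dataclass


@dataclass
class Event:
    text: str
    salience: float  # e.g. an importance score assigned by a model


def dream(events: list[Event], keep: int = 3) -> list[str]:
    """Rank session events by salience and retain the top few."""
    ranked = sorted(events, key=lambda e: e.salience, reverse=True)
    return [e.text for e in ranked[:keep]]


session = [
    Event("User prefers tabs over spaces", 0.9),
    Event("Small talk about the weather", 0.1),
    Event("Project deadline is March 15", 0.8),
    Event("User asked to rerun the tests", 0.3),
]

memories = dream(session, keep=2)
# memories now holds the two most salient items, ready to carry
# into future sessions.
```

The key design idea, as described, is that curation happens between interactions rather than during them, so long-running projects keep their essentials without dragging the full transcript along.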
For developers using Managed Agents on the Claude Platform, this feature promises more efficient project handling and better context retention. It also raises a familiar question about AI’s evolving nature: is the technology getting closer to mimicking human cognition, or are we just making it seem that way?
This announcement comes at a time when discussions around ethical AI practices are gaining momentum. As AI systems become more adept at managing and understanding complex information, the challenges of ensuring they operate ethically also increase.