I recently took the plunge and subscribed to Claude's professional plan, which gave me access to their Projects feature. I love Projects so much that I decided to write this paean in praise of it. Projects embodies much of what I've been saying about the need for smarter application design around LLMs, not just smarter models.
What are Projects? They have been out for a little over half a year now, so many Claude users have probably had time to get used to them. In short, a project is a way to collect and organize information inside the Claude application for a single purpose. Users can upload “project knowledge” that remains persistently visible to the model across chat sessions within the project, and all chat sessions within the project are grouped together. Users can also provide “project instructions” that tell the model how to operate across every session in the project's scope.
That's pretty much it. But this relatively simple bit of frontend cleverness pays off in spades. Claude stays informed and efficient, saving me from wasting tokens keeping it up to date on the project's progress; instead I just update the project knowledge to refresh Claude's global awareness across the project's sessions. And while it organizes the LLM's context about the project, it also organizes my own thinking: the project becomes a unified knowledge repository, and I can query the LLM about any of the documents uploaded to its knowledge.
Projects is an example of what one might dub context engineering, a new subfield of software development concerned with pre-processing the data that informs an LLM's context window before the model ingests it. Context engineering is largely a frontend discipline, since the focus is on bringing the user's intent and the model's state closer together, though it could conceivably have backend components as well.
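To make the idea concrete, here is a minimal sketch of what a Projects-like context assembly step might look like. The names and structure here are my own guesses for illustration, not Anthropic's actual implementation; the point is simply that the interesting work happens in the application layer, before any tokens reach the model.

```python
from dataclasses import dataclass, field


@dataclass
class Project:
    """Hypothetical container mirroring what a Projects-like feature stores."""
    instructions: str                                         # persistent "project instructions"
    knowledge: dict[str, str] = field(default_factory=dict)   # filename -> file contents

    def build_context(self, chat_history: list[str], user_message: str) -> str:
        """Assemble everything the model should see before it answers.

        This is the context engineering step: it runs in the application,
        prior to the model ingesting anything.
        """
        knowledge_block = "\n\n".join(
            f"<document name='{name}'>\n{text}\n</document>"
            for name, text in self.knowledge.items()
        )
        return "\n\n".join([
            f"Project instructions:\n{self.instructions}",
            f"Project knowledge:\n{knowledge_block}",
            "Conversation so far:\n" + "\n".join(chat_history),
            f"User: {user_message}",
        ])


# The same knowledge rides along with every session in the project.
project = Project(
    instructions="You are helping me plan a hiking trip; keep the itinerary consistent.",
    knowledge={"itinerary.md": "Day 1: ...", "gear-list.md": "Tent, stove, ..."},
)
prompt = project.build_context(chat_history=[], user_message="What should I pack for day 1?")
```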
Too much effort, in my opinion, goes into brute-force strengthening of the model, while ignoring gains in UX and context management that can be had far more cheaply by improving what gets built around the model.
While we're on the subject, another great piece of frontend creativity is ChatGPT's Canvas feature. Canvases are almost like Projects for individual chat sessions, though they are more about enhancing the actions one can take than about organizing information. They provide various controls for working more closely with ChatGPT on a particular task. You almost wish the two would join forces…
The relative simplicity of Claude's Projects feature speaks to the potential of the direction it's heading in. More frontend ingenuity could go a long way toward unlocking opportunities for LLMs. Projects doesn't even accomplish that much; it just beautifully shapes a few preliminaries before the model ingests its context. In doing so it also takes cognitive load off the user, letting both the user and the model focus on what matters most to the project's goals.
A few suggestions for making Projects even better, should Anthropic ever come across this post:
Figure out some way to dynamically push chat session results back into the project knowledge. This would allow fruitful new results to be reflected in the global context.
More explicit RAG-like retrieval mechanisms that permit files stored in knowledge to be explicitly loaded in, updated, and returned to storage. Leverage the artifacts system more effectively to create an internal economy of artifacts within the project scope. (A rough sketch of what I have in mind follows this list.)
Fusion of Projects with a ChatGPT Canvas-like feature. Canvases could form a lower-level (chat-session-level) complement to Projects, and would synergize well with my other suggestions for a RAG-like dynamic ecosystem.
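To gesture at what the RAG-like suggestion above could look like, here is a rough sketch of a project-knowledge store that supports both retrieval and write-back. Everything here is hypothetical: `embed` is a stand-in for whatever embedding model the application would use, and none of this reflects how Claude actually stores project knowledge.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(256)


class KnowledgeStore:
    """Hypothetical project-knowledge store with retrieval and write-back."""

    def __init__(self) -> None:
        self.files: dict[str, str] = {}
        self.vectors: dict[str, np.ndarray] = {}

    def upsert(self, name: str, text: str) -> None:
        """Add or update a knowledge file, e.g. pushing a chat session's results back."""
        self.files[name] = text
        self.vectors[name] = embed(text)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Load only the k most relevant files into the model's context."""
        q = embed(query)

        def similarity(name: str) -> float:
            v = self.vectors[name]
            return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

        ranked = sorted(self.files, key=similarity, reverse=True)
        return [self.files[name] for name in ranked[:k]]
```

The appeal of something like this is that project knowledge stops being a static blob that is always fully in context: only the relevant pieces get loaded for a given session, and a session's results can be written back so later sessions benefit.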
Anyway, I just wanted to draw attention to Projects and the burgeoning field of context engineering it hints at, and to make the case for more frontend inventiveness in LLM application design as an area of potentially easy gains.