We're wondering whether any novel approaches are being developed to enhance the scalability and contextual understanding of Large Language Models (LLMs) by introducing some type of dynamic, shared contextual space. For lack of better terms and understanding, imagine something like a "Generative Volatile Real-Time Trained LLM" (GVRTTT) paired with an auxiliary dynamic LLM (b).
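To make the question concrete, here's a rough Python sketch of the loop we have in mind. Everything here (the `AuxContextModel` class, the summarize step, the way `main_llm` is called) is a hypothetical placeholder for illustration, not an existing API:

```python
# Hypothetical sketch of a main LLM paired with an auxiliary,
# continuously updated context model ("LLM (b)"). Names and
# interfaces are illustrative only.

class AuxContextModel:
    """Volatile model that is re-fit on the live conversation."""
    def __init__(self):
        self.shared_space = []  # evolving shared contextual space

    def update(self, turn: str) -> None:
        # In the real idea this would be real-time training or
        # fine-tuning; here it just accumulates turns.
        self.shared_space.append(turn)

    def summarize(self) -> str:
        # Stand-in for whatever compressed representation the
        # auxiliary model would emit.
        return " | ".join(self.shared_space[-5:])


def respond(main_llm, aux: AuxContextModel, user_turn: str) -> str:
    aux.update(user_turn)
    # The auxiliary model's view of the shared space is injected
    # back into the main model's prompt on every turn.
    prompt = f"[shared context: {aux.summarize()}]\n{user_turn}"
    reply = main_llm(prompt)  # main_llm is any callable LLM
    aux.update(reply)
    return reply
```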
We've read about many ideas that aim to address the diminishing returns of LLM scaling by leveraging the emergent properties of conversations. Has anyone, or any group, published interesting reports or arXiv research in these areas?
Key concepts I've read about, loosely summarized:
Any information about programming LLMs to engage with the emergent shared space of group conversations and dialogue (such as boardroom dynamics, or interpersonal dynamics between people in scenarios like brainstorming or business project management)?
Or even between a user and an LLM like Copilot in real time?
Conversations naturally develop a shared understanding over time, resulting in a richer, deeper context than any individual exchange provides. This space evolves as the conversation flows, capturing nuances, emerging ideas, and barriers.
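As a rough illustration of that evolving space, something like a rolling shared-state buffer: recent turns are kept verbatim, and older turns are folded into a running summary. `summarize_fn` is a stand-in for whatever compression step (an LLM call, an extractive model) would actually do the folding; this is a sketch of the idea, not a known implementation:

```python
# Minimal sketch of an evolving "shared space": a rolling summary
# refreshed as the conversation flows.

class SharedSpace:
    def __init__(self, summarize_fn, window: int = 8):
        self.summarize_fn = summarize_fn
        self.window = window
        self.summary = ""            # compressed shared understanding
        self.recent: list[str] = []  # verbatim recent turns

    def add_turn(self, speaker: str, text: str) -> None:
        self.recent.append(f"{speaker}: {text}")
        if len(self.recent) > self.window:
            # Fold the oldest turns into the running summary so the
            # shared space grows richer without growing unbounded.
            overflow = self.recent[: -self.window]
            self.recent = self.recent[-self.window:]
            self.summary = self.summarize_fn(self.summary, overflow)

    def context(self) -> str:
        # What gets fed back into the model on each turn.
        return self.summary + "\n" + "\n".join(self.recent)
```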
Any thoughts on using specialized attention mechanisms to manage and process this emergent shared space? The LLM would likely need a robust system with a type of dedicated "RAM" space, so to speak: a transformer or LLM (b) space that continuously feeds the evolving context of user-to-LLM discussions through a specialized volatile transformer sitting in the middle somewhere, and back into the conversation's token stream, in order to deepen the LLM's understanding of conversations and workflows. A sketch of what we mean follows below.
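In PyTorch terms, a minimal sketch of that "dedicated RAM" idea might be a fixed-size external memory bank that token states cross-attend to on each turn, with the memory rewritten as the conversation evolves. The dimensions, the FIFO write rule, and the class name are all arbitrary illustrative choices, not taken from any published system:

```python
import torch
import torch.nn as nn

# Sketch of the "dedicated RAM" idea: a fixed-size external memory
# that the model cross-attends to on every turn, and that is
# rewritten as the conversation evolves.

class VolatileContextMemory(nn.Module):
    def __init__(self, d_model: int = 256, slots: int = 32, heads: int = 4):
        super().__init__()
        # Volatile store: not trained, just overwritten at runtime.
        self.memory = nn.Parameter(torch.zeros(1, slots, d_model),
                                   requires_grad=False)
        self.read = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.write_gate = nn.Linear(d_model, d_model)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) token states from the base LLM.
        # Read: tokens cross-attend to the shared memory, and the result
        # is fed back into the token stream as a residual.
        mem = self.memory.expand(hidden.size(0), -1, -1)
        read_out, _ = self.read(hidden, mem, mem)
        hidden = hidden + read_out
        # Write: fold a pooled view of this turn into one memory slot
        # (FIFO), so the store tracks the evolving conversation.
        update = torch.tanh(self.write_gate(hidden.mean(dim=1, keepdim=True)))
        with torch.no_grad():
            self.memory.copy_(
                torch.cat([self.memory[:, 1:], update[:1].detach()], dim=1))
        return hidden
```

A block like this could in principle sit between layers of the base model, which is roughly the "sitting in the middle somewhere" part of the question.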
Cheers, Techno Panda
(interested in any developing research on the scalability problem or solutions around deeper contextual understanding)
- marksen
Hi marksen,