Azure AI Foundry Blog

Build your dream team with AutoGen

yanivva
Microsoft
Jun 05, 2024

 

The motivation - Is it possible to solve multi-step tasks?

 

Over 100 years ago, Francis Galton ran an experiment: he asked 787 villagers to guess the weight of an ox. None of them guessed the weight exactly, but when he averaged their answers, the result was remarkably close to the true weight, within about 10 pounds. The crowd, taken together, knew more than any single villager.

The same wisdom-of-crowds intuition applies to large language models (LLMs). A multi-agent system deploys multiple LLM-backed agents that interact with each other to achieve complex goals that a single model might not be able to handle alone. Each agent brings its own skills and instructions, and together they form a more capable and comprehensive system: imagine a fleet of purpose-built agents that can execute a variety of complicated tasks.
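
To give a taste of what such a fleet looks like in code, here is a minimal sketch using the AutoGen (pyautogen) Python package: a planner agent, a coder agent, and a user proxy that executes code, coordinated through a group chat. The agent roles, the task, and the Azure OpenAI deployment name, endpoint, and key are illustrative placeholders, not the exact setup used later in this post.

```python
# pip install pyautogen
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager

# Placeholder Azure OpenAI settings; replace with your own deployment details.
llm_config = {
    "config_list": [{
        "model": "gpt-4",  # your deployment name
        "api_type": "azure",
        "base_url": "https://<your-resource>.openai.azure.com/",
        "api_key": "<your-api-key>",
        "api_version": "2024-02-01",
    }]
}

# Each agent gets a narrow, purpose-built role via its system message.
planner = AssistantAgent(
    name="planner",
    system_message="Break the user's task into small, verifiable steps.",
    llm_config=llm_config,
)
coder = AssistantAgent(
    name="coder",
    system_message="Write Python code that implements the planner's steps.",
    llm_config=llm_config,
)

# The user proxy stands in for the human and can run the generated code locally.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# A group chat lets the agents hand work off to each other.
group_chat = GroupChat(agents=[user_proxy, planner, coder], messages=[], max_round=8)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Plot NVDA and TSLA stock price change YTD.")
```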

 

While LLMs demonstrate remarkable capabilities in a variety of applications, such as language generation, understanding, and reasoning, they often struggle to provide accurate answers when faced with complicated, multi-step tasks.

According to the research paper "More Agents Is All You Need", the performance of LLMs scales with the number of agents instantiated, simply via a sampling-and-voting method. This method is orthogonal to existing, more complicated techniques for enhancing LLMs, and the degree of enhancement correlates with the difficulty of the task.
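
Concretely, the method in that paper boils down to sampling and majority voting: ask several independent agent instances the same question and keep the most common answer. Below is a rough sketch of that idea, where ask_agent is a hypothetical stand-in for a single LLM call, not an AutoGen or paper API.

```python
from collections import Counter

def ask_agent(question: str) -> str:
    """Hypothetical single-agent call; in practice this would query an LLM endpoint."""
    raise NotImplementedError

def ensemble_answer(question: str, n_agents: int = 10) -> str:
    """Sampling-and-voting: query n independent agents and return the majority answer."""
    answers = [ask_agent(question) for _ in range(n_agents)]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer
```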
