
Multi-Agent Frameworks for Customer Support



We introduce a robust new approach to managing the competing priorities embedded in prompts for large language model (LLM) agents. Our approach splits the objectives across multiple personas and creates prompting that is complementary and additive among multiple agents, allowing each to follow its own directives while still engaging with other agents and with customers, in effect replicating a human-equivalent team. Dividing the objectives among multiple agents, with inter-linking functions and dependent objectives, provides a transparent and flexible customer service structure that can be altered to suit different circumstances and outcomes. We test our approach on a sample of customer types, from co-operative to disruptive, and find that it not only produces better, more transparent outcomes, but also reduces the cost of running these systems.


Generative AI systems have enormous potential to augment and automate client service engagements across a range of industries. The ability of large language models to engage in a variety of conversations, to guide as well as respond to different topics, and to adapt to different communication styles is a vast improvement on previous generations of chatbots.


One weakness of these systems is balancing the objective of engaging with a customer against other potential actions, such as moving a conversation to a human operator if the customer becomes dissatisfied or abusive. Prompting alone cannot specify the decision to move from one mode of engagement to another while still specifying the different outcomes. Without this strategic overlay, conversations can go on indefinitely without conclusion, raising costs for the supplier and, in worse cases, leading to customer frustration and even anger.


A. Single versus Multiple Agents


The underlying principle is that a well-designed system, which aligns multiple micro-objectives to a larger macro-objective, enables each digital assistant to concentrate on a single task. This strategic segregation of activity ensures that every assistant excels at its particular micro-objective. The following graph illustrates that agents can be arranged either hierarchically or laterally. In a hierarchical setup, one agent must accomplish its task and deliver an output to another agent before that agent can reach its own objective (as depicted on the right side of the graph). In a lateral configuration, two agents must simultaneously achieve their objectives to fulfill a common higher-level sub-objective (as shown on the left side of the graph).
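The two arrangements can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the agent names and the idea of modelling an agent as a simple message-to-message function are assumptions made for clarity.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Agent:
    """One digital assistant with a single micro-objective (illustrative)."""
    name: str
    objective: str
    run: Callable[[str], str]  # input message -> output message


def hierarchical(upstream: Agent, downstream: Agent, message: str) -> str:
    """Hierarchical: the downstream agent can only pursue its objective
    once the upstream agent has completed its task and handed over output."""
    intermediate = upstream.run(message)
    return downstream.run(intermediate)


def lateral(peers: List[Agent], message: str) -> List[str]:
    """Lateral: peer agents work on the same input, and all of their
    outputs are needed to satisfy the shared higher-level sub-objective."""
    return [agent.run(message) for agent in peers]


# Hypothetical agents for demonstration only
triage = Agent("triage", "classify the request", lambda m: "billing|" + m)
resolver = Agent("resolver", "draft a reply", lambda m: "reply(" + m + ")")

hierarchical(triage, resolver, "refund please")   # triage feeds resolver
lateral([triage, resolver], "refund please")      # both act on the same input
```

The same `Agent` type supports both topologies, which is the point: the structure of the team, not the prompt wording, encodes which objective depends on which.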

Multi-agent frameworks can reduce competing priorities by separating them into different agents that monitor and/or engage with each other before engaging with the customer. This separation has several further implications:


a. Increased AI governance and control through greater transparency of objectives and conversations, much more akin to a training manual for a live customer service desk.


b. Better oversight across multiple customer agents: the oversight agents can be reused across multiple customer engagements, and therefore align with the enterprise’s policies and business, and can even be monitored by the risk function. This effectively removes silos between different systems and establishes a ‘governance agent framework’ that de-risks the adoption of Generative AI agents.


c. Easier to adjust to changing needs: the interactions between the agents, and their decision criteria, are transparent and can be readily adjusted. This removes the ambiguity around prompt structuring and prioritization.


d. Reduced sensitivity to prompt structure: the system is less sensitive to prompt crafting and to specific language models’ interpretations of different prompts. This effectively makes the system more robust and future-proof as new models are deployed.


e. Reduced average cost per conversation: by defining criteria for ending or changing a conversation, the system removes excessive communication, creating a more streamlined, and cheaper, system to deploy.
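Points c and e above hinge on making the decision criteria explicit and adjustable. A minimal sketch of what an oversight agent's decision rule could look like is below; the signal names and thresholds are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass


@dataclass
class TurnSignals:
    """Signals an oversight agent could observe after each turn (assumed)."""
    turns: int        # messages exchanged so far
    sentiment: float  # -1.0 (hostile) .. 1.0 (co-operative)
    resolved: bool    # has the customer's issue been addressed?


def next_action(s: TurnSignals, max_turns: int = 12,
                escalate_below: float = -0.5) -> str:
    """Return 'close', 'escalate_to_human', or 'continue'.

    The thresholds are plain parameters, so the enterprise (or its risk
    function) can inspect and adjust them without re-crafting any prompt.
    """
    if s.resolved:
        return "close"              # end early: no excess turns incurred
    if s.sentiment < escalate_below:
        return "escalate_to_human"  # dissatisfied or abusive customer
    if s.turns >= max_turns:
        return "escalate_to_human"  # cap the cost of open-ended conversations
    return "continue"
```

Because the rule is ordinary, inspectable code rather than buried prompt language, it can be audited, versioned, and tuned per deployment, which is what drives both the governance and the cost benefits claimed above.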


For a full copy of the paper, please reach out to me here.
