Latest Research Papers
2024-12-30
arXiv
Distributed Mixture-of-Agents for Edge Inference with Large Language Models
This paper explores a distributed Mixture-of-Agents (MoA) architecture for edge inference with large language models, using decentralized gossip algorithms to enable collaboration among edge devices. It derives conditions under which the device queues remain stable on memory-limited edge devices and demonstrates that certain MoA configurations produce higher-quality responses, as evaluated on the AlpacaEval 2.0 benchmark.
Mixture-of-Agents (MoA) has recently been proposed as a method to enhance the
performance of large language models (LLMs), enabling multiple individual LLMs
to work together for collaborative inference. This collaborative approach
results in improved responses to user prompts compared to relying on a single
LLM. In this paper, we consider such an MoA architecture in a distributed
setting, where LLMs operate on individual edge devices, each uniquely
associated with a user and equipped with its own distributed computing power.
These devices exchange information using decentralized gossip algorithms,
allowing different device nodes to communicate without the supervision of a
centralized server. In the considered setup, each user has their own LLM to
address their prompts. Additionally, the devices gossip either their
own user-specific prompts or augmented prompts to generate more refined answers
to certain queries. User prompts are temporarily stored in the device queues
when their corresponding LLMs are busy. Given the memory limitations of edge
devices, it is crucial to ensure that the average queue sizes in the system
remain bounded. In this paper, we address this by theoretically calculating the
queuing stability conditions for the device queues under reasonable
assumptions, which we validate experimentally as well. Further, we demonstrate
through experiments, leveraging open-source LLMs for the implementation of
distributed MoA, that certain MoA configurations produce higher-quality
responses compared to others, as evaluated on the AlpacaEval 2.0 benchmark. The
implementation is available at:
https://github.com/purbeshmitra/distributed_moa.
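
For a concrete picture of the architecture, the sketch below mimics the gossip-based exchange in Python under simplifying assumptions: the Device class, the generate_draft and aggregate stubs, and the fanout parameter are illustrative placeholders introduced here, not the authors' implementation (see the linked repository for that).

    import random

    class Device:
        """One edge device holding a local LLM (stubbed out here)."""
        def __init__(self, name):
            self.name = name

        def generate_draft(self, prompt):
            # Placeholder for inference with the device's local open-source LLM.
            return f"[{self.name} draft for: {prompt}]"

        def aggregate(self, prompt, drafts):
            # Placeholder for the MoA aggregation step: the prompt owner's LLM
            # refines the peers' drafts into a final answer.
            return f"[{self.name} answer to '{prompt}' from {len(drafts)} peer drafts]"

    def gossip_round(devices, owner, prompt, fanout=2):
        """Owner gossips its prompt to a few random peers; peers reply with drafts."""
        peers = random.sample([d for d in devices if d is not owner], fanout)
        drafts = [p.generate_draft(prompt) for p in peers]
        return owner.aggregate(prompt, drafts)

    devices = [Device(f"edge-{i}") for i in range(4)]
    print(gossip_round(devices, devices[0], "Summarize mixture-of-agents in one line."))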
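
The queuing-stability claim follows the usual intuition that a queue stays bounded only when prompts arrive more slowly than the local LLM can serve them. The paper derives the precise conditions for the gossip/MoA setting; the generic discrete-time single-queue simulation below, with arbitrary assumed rates, only illustrates that intuition.

    import random

    def average_queue_length(arrival_prob, service_prob, steps=100_000, seed=0):
        # Discrete-time single queue: at most one arrival and one departure per slot.
        rng = random.Random(seed)
        queue_len, running_sum = 0, 0
        for _ in range(steps):
            if rng.random() < arrival_prob:                    # a new prompt joins the queue
                queue_len += 1
            if queue_len > 0 and rng.random() < service_prob:  # the LLM finishes one prompt
                queue_len -= 1
            running_sum += queue_len
        return running_sum / steps

    print(average_queue_length(0.3, 0.5))  # arrival < service: small, bounded average
    print(average_queue_length(0.6, 0.5))  # arrival > service: average grows with steps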