2025-01-17
arXiv

Evolving Deeper LLM Thinking

Dale Schuurmans (Google Research), Kuang-Huei Lee, Ian Fischer, Yueh-Hua Wu, Dave Marwood
The paper introduces Mind Evolution, an evolutionary search strategy for scaling inference-time compute in Large Language Models, which outperforms other methods such as Best-of-N and Sequential Revision on natural language planning tasks without requiring a formal solver.
We explore an evolutionary search strategy for scaling inference-time compute in Large Language Models. The proposed approach, Mind Evolution, uses a language model to generate, recombine, and refine candidate responses. It avoids the need to formalize the underlying inference problem whenever a solution evaluator is available. Controlling for inference cost, we find that Mind Evolution significantly outperforms other inference strategies such as Best-of-N and Sequential Revision on natural language planning tasks. On the TravelPlanner and Natural Plan benchmarks, Mind Evolution solves more than 98% of the problem instances using Gemini 1.5 Pro without the use of a formal solver.
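
To make the generate/recombine/refine loop concrete, here is a minimal Python sketch of an LLM-driven evolutionary search under the abstract's assumptions: a population of candidate responses is sampled, scored by a programmatic solution evaluator, and the fittest candidates are recombined and refined by the model. The `call_llm` stub, the `evaluate` scoring function, and all parameters are illustrative assumptions, not the paper's implementation.

```python
"""Minimal sketch of an evolutionary inference-time search loop.
The LLM interface and evaluator below are illustrative stand-ins."""

import random


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., Gemini 1.5 Pro via an API client)."""
    return f"candidate response to: {prompt[:40]}..."


def evaluate(task: str, candidate: str) -> tuple[float, str]:
    """Programmatic solution evaluator returning (score, textual feedback).
    The approach only assumes such an evaluator exists; this stub is random."""
    score = random.random()
    feedback = "constraints satisfied" if score > 0.9 else "a constraint is violated"
    return score, feedback


def evolve(task: str, population_size: int = 8, generations: int = 4) -> str:
    # Initialize a population of independently sampled candidate responses.
    population = [call_llm(f"Solve the task:\n{task}") for _ in range(population_size)]

    for _ in range(generations):
        scored = sorted(
            ((c, *evaluate(task, c)) for c in population),
            key=lambda x: x[1],
            reverse=True,
        )
        if scored[0][1] > 0.99:  # early exit once a candidate passes the evaluator
            return scored[0][0]

        # Keep the fittest half of the population as parents.
        parents = [c for c, _, _ in scored[: population_size // 2]]

        # Produce children by asking the model to recombine two parents and
        # refine them, conditioned on evaluator feedback.
        children = []
        for _ in range(population_size - len(parents)):
            a, b = random.sample(parents, 2)
            _, feedback_a = evaluate(task, a)
            prompt = (
                f"Task:\n{task}\n\nCandidate A:\n{a}\n\nCandidate B:\n{b}\n\n"
                f"Evaluator feedback on A: {feedback_a}\n"
                "Combine the strengths of both candidates and fix the issues."
            )
            children.append(call_llm(prompt))
        population = parents + children

    # Return the best candidate found within the compute budget.
    return max(population, key=lambda c: evaluate(task, c)[0])


if __name__ == "__main__":
    print(evolve("Plan a 3-day trip to Kyoto within a $1500 budget."))
```

Because selection and refinement only consume evaluator scores and feedback, the loop never needs a formal encoding of the planning problem, which is the property the abstract emphasizes.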