DSPy from Stanford NLP applies the AutoResearch pattern to prompts instead of training code. You define a metric (accuracy on a dev set, for example), and DSPy's optimizers search for better prompts, few-shot examples, and instruction phrasings.
Its MIPROv2 optimizer uses Bayesian optimization to search jointly over instructions and few-shot demonstrations. COPRO generates and refines instructions through coordinate ascent. Both follow the same loop: propose a change, measure the metric, keep or discard.
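The propose-measure-keep loop can be sketched in a few lines of plain Python. This is an illustrative coordinate-ascent toy, not DSPy's actual implementation: the `propose` edit operators and `toy_metric` scoring function are hypothetical stand-ins for DSPy's instruction proposers and a real dev-set metric.

```python
def optimize_prompt(seed_prompt, propose, metric, rounds=3):
    """Propose-measure-keep loop: each round proposes variants of the
    current best prompt, scores each on the metric, and keeps any
    candidate that improves the score (greedy ascent)."""
    best, best_score = seed_prompt, metric(seed_prompt)
    for _ in range(rounds):
        for candidate in propose(best):   # propose a change
            score = metric(candidate)     # measure the metric
            if score > best_score:        # keep or discard
                best, best_score = candidate, score
    return best, best_score

def propose(prompt):
    # Hypothetical edit operators: append common instruction phrasings.
    return [prompt + suffix for suffix in
            (" Think step by step.",
             " State your reasoning.",
             " Give a final answer.")]

def toy_metric(prompt):
    # Stand-in for dev-set accuracy: counts instruction cues present.
    p = prompt.lower()
    return sum(cue in p for cue in ("step", "reason", "answer"))

best, score = optimize_prompt("Solve the problem.", propose, toy_metric)
```

Each round here proposes edits to the current best prompt and greedily keeps improvements, which is the coordinate-ascent shape of COPRO; MIPROv2 replaces the greedy proposal step with a Bayesian surrogate over the candidate space, but the outer loop is the same.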
The connection to AutoResearch is direct. AutoResearch searches program space for better training code; DSPy searches it for better prompts. The agent loop is identical; only the search domain differs.