AMD GPU support is early-stage. The autoresearch-everywhere project lists AMD on its roadmap but hasn't shipped a validated path yet. The autokernel project added ROCm support for its kernel-optimization loop, showing the stack works in principle.
If you want to try it yourself, you need a recent ROCm release and a supported GPU (Radeon RX GRE or above). Install PyTorch with the ROCm backend, then point it at the standard AutoResearch repo. The training loop is standard PyTorch, so it runs on any backend that supports torch.compile. Expect rough edges; the community is still ironing out compatibility.
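As a rough sketch, the setup could look like the following. The ROCm wheel index version and the repository URL are assumptions, not confirmed by the project — check the PyTorch install selector and the project's README for the current values:

```shell
# Install a ROCm build of PyTorch (the rocm6.2 wheel index is an assumption;
# pick the index matching your installed ROCm release).
pip install torch --index-url https://download.pytorch.org/whl/rocm6.2

# Sanity-check that the ROCm/HIP backend is visible. On ROCm builds of
# PyTorch, the torch.cuda.* API routes through HIP, so is_available()
# reports AMD GPUs as well.
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"

# Clone the repo and run the standard training loop (URL and entry point
# are hypothetical -- use the project's actual location and instructions).
git clone https://github.com/example/autoresearch.git
cd autoresearch
python train.py
```

Because ROCm builds expose the same `torch.cuda` API surface, code that selects its device with `torch.device("cuda")` typically runs unchanged on AMD hardware.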