I need to be upfront with you about the risks. In March 2026, researchers disclosed CVE-2026-25253, a prompt injection vulnerability that allowed a crafted message to bypass skill permission checks. The team patched it within 48 hours, but it demonstrated that giving an AI agent real-world access creates a genuine attack surface.
Setup complexity is another concern. Without proper configuration, you can accidentally expose your server to the internet or grant the LLM access to sensitive files. I'll cover security hardening step by step when you reach the deployment section. Don't skip that part.
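To make the exposure risk concrete: the single most common misconfiguration is binding the server to `0.0.0.0` (every network interface) instead of `127.0.0.1` (loopback only). The exact config key depends on the tool you're running, so the sketch below is a generic illustration of the principle using Python's standard `socket` module, not the actual server's configuration:

```python
import socket

def is_loopback_only(host: str) -> bool:
    # 0.0.0.0 (or an empty string) accepts connections from any
    # interface, including the public internet. 127.0.0.1 restricts
    # the server to processes on the local machine.
    return host in ("127.0.0.1", "localhost", "::1")

# Bind a throwaway socket to loopback and confirm local-only exposure.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))  # port 0 lets the OS pick a free port
host, _port = s.getsockname()
print(host, is_loopback_only(host))   # 127.0.0.1 True
print(is_loopback_only("0.0.0.0"))    # False: exposed to all interfaces
s.close()
```

Whatever software you deploy, check which address it listens on (`ss -tlnp` on Linux) before assuming it's private.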