I need to be upfront with you about the risks. In March , researchers disclosed CVE--, a prompt injection vulnerability in which a crafted message could bypass the skill permission checks. The team patched it within hours, but it demonstrated the underlying problem: once an AI agent has real-world access (skills, files, network), anything that can get text in front of the model, including content the model merely reads, becomes part of the attack surface.
Setup complexity is another concern. A misconfigured install can bind the server to a public interface or hand the LLM read access to sensitive files (SSH keys, cloud credentials, .env files) that it has no business seeing. I'll cover security hardening step by step in the deployment section. Don't skip that part.
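To make the two misconfiguration risks above concrete, here is a small preflight audit you can run against a deployment config before starting the agent. It is an added example written for this page, not code from the project; the config keys (`listen.host`, `listen.auth_token`, `fs.allow_paths`, `skills.auto_approve`) and the sample configs are assumptions standing in for whatever the real config file uses, so map them to your own settings. The two checks that matter are: a listen address that is not loopback with no authentication in front of it, and an allowed path that escapes the workspace or points at credential material.

```python
"""Preflight audit for an AI-agent deployment config.

Added example, not the project's own tooling. The config shape below is
hypothetical; adapt the key names to the real config file.
"""
import ipaddress
import os
from typing import Dict, List

# Path fragments an LLM-driven agent should essentially never be allowed to read.
SENSITIVE_MARKERS = (".ssh", ".aws", ".gnupg", ".env", "id_rsa", "credentials")


def _is_loopback(host: str) -> bool:
    """True for localhost / 127.x / ::1; False for anything reachable off-box."""
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        # Hostnames other than "localhost" are treated as externally reachable.
        return False


def _inside(path: str, root: str) -> bool:
    """True if path resolves (symlinks and '..' included) to somewhere under root."""
    real_path = os.path.realpath(path)
    real_root = os.path.realpath(root)
    return os.path.commonpath([real_path, real_root]) == real_root


def audit_config(cfg: Dict, workspace: str) -> List[str]:
    """Return a list of human-readable findings; an empty list means no issues found."""
    findings: List[str] = []

    listen = cfg.get("listen", {})
    host = listen.get("host", "127.0.0.1")
    if not _is_loopback(host):
        if not listen.get("auth_token"):
            findings.append(f"listen.host={host} is reachable off-box and no auth_token is set")
        else:
            findings.append(f"listen.host={host} is reachable off-box; make sure a firewall or "
                            "authenticating reverse proxy fronts it")

    for allowed in cfg.get("fs", {}).get("allow_paths", []):
        if not _inside(allowed, workspace):
            findings.append(f"fs.allow_paths entry {allowed!r} escapes workspace {workspace!r}")
        if any(marker in allowed.lower() for marker in SENSITIVE_MARKERS):
            findings.append(f"fs.allow_paths entry {allowed!r} matches a sensitive-file pattern")

    if cfg.get("skills", {}).get("auto_approve", False):
        findings.append("skills.auto_approve=true lets a prompt-injected request run skills "
                        "without a human confirmation step")
    return findings


# Constructed sample configs (hypothetical shape) for demonstration.
WORKSPACE = "/srv/agent/workspace"

WEAK_CONFIG = {
    "listen": {"host": "0.0.0.0", "port": 8080},            # public bind, no token
    "fs": {"allow_paths": [WORKSPACE, "/home/me/.ssh", "/etc"]},
    "skills": {"auto_approve": True},
}

HARDENED_CONFIG = {
    "listen": {"host": "127.0.0.1", "port": 8080, "auth_token": "replace-me"},
    "fs": {"allow_paths": [os.path.join(WORKSPACE, "data")]},
    "skills": {"auto_approve": False},
}

if __name__ == "__main__":
    for name, cfg in (("weak", WEAK_CONFIG), ("hardened", HARDENED_CONFIG)):
        print(f"== {name} ==")
        for finding in audit_config(cfg, WORKSPACE) or ["no findings"]:
            print(" -", finding)
```

Run it against your real config before the first start; the weak sample produces five findings and the hardened one produces none. The path check resolves symlinks and `..` before comparing, because an allowlist entry like `/srv/agent/workspace/../secrets` looks harmless as a string but is not.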