Prompt Injection: Defense in Depth
Practical controls to reduce LLM prompt injection and tool misuse.
The real risk
The model follows instructions you didn't intend, often smuggled in through content it reads: retrieved documents, emails, web pages, or prior tool output.
Defense in depth
- Treat model-generated tool calls as untrusted input
- Validate and constrain every action server-side
- Use allowlists and strict argument schemas
- Log and review all tool invocations
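The controls above can be sketched as a single server-side gate that runs before any tool executes. This is a minimal illustration, not a library API: the tool names, argument schemas, and `gate_tool_call` function are all hypothetical, and a real system would likely use a proper schema validator rather than hand-rolled type checks.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tool-gate")

# Hypothetical allowlist: tool name -> required argument names and types.
# Anything the model proposes that is not listed here is rejected.
ALLOWED_TOOLS = {
    "search_docs": {"query": str, "limit": int},
    "send_summary": {"recipient": str, "text": str},
}

def gate_tool_call(name: str, args: dict) -> bool:
    """Validate a model-proposed tool call server-side before executing it."""
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        log.warning("rejected: unknown tool %r", name)
        return False
    if set(args) != set(schema):
        log.warning("rejected: bad argument set for %r: %r", name, sorted(args))
        return False
    for key, expected in schema.items():
        if not isinstance(args[key], expected):
            log.warning("rejected: %s.%s is not %s", name, key, expected.__name__)
            return False
    log.info("allowed: %s(%r)", name, args)  # every invocation is logged for review
    return True

gate_tool_call("search_docs", {"query": "pricing", "limit": 5})  # allowed
gate_tool_call("delete_user", {"id": 7})                         # rejected: not allowlisted
gate_tool_call("search_docs", {"query": "pricing"})              # rejected: missing "limit"
```

The point of the design is that the model never decides what is executable; the server does. Even a fully injected prompt can only propose calls that the allowlist and schema already permit.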
Keep it boring
The safest LLM feature behaves like a normal API with strict boundaries.