Dec 25, 2025 · 1 min read
Prompt Injection: Defense in Depth
Practical controls to reduce LLM prompt injection and tool misuse.
The real risk
The model follows instructions you didn’t intend — whether they arrive in the user’s message, a retrieved document, or another tool’s output.
Defense in depth
- Treat tool calls as untrusted input
- Validate and constrain actions server-side
- Use allowlists + schemas
- Log and review tool invocations
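The controls above can be sketched as a single server-side gate. This is a minimal illustration, not a complete implementation: the tool names, the schema shape, and the `validate_tool_call` helper are all assumptions made for the example.

```python
import json

# Allowlist + schema: only these tools, only these arguments (illustrative names).
ALLOWED_TOOLS = {
    "get_invoice": {"required": {"invoice_id"}, "optional": set()},
    "send_receipt": {"required": {"invoice_id", "email"}, "optional": set()},
}

def validate_tool_call(raw: str):
    """Parse a model-produced tool call and reject anything off-allowlist."""
    call = json.loads(raw)  # model output is untrusted input: parse it, never eval it
    name, args = call.get("name"), call.get("arguments", {})
    if name not in ALLOWED_TOOLS:
        raise ValueError(f"tool not on allowlist: {name!r}")
    schema = ALLOWED_TOOLS[name]
    missing = schema["required"] - args.keys()
    extra = args.keys() - schema["required"] - schema["optional"]
    if missing or extra:
        raise ValueError(f"bad arguments: missing={missing}, extra={extra}")
    # Log every invocation so it can be reviewed later.
    print(f"tool_call name={name} args={json.dumps(args, sort_keys=True)}")
    return name, args
```

The point is that the model never decides what runs; it only proposes, and the server disposes.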
Keep it boring
The safest LLM feature behaves like a normal API with strict boundaries.