Safety Guardrails (Agent Verification)
Automatically takes your business's safety seriously. To prevent our AI from "hallucinating" or making up facts, offers, or policies, every bot comes equipped with our Verification Guardrails system.
How it Works
When an Automatically AI is about to send a response to your customer, it doesn't just send it blindly. Instead, the response is routed through a secondary "Verification Engine" that fact-checks the draft against the data the AI actually retrieved and the actions it actually performed during that turn.
- Fact Checking: The verifier makes sure the AI hasn't claimed to do something it didn't actually do (like claiming it processed a refund when it didn't).
- Self-Correction: If the AI makes a mistake or hallucinates, the Verification Engine catches it before the customer sees it. The AI is then given a chance to correct its mistake internally.
- Failsafe Escalation: If the AI still cannot produce a factually accurate response after multiple attempts, the Verification Guardrails pause the AI for that specific conversation and immediately escalate it to your Human Inbox (a simplified sketch of this loop appears below).
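To make the flow above concrete, here is a minimal, purely illustrative sketch of a verify-then-correct-then-escalate loop. The names used (verify, handle_turn, escalate_to_human_inbox, MAX_ATTEMPTS) are hypothetical and do not reflect Automatically's internal implementation; the real Verification Engine runs automatically and requires no code from you.

```python
# Conceptual sketch only -- not Automatically's actual API or implementation.
from dataclasses import dataclass

MAX_ATTEMPTS = 3  # assumed retry budget before the failsafe escalation kicks in


@dataclass
class VerificationResult:
    passed: bool
    feedback: str = ""  # what the verifier flagged, fed back to the AI


def verify(response: str, actions_taken: list[str]) -> VerificationResult:
    """Fact-check the draft against the actions actually performed this turn."""
    # Toy rule: the draft may not claim a refund unless one was really processed.
    if "refund" in response.lower() and "process_refund" not in actions_taken:
        return VerificationResult(False, "Claimed a refund that was never processed.")
    return VerificationResult(True)


def escalate_to_human_inbox() -> str:
    """Failsafe: hand the conversation to a human agent."""
    return "I'm connecting you with a member of our team who can help."


def handle_turn(draft_fn, actions_taken: list[str]) -> str:
    """Run the verify / self-correct / escalate loop for one conversation turn."""
    feedback = ""
    for _ in range(MAX_ATTEMPTS):
        draft = draft_fn(feedback)             # AI drafts (or re-drafts) a reply
        result = verify(draft, actions_taken)  # fact-check before anything is sent
        if result.passed:
            return draft                       # safe to send to the customer
        feedback = result.feedback             # give the AI a chance to self-correct
    return escalate_to_human_inbox()           # repeated failures: stop and escalate
```

In this sketch the verifier never blocks a correct reply; it only intercepts drafts that contradict what actually happened during the turn, and after a fixed number of failed corrections the conversation is handed to a person instead of the bot.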
Why it Matters
This ensures the AI never misleads your customers. If the AI says it looked something up, it actually queried your integrations; and if it gets confused, a human agent is brought in to take over rather than putting your company's reputation at risk.