Guardrail usage

Once guardrails are created, they can be integrated across key components of the platform to ensure responsible and controlled AI behavior. The following outlines how guardrails are applied at each stage:

  • Prompt Playground: Guardrails can be applied to prompts during the design and testing phase. This allows users to evaluate how the guardrails influence output, enforce constraints, and mitigate undesirable responses before deploying prompts to production workflows.

  • Recipe: Guardrails configured in the Prompt Playground can be linked to prompts within a Recipe. In this context, guardrails act as enforcement layers that ensure prompts behave consistently and adhere to organizational policies, such as restricting certain topics or enforcing output formatting rules.

  • Copilot: When Recipes are embedded into Copilots, the associated guardrails continue to function in real time. This ensures that end-user interactions are moderated by the same safety, compliance, or domain-specific constraints defined earlier, maintaining integrity and trustworthiness in production environments.
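The lifecycle above, where a single guardrail definition is tested in the Playground, attached to a Recipe, and then enforced on every Copilot turn, can be sketched conceptually. The class and function names below (`Guardrail`, `run_prompt`) are illustrative only and are not the platform's actual API:

```python
class Guardrail:
    """Illustrative reusable constraint, e.g. a banned-topic filter."""

    def __init__(self, name, banned_terms):
        self.name = name
        self.banned_terms = [t.lower() for t in banned_terms]

    def check(self, text):
        """Return (allowed, violations) for a candidate model output."""
        violations = [t for t in self.banned_terms if t in text.lower()]
        return (not violations, violations)


def run_prompt(prompt, generate, guardrails):
    """Recipe-style enforcement: screen every output with the same guardrails."""
    output = generate(prompt)
    for g in guardrails:
        allowed, violations = g.check(output)
        if not allowed:
            # Block or rewrite the response instead of returning it verbatim.
            return f"[blocked by {g.name}: {violations}]"
    return output


# 1. Prompt Playground: evaluate the guardrail against a draft output
#    before anything reaches production.
finance_guard = Guardrail("no-investment-advice", ["buy this stock"])
ok, hits = finance_guard.check("You should buy this stock today.")

# 2. Recipe: the same guardrail object is attached to the prompt, so
#    every run is screened consistently.
safe_output = run_prompt("greet the user", lambda p: "Hello!", [finance_guard])

# 3. Copilot: run_prompt is invoked on each end-user turn, so live
#    interactions pass through the identical checks in real time.
```

The point of the sketch is that the guardrail is defined once and the same object travels through all three stages, which is why behavior stays consistent from testing to production.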