# Set up Workflow Recipe

Workflow recipes can be initiated with either a **Chat node** or a **Webhook node**, depending on the specific use case and the desired interaction flow.

### Chat

This node is designed to initiate and facilitate user interactions, serving as the entry point for conversational engagement.

#### Conversation History

Determine the number of messages to retain in the conversation history. This allows you to augment the prompt context with past conversation, improving the response quality.
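As a rough illustration (the helper name and message shape are hypothetical, not part of the product API), retaining the last N messages for prompt context might look like:

```python
# Hypothetical sketch: keep only the most recent `retain` messages
# before they are added to the prompt context.
def trim_history(messages, retain=5):
    """Return the last `retain` messages; drop everything older."""
    return messages[-retain:] if retain > 0 else []

history = [{"role": "user", "content": f"message {i}"} for i in range(10)]
context = trim_history(history, retain=3)  # only the last three messages remain
```

A larger retention value gives the model more conversational context at the cost of a longer prompt.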

#### Generate follow-up questions

Enable Generate follow-up questions to prompt the system to autonomously generate relevant questions based on the conversation context and the generated answer. To use this option, the Follow-up question generator model must be configured in the [Organization](https://docs.karini.ai/organization).

A sample prompt is provided for this task; however, you can update it as required.

```
Generate follow-up questions based on the given input question and answer. Follow these guidelines:

1. Number of Questions: Include 2-3 short follow-up questions that directly expand on or clarify the main question.
2. Relevance is Key: Ensure these questions are relevant, showing an understanding of the initial question and its broader implications.
3. Emphasis on Follow-Ups: Always aim to provide follow-up questions to foster deeper dialogue, use the given answer as a basis for further exploration.

Reference Question: {question}
Reference Answer: {answer}

Output Format:
Question-1: [provide followup question here]
Question-2: [provide followup question here]
Question-3: [provide followup question here]
```
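Downstream code can recover the individual questions from this output format. The following parser is an illustrative sketch (the function name is hypothetical), assuming the model follows the `Question-N:` format above:

```python
import re

# Hypothetical parser for the "Question-N:" output format shown above.
def parse_followups(model_output):
    """Collect questions from lines formatted as 'Question-1: ...'."""
    pattern = re.compile(r"^Question-\d+:\s*(.+)$", re.MULTILINE)
    return [match.strip() for match in pattern.findall(model_output)]

sample = """Question-1: What is X?
Question-2: How does Y work?

Question-3: Why does Z matter?"""
questions = parse_followups(sample)  # three questions, in order
```

Blank lines between questions are tolerated because matching is done line by line.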

#### Enable Audio Mode

When Audio mode is enabled, the audio option becomes available on Copilots. This allows users to interact through voice queries and receive responses in both text and audio formats, enhancing accessibility and user experience.

You must have a Speech to Text Model Endpoint and a Text to Speech Model Endpoint configured in the [Organization](https://docs.karini.ai/organization).

{% hint style="info" %}
To **integrate** the **Chat** node effectively, follow these configurations based on document handling requirements:

* **With Document Handling**: Establish a connection between the Chat and Processing nodes when document uploading, processing, and querying are essential for interaction.
* **Without Document Handling**: Directly link the Chat node to the Start node if the workflow does not require document-based references, enabling a standard conversational experience.
  {% endhint %}

### Webhook

A **webhook** is a user-defined HTTP callback that allows the system to send automated messages or data to an external URL when an event occurs.

Here are the key elements of a webhook node.

**Label**: Serves as a unique identifier for the webhook, making it easy to reference or manage.

**Webhook URL**: The endpoint to which the webhook sends HTTP requests. It directs the data to the appropriate destination.

**Webhook Token**: Used for authentication to ensure that the request made by the webhook is valid and authorized to access the API.

**Query Method**: Specifies the HTTP method (such as POST) used for the request to the API.

**Webhook Query Headers**: Defines the headers included in the request, often containing metadata like content type, authorization, or other necessary information for the API to process the request.

The webhook request must include the following headers for authentication and content type specification:

```
{
    "Content-Type": "application/json",
    "x-api-token": "token_value"
}
```

**Payload Template:** A predefined structure or format for the data sent with the request. It helps in organizing the information and ensuring that the API receives the correct data structure.

```
{
    "files": [
        {
            "content_type": "application/pdf",
            "file_name": "test1.pdf",
            "file_content": "(base64data)"
        },
        {
            "content_type": "application/pdf",
            "file_name": "test2.pdf",
            "file_path": "s3://bucket/path"
        },
        {
            "content_type": "text/plain",
            "file_name": "test.txt",
            "text": "this is a plain text"
        }
    ],
    "input_message": "",
    "metadata": {}
}
```
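Assembling this payload programmatically is straightforward. The sketch below uses a hypothetical helper (`make_file_entry` is not part of the product API); the field names follow the template above:

```python
import base64
import json

# Hypothetical helper for assembling the payload; field names follow the template above.
def make_file_entry(file_name, content_type, *, data=None, s3_path=None, text=None):
    """Build one entry of the 'files' array; supply exactly one content source."""
    entry = {"content_type": content_type, "file_name": file_name}
    if data is not None:
        entry["file_content"] = base64.b64encode(data).decode("ascii")
    elif s3_path is not None:
        entry["file_path"] = s3_path
    elif text is not None:
        entry["text"] = text
    return entry

payload = {
    "files": [
        make_file_entry("test1.pdf", "application/pdf", data=b"%PDF-1.4 example bytes"),
        make_file_entry("test2.pdf", "application/pdf", s3_path="s3://bucket/path"),
        make_file_entry("test.txt", "text/plain", text="this is a plain text"),
    ],
    "input_message": "",
    "metadata": {},
}
body = json.dumps(payload)  # request body, ready to send
```

Note that each file entry carries exactly one content source: inline Base64 data, an S3 path, or plain text.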

**Generate curl command:**

The **Generate curl command** option enables users to automatically generate a curl command tailored to the specific configuration of their webhook setup within the recipe workflow. Upon clicking the button, the system displays the corresponding curl command, which replicates the HTTP request setup defined in the workflow.

This includes the following components:

* **HTTP Method**: The selected HTTP method (e.g., POST, GET) for initiating the webhook request.
* **Headers**: Automatically includes required headers, such as `Content-Type` and the API token for authentication (if configured).
* **Payload Template**: The request body content is generated based on the defined payload template, which may include file types, file paths, and textual content.
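The same request that the generated curl command performs can also be built from code. The sketch below constructs (but does not send) the HTTP request using Python's standard library; the URL and token values are placeholders you would replace with your own:

```python
import json
import urllib.request

# Illustrative values; substitute the Webhook URL and token from your recipe.
webhook_url = "https://example.com/webhook/my-recipe"
payload = {"files": [], "input_message": "What is in this document?", "metadata": {}}

req = urllib.request.Request(
    webhook_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-api-token": "token_value"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; it is omitted here.
```

This mirrors the three components above: the POST method, the authentication headers, and a body built from the payload template.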

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2Fa0B7cg2vodUQA7zHxgSf%2Fimage.png?alt=media&#x26;token=02dae4e4-1322-4db3-8984-daa3237cc01f" alt=""><figcaption></figcaption></figure>

For webhook recipe invocation and webhook history details, refer to the [webhook recipe](https://karini-ai.gitbook.io/karini-ai-documentation/recipes/workflow-recipe/webhook-recipe) section.

{% hint style="info" %}

#### To ensure optimal functionality of the Webhook node, follow these configuration guidelines:

* **With Document Processing:** Connect the **Webhook** node to the **Processing** node when documents must be uploaded, processed, or queried before transmitting data externally.
* **Without Document Processing:** If no document handling is required, connect the **Webhook** node directly to the **Start** node.
  {% endhint %}

### Start

The **Start node** serves as the entry point of the workflow, initiating the flow of tasks by connecting to various functional nodes based on specific process requirements.

It can link to the **Knowledge base node** for information retrieval, the **Router node** for directing execution based on logic, the **Prompt node** for generating responses, the **Custom function node** for executing predefined tasks, the **Agent node** for intelligent automation, and the **Transform node** for enabling parallel processing. This flexibility allows workflows to be dynamically structured according to operational needs, ensuring efficient execution and automation.

### Knowledge Base

A **Knowledge base** is a structured, vector-embedding-based repository designed to store, organize, and manage information, enabling applications to retrieve relevant data efficiently.

The system supports the following two types of knowledge bases:

#### **Native Knowledge Base**

Choose a dataset from the available dataset list. After selecting the dataset, the relevant prompt contexts can be configured to retrieve information from the vector store. For detailed guidance, refer to [Context Generation using Vector Search](https://docs.karini.ai/recipes/qna-recipe/create-recipe#context-generation-using-vector-search).

#### **Bedrock Knowledge Base**

The Bedrock Knowledge Base refers to an AI-powered retrieval system offered by Amazon Bedrock, a service by AWS (Amazon Web Services). It allows users to integrate enterprise knowledge bases with AI-powered applications, enabling natural language queries on stored data. For more details, refer to [Amazon Bedrock Knowledge Bases](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html).

The configuration fields include:

1. **Knowledge Base ID:** Enter the ID of the knowledge base.
2. **Filter Key**: Specifies the criteria for filtering results.
3. **Filter Value**: Defines the specific value to filter by.
4. **Number of Results**: Specifies how many responses should be retrieved.
5. **Overwrite Credentials Option**: By default, the AWS credentials configured in [Organization](https://docs.karini.ai/organization) settings will be used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.

#### **Document Reader**

**Document Reader** is designed to facilitate the extraction of information from various document formats, including PDFs, Word files, and other textual formats. This feature integrates seamlessly with the knowledge base, enabling efficient document processing, data extraction, and integration into workflows.

1. **Configuration Settings:**
   1. **Connector Type:**\
      The **Connector Type** setting defines the method of integrating external data sources. The following connector options are available:
      * **Amazon S3**: Enables the system to retrieve and process documents stored in Amazon S3 buckets.
        * **Overwrite Credentials**: By default, the AWS credentials configured in [Organization](https://docs.karini.ai/organization) settings will be used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.
      * **Presigned URL**: Supports data retrieval through presigned URLs, which provide temporary access to files.
      * **Email**: Facilitates the extraction of documents directly from email attachments.
2. **Preprocessing options:**

Karini AI recipes support the following preprocessing options:

1. **Enable Transcriptions:** Enables automatic transcription, facilitating the conversion of audio or speech-based files into text.
   1. **Default:** Using the Default method, the OpenAI Whisper model is employed; it must be selected as the Speech-to-Text Model Endpoint for transcription tasks on the [Organization](https://docs.karini.ai/organization) page.
   2. **Amazon Transcribe**: Amazon Transcribe is an automatic speech recognition service that uses machine learning models to convert audio to text.
2. **OCR Options:** This option provides various methods for extracting text from documents and images:
   1. **Unstructured IO with Extract Images**: This method is used for extracting images from unstructured data sources. It processes unstructured documents, identifying and extracting images that can be further analyzed or used in different applications.
   2. **PyMuPDF with Fallback to Amazon Textract:** This approach utilizes PyMuPDF to extract text and images from PDF documents. If PyMuPDF fails or is insufficient, the process falls back to Amazon Textract, ensuring a comprehensive extraction by leveraging Amazon's advanced OCR capabilities.
   3. **Amazon Textract**: A cloud-based service that identifies and extracts text, structured data, and elements like tables and forms from documents.
      1. **Extract Layouts**: This option helps recognize the **structural layout** of a document, such as headings, paragraphs, and columns. Useful for document formatting retention.
      2. **Extract Tables**: This option allows **structured table extraction**, preserving row and column relationships. Useful for processing invoices, reports, and tabular data.
   4. **Tesseract**: An open-source OCR engine for extracting text from images and PDFs.
   5. **VLM:** A specialized method for processing and extracting text from images or documents using visual language models. You must have a VLM configured in the [Organization](https://docs.karini.ai/organization).

      1. The **VLM Prompt** provides a predefined instruction set guiding the AI model on **how to analyze the image** and **what details to extract**.
      2. It instructs the system to **analyze an image in-depth**, extracting all visible text while preserving structure and order. Additionally, it provides a **detailed description** of diagrams, graphs, or scenes, explaining components, relationships, and inferred meanings to ensure a **comprehensive textual representation** of the image.

      The **VLM prompt** is defined as follows:

```
Instruction: Please analyze the attached image thoroughly and provide a detailed textual report that includes the following:

1. Extracted Text:

   - Extract all text present in the image.
   - This includes any visible text in labels, signs, titles, annotations, captions, legends, or embedded within diagrams, flowcharts, graphs, or any other elements.
   - Preserve the order and structure as it appears in the image.

2. Detailed Description:

   - Describe everything that is happening or depicted in the image.
   - For flowcharts or diagrams:
     - Explain each component, including shapes (e.g., rectangles, diamonds), connectors (e.g., arrows, lines), and how they relate to each other.
     - Describe the process flow, decision points, inputs, outputs, and any loops or cycles.
   - For graphs or charts:
     - Identify the type of graph (e.g., bar chart, line graph, pie chart).
     - Describe the axes, labels, data points, trends, and any significant patterns or anomalies.
   - For scenes or images with objects:
     - Describe all objects, people, animals, and elements present.
     - Include details about their appearance, positions, actions, expressions, and interactions.
   - Mention any colors, shapes, and sizes that are relevant to understanding the image.
   - Describe the background and setting to provide context.

3. Interpretation and Contextual Information:

   - Provide any inferred meanings, implications, or conclusions that can be drawn from the image.
   - Explain the purpose or function of the diagram, flowchart, or scene if apparent.
   - If the image represents a concept, process, or data, elaborate on what it signifies.

Formatting Guidelines:

- Organize your response into clear sections with headings: "Extracted Text," "Detailed Description," and "Interpretation."
- Use bullet points or numbered lists where appropriate for clarity.
- Ensure that the description is comprehensive and allows someone to fully understand the image without seeing it.
```

**PII Masking Options**: To mask Personally Identifiable Information (PII) within your dataset, enable the PII Masking option. You can specify the entities to be masked by selecting from the available list, ensuring secure data preprocessing.

For more details, refer to [PII entities](https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html).
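As a simplified illustration of what masking produces, the sketch below replaces matched entities with placeholders. The regex patterns and the `mask_pii` helper are hypothetical; the platform's actual detection is entity-based, not regex-based:

```python
import re

# Illustrative patterns only; the platform's PII detection is entity-based,
# not regex-based. These two patterns stand in for the selectable entity list.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def mask_pii(text, entities=("EMAIL", "PHONE")):
    """Replace each selected entity match with a [ENTITY] placeholder."""
    for name in entities:
        text = PII_PATTERNS[name].sub(f"[{name}]", text)
    return text

masked = mask_pii("Mail jane@example.com or call 555-123-4567")
```

Selecting a subset of entities in the UI corresponds to passing a smaller `entities` tuple here: only the chosen categories are masked.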

The following **state flags** need to be configured based on the use case.

#### State settings

State settings **control data access** within the workflow, ensuring that nodes can retrieve and process relevant information.

1. **Messages:** Accesses the conversation history for processing, with options for full, last, or specific node messages.
   1. **All Messages:** Accesses the entire conversation history, allowing the node to consider all previous interactions for context.
   2. **Last Message**: Accesses only the most recent message in the conversation, useful for nodes that need to respond to the latest input.
   3. **Node Message**: Accesses messages from a specific node in the workflow, ideal for retrieving targeted information shared by a particular node.
2. **Scratchpad**: The Scratchpad acts as a shared, temporary storage space (in JSON/dictionary format) accessible by all nodes in the workflow. Nodes can read from or write to this space as needed. This facilitates intermediate data retention and seamless data transfer between nodes without long-term persistence.
   1. **Input**: Allows nodes to read data from the Scratchpad state by specifying a valid JSON path. The node retrieves data stored in the Scratchpad state for further processing.
   2. **Output:** Allows nodes to write data to the Scratchpad state by specifying a valid JSON path. You can choose to write the output of a node into the Scratchpad for future steps. Additional options include:
      * **Extract JSON**: Attempts to extract the first valid JSON object from the node's output and store it in the Scratchpad state.
      * **Overwrite**: Replaces the existing value at the specified JSON path with the new value from the node's output. (If disabled, the system will try appending the value, or persist the older value if appending is not possible.)
3. **Enabling Semantic Cache**\
   In the agent configuration interface, the Semantic Cache option enhances the system's ability to deliver more accurate and consistent responses. When enabled, this feature allows the agent to automatically save and reuse responses based on similar inputs.

   The semantic cache works by performing a three-step hierarchical lookup:

   1. **Prompt Similarity**: The system first checks if there are any past inputs with similar prompts.
   2. **Conversation History**: If no match is found at the prompt level, the system then considers the broader conversation history.
   3. **Query Matching**: Finally, the system analyzes the input query to find the closest matches based on prior interactions.

   **Semantic Cache Threshold:** The threshold setting, represented by a slider, allows the user to fine-tune the level of similarity required for reusing cached responses. A higher threshold (e.g., closer to 1.0) implies a stricter match, while a lower threshold allows for more flexibility in reusing responses.

   **Traces**: Once enabled, the traces will capture detailed logs of all semantic cache activity, including which cached responses were used, the feedback provided by users (if applicable), and any adjustments made by the system in response to this feedback. This makes troubleshooting and performance monitoring easier, as users can track the flow of responses and how cached data is utilized.

   **Search Cache** and **Save Cache** actions play a critical role in optimizing system performance by leveraging cached data to reduce query processing times and enhance overall efficiency.

   1. **Search Cache:** This action retrieves data from the existing cache, thereby minimizing query execution time. By utilizing stored results, it prevents redundant searches or recalculations, contributing to improved system performance.
   2. **Save Cache:** This action stores newly retrieved or computed data into the cache for future use. It ensures faster access to the same data in subsequent queries, thereby improving response times.
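The threshold behavior can be sketched in a few lines. This is a hypothetical, simplified lookup (single-level cosine similarity over embeddings, with made-up function names); the platform's actual three-step hierarchical lookup is more involved:

```python
import math

# Hypothetical sketch of a threshold-based cache lookup.
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search_cache(query_vec, cache, threshold=0.9):
    """Return the best-matching cached response if its similarity meets the threshold."""
    if not cache:
        return None
    best = max(cache, key=lambda entry: cosine(query_vec, entry["embedding"]))
    return best["response"] if cosine(query_vec, best["embedding"]) >= threshold else None

cache = [
    {"embedding": [1.0, 0.0], "response": "cached answer A"},
    {"embedding": [0.0, 1.0], "response": "cached answer B"},
]
hit = search_cache([0.95, 0.05], cache, threshold=0.9)   # close to A: cache hit
miss = search_cache([0.7, 0.7], cache, threshold=0.99)   # too dissimilar: recompute
```

Raising the threshold toward 1.0 makes `search_cache` return `None` more often, forcing fresh generation; lowering it reuses cached responses more liberally.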

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FnhOgFOuIZSZ2HrqkVlK6%2Fimage.png?alt=media&#x26;token=5395a404-c87a-4804-ad22-48416673bf7d" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
Note: For **prompt/agent nodes**, the {**scratchpad**} variable is used to reference and insert data from the Scratchpad into the prompt body.\
The **Knowledge Base** can be connected to various functional nodes based on workflow requirements. It can be linked to the **Router node, Prompt node, Custom Function node, Agent node, End node, or Transform node**.
{% endhint %}
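The Scratchpad read/write semantics described above can be sketched as follows. The helper names and the dotted-path syntax are hypothetical illustrations, not the platform's actual JSON-path implementation:

```python
# Hypothetical sketch of Scratchpad semantics: a shared dict that nodes
# read from and write to via a dotted path like "results.summary".
def scratchpad_write(scratchpad, json_path, value, overwrite=True):
    """Write `value` at the path; without overwrite, append if possible, else keep the old value."""
    *parents, leaf = json_path.split(".")
    node = scratchpad
    for key in parents:
        node = node.setdefault(key, {})
    if overwrite or leaf not in node:
        node[leaf] = value
    elif isinstance(node[leaf], list):
        node[leaf].append(value)   # append when the existing value is a list
    # otherwise the older value persists

def scratchpad_read(scratchpad, json_path):
    """Retrieve the value stored at the dotted path."""
    node = scratchpad
    for key in json_path.split("."):
        node = node[key]
    return node

pad = {}
scratchpad_write(pad, "results.summary", "draft-1")
scratchpad_write(pad, "results.summary", "draft-2", overwrite=False)  # older value kept
```

With overwrite disabled, the second write leaves `"draft-1"` in place, mirroring the Overwrite option described above.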

### Reranker

The **Reranker** node is used to improve the relevance of search results returned from one or more connected **Knowledge Base** nodes. It takes the candidate results from upstream knowledge bases and reorders (and optionally filters) them based on a relevancy score produced by a reranking model. The re-ranked results are then passed to the next node in the recipe.

#### **Messages**

Accesses the conversation history for processing, with options for full, last, or specific node messages.

1. **All Messages**: Accesses the entire conversation history, allowing the node to consider all previous interactions for context.
2. **Last Message**: Accesses only the most recent message in the conversation, useful for nodes that need to respond to the latest input.
3. **Node Message**: Accesses messages from a specific node in the workflow, ideal for retrieving targeted information shared by a particular node.

#### **Reranker Query**

This enables the user to select the **Node** and the **Message**.

The configuration allows:

* **Node**: Choose the specific node.
* **Message**: Choose which message from that node to use (for example, Last).

1. **Enable Reranker:** By default, it’s enabled when configuring the node.
2. **Top-N**
   * Specifies the maximum number of top-ranking vectors (results) to return after reranking.
   * This value must be less than or equal to the top\_k parameter used when querying the knowledge base.
3. **Reranker Threshold**

   * Defines the minimum relevancy score a result must meet to be included in the final list.
   * The model selects up to Top-N results whose score is greater than or equal to this threshold.
   * A higher threshold yields fewer but more relevant results; a lower threshold is more permissive and can return a larger set of results.

   By configuring the Reranker node (selecting the appropriate message context, query source, Top-N value, and relevancy threshold), you ensure that downstream nodes receive only the most relevant knowledge base results, significantly improving the quality of responses generated by the recipe.
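The Top-N and threshold behavior can be sketched concisely. This is an illustrative helper (the `rerank` function and its score format are hypothetical), assuming each candidate carries a relevancy score from the reranking model:

```python
# Hypothetical sketch of the Top-N / threshold behavior described above.
def rerank(scored_results, top_n=3, threshold=0.5):
    """Keep results whose relevancy score meets the threshold, best first, at most top_n."""
    ranked = sorted(scored_results, key=lambda pair: pair[1], reverse=True)
    return [doc for doc, score in ranked if score >= threshold][:top_n]

candidates = [("chunk-a", 0.20), ("chunk-b", 0.90), ("chunk-c", 0.60), ("chunk-d", 0.55)]
top = rerank(candidates, top_n=2, threshold=0.5)  # chunk-b and chunk-c survive
```

Note how the threshold filters first and Top-N caps the count: a stricter threshold can return fewer than `top_n` results.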

{% hint style="info" %}
Note: To use the Reranker node, a reranker model must be configured in the Organization settings.
{% endhint %}

### Connector

The Connector node functions as an interface facilitating data exchange between the system and external storage solutions.

There are two available connector types, as listed below.

* **Amazon S3** : Enables integration with Amazon Simple Storage Service (S3) for retrieving or storing data.
  * **Overwrite Credentials**: By default, the AWS credentials configured in [Organization](https://docs.karini.ai/organization) settings will be used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.
* **Presigned URL**: Allows access to data stored at a presigned URL, providing temporary access to files hosted on cloud storage systems.
* **In Memory Base64 Data** : Supports handling Base64-encoded data stored in memory for temporary or intermediate processing.
* **Box**: Facilitates integration with **Box**, a cloud storage service, allowing seamless interaction with documents and files stored in Box.
* **Email**: Allows the system to process and extract documents directly from **email attachments**. This is particularly useful when you need to work with incoming email data in workflows.
  * **Email Count:** Defines the number of emails to process at once.
  * **Date:** Specifies that emails will only be pulled if they were received after the provided date.
  * **Credentials**
    * **Overwrite Credentials**: Enables entering **specific IMAP server credentials**.
      * **IMAP Server**: The email server address (e.g., imap.gmail.com).
      * **Email**: The email address for access.
      * **Email Password**: The password for authentication.

{% hint style="info" %}
The **Connector** node can be linked to either the **Start** node or the **Processing** node, depending on the workflow requirements.
{% endhint %}

### Processing&#x20;

The Processing node is employed in the workflow recipe to enable file uploads to Copilot for querying. It streamlines the processing of uploaded files by providing configurable options designed to support specific data extraction and privacy requirements.

The Processing options are detailed in the [Document reader](#document-reader) section; refer to it for comprehensive information.

{% hint style="info" %}
Connect the **Processing** node to the **Start** node to initiate workflow execution.
{% endhint %}

### Router

Router directs the workflow to the next node based on conditions or input, enabling dynamic branching paths within the workflow recipe. This ensures that the workflow can adapt based on the given data or context, enhancing flexibility and decision-making in the process.

Node Methods: The Router supports three node methods for decision-making.

* **Default**: This method follows the standard routing logic, processing data without additional customization. It adheres to a predefined flow, ensuring consistency and simplicity for straightforward workflows that don't require dynamic decision-making.

Using the **Default** method, routing conditions can be assigned to **edges** to determine the appropriate node for processing the request.

<figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc05Qzwb5QthvK7humAYJZFmP7stvQQgzArY3NaveWEVUeRpBXqBCx0UB02DSfeuwmwnsuEaKBaIOyPU2DW0o4r-74ZInEpp5I-323jc_C_Bih2L6T2MPufoxQ2rPBtEKANQWgj?key=sd7ID78MwYC7TUnxCWmg1dC8" alt=""><figcaption></figcaption></figure>

There are two available options:

* **Default Routing**: Applies when no specific conditions are met.
* **Custom Routing**: Lets you define explicit conditions for each edge in the provided text box.

{% hint style="info" %}
The Default Routing Condition can be assigned to only one edge within the routing configuration.
{% endhint %}

* **Prompt**:
  * The Prompt method enables the selection of a predefined prompt for the Router node from the available prompt list.
  * The selected prompt contains instructions that guide the router on how to direct the workflow.
  * The router will evaluate the input and route the workflow accordingly based on the prompt’s logic.
  * Here is the provided sample prompt:

````
You will receive a set of input messages containing a task. Your role is to analyze the latest (last) input message and provide the correct routing output based on that task. Use previous messages in the conversation as context only if necessary, but prioritize the latest (last) message for decision-making. Follow these instructions carefully:

1. If the latest input **starts with coding:**, respond with: ```{{"next": "Policy Agent"}}```
2. If the latest input **starts with sales data:**, respond with: ```{{"next": "Enterprise Sales Agent"}}```
3. If the latest input **starts with QNA:**, respond with: ```{{"next": "Bot QnA"}}```


Provide only the JSON output based on the identified task. Respond only with the route options; don't say anything else.
````
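On the receiving side, the router's reply must be parsed before the workflow can branch. The sketch below is a hypothetical illustration (the `pick_route` helper and its fallback behavior are not part of the product API), assuming the model answers in the `{"next": "..."}` format the sample prompt requests:

```python
import json

# Hypothetical sketch: parse {"next": "..."} from the router model's reply,
# falling back to a default edge when the reply is not valid JSON.
def pick_route(model_output, default="Default Route"):
    """Return the routing target named in the model's JSON reply."""
    try:
        route = json.loads(model_output.strip().strip("`"))
        return route.get("next", default)
    except (json.JSONDecodeError, AttributeError, TypeError):
        return default

chosen = pick_route('{"next": "Bot QnA"}')        # routes to the Bot QnA node
fallback = pick_route("unparseable model output")  # falls back to the default edge
```

Stripping stray backticks tolerates models that wrap their JSON answer in a code fence, and the fallback mirrors the Default Routing edge described above.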

{% hint style="info" %}
**Only published prompts, along with current versions and associated models, are visible within the recipe.**
{% endhint %}

* **Lambda**: The Lambda method integrates AWS Lambda functions to execute custom logic.
  * **Lambda ARN**: Enter the Amazon Resource Name (ARN) of the Lambda function you want to invoke. This uniquely identifies the Lambda function within AWS.
  * **Input test payload**: A sample payload that will be used to test the Lambda function. This helps ensure the function behaves as expected with the provided input.
  * **Test button**: Enables you to validate the function by executing the test payload.
  * **Overwrite credentials option**: By default, the AWS credentials configured in [Organization](https://docs.karini.ai/organization) settings will be used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.

#### **Invoke Retries**

Specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

#### **State settings**

**Document Cache**: Provides access to shared document information across nodes.

1. **Retrieve Documents**: Fetches the entire document based on a specified filename or file path. Useful for accessing full document content from the knowledge base.
2. **Retrieve Chunks**: Fetches specific document chunks based on semantic similarity, ideal for retrieving only relevant parts of a document related to the query.
3. **Ephemeral**: Passes context as raw text directly with the query without storing it in the knowledge base.

#### **Metadata**:&#x20;

Provides access to webhook metadata from connected APIs, enabling external data flow.

#### **Enabling Semantic Cache**&#x20;

The Router node follows the baseline Semantic Cache configuration as documented in the [Knowledge base node](#knowledge-base).\
**Additional Settings**

* **Use User Feedback:** This option allows the agent to incorporate user-provided feedback (if available) to improve response relevance. Depending on the feedback intent, the system may modify the cached responses or discard the current cache in favor of new data.

To view more details about **Messages** and **Scratchpad**, refer to the preceding [state settings](#state-settings) section.

{% hint style="info" %}
Connect the Router node to Knowledge Base, Agent, Prompt, Custom Function or Transform nodes based on the specific use case.
{% endhint %}

### Prompt

The Prompt node enables the system to execute logic-driven actions based on predefined prompts. You can select a prompt from the existing prompts in the Prompt Playground to add to the recipe.

Once a prompt is selected, the system displays the **associated primary and fallback models**, along with **guardrails** attached if they were configured in the Prompt Playground.

Additionally, you can navigate to the Prompt Playground by clicking the **redirect** icon, allowing you to view the prompt details and make necessary modifications.

The following image illustrates the **version** and **redirect** icon.

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FOYSqJMLBW689cxM7CrQG%2Fimage.png?alt=media&#x26;token=0f68538f-23d3-4c14-b28b-8752bc2cc7c0" alt=""><figcaption></figcaption></figure>

To switch to a **specific version**, click on the **displayed version**. This will generate a list of all associated versions. Select the required version, and the system will load the complete prompt details corresponding to the selected version.

The following image displays the associated versions.

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2F3gYV8FR9NY6ENbetV8rN%2Fimage.png?alt=media&#x26;token=916223e0-7aae-4ed4-89de-b2f0f9741f55" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
**Only published prompts are visible within the recipe.**
{% endhint %}

Refer to the sample prompt below:

```
Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

{context}

{question}
```

The **Guardrail** option is available on this tile. You can choose from existing guardrails in the **Prompt Playground**, which will be reflected in the recipe upon prompt selection. Alternatively, you may enable the **default guardrail** configured at the organizational level.

**Invoke Retries** specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

The [**State settings**](#state-settings) are detailed in the preceding section; refer to it for comprehensive information.

#### Metadata:

Metadata refers to supplementary information that provides context, details, or additional insights about a specific process, event, or interaction. Within the framework of prompts, metadata enables the dynamic exchange of external data, facilitating more informed and context-aware decision-making.

{% hint style="info" %}
For metadata to be utilized, the prompt must explicitly reference the {metadata} variable within its body.
{% endhint %}

Metadata includes user-specific data, enabling the agent to personalize interactions and enhance user experience. Metadata can be reviewed in the copilot history.

Ensure that the Scratchpad is enabled and that the prompt includes the {scratchpad} variable.

#### Online Evaluation:

**Online Evaluation** provides real-time quality assessment of LLM-generated responses during runtime. It evaluates responses against multiple metrics to ensure quality, relevance, and adherence to guidelines without requiring separate evaluation datasets.\
&#x20;This evaluation uses the **NLA (Natural Language Assistant)** model, which is configured on the [organization](https://karini-ai.gitbook.io/karini-ai-documentation/organization) page, to analyze and score responses based on criteria such as relevancy, accuracy, and adherence to guidelines.

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FPZElSTOQRrLd7JtPi77U%2Funknown.png?alt=media&#x26;token=683b50d4-a749-4eae-b426-0f83d962af83" alt=""><figcaption></figcaption></figure>

**Evaluation Metrics:**\
The system evaluates responses across the following key metrics:

<table data-header-hidden><thead><tr><th></th><th width="500.199951171875"></th><th></th></tr></thead><tbody><tr><td><strong>Metric</strong></td><td><strong>Description</strong></td><td><strong>Score Range</strong></td></tr><tr><td><strong>Answer Relevancy</strong></td><td>Measures how relevant the response is to the user's query</td><td>0-5</td></tr><tr><td><strong>Answer Faithfulness</strong></td><td>Evaluates if the response is faithful to the provided context, checking for hallucinations or errors</td><td>0-5</td></tr><tr><td><strong>Context Sufficiency</strong></td><td>Assesses whether the provided context is sufficient to answer the query, and identifies any missing info</td><td>0-5</td></tr><tr><td><strong>Guideline Adherence</strong></td><td>Checks if the response follows specified guidelines and complies with custom rules</td><td>0-5</td></tr></tbody></table>
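
As a mental model, each response receives a score per metric on the 0-5 scale above, along with reasoning. The record shape below is hypothetical; the actual payload shown in Copilot History may differ:

```python
# Hypothetical shape of a per-response evaluation record; the real
# payload in Copilot History may differ. Scores fall in the 0-5 range.
evaluation = {
    "answer_relevancy": {"score": 4, "reasoning": "Directly addresses the query."},
    "answer_faithfulness": {"score": 5, "reasoning": "No claims outside the context."},
    "context_sufficiency": {"score": 3, "reasoning": "Context lacks pricing details."},
    "guideline_adherence": {"score": 5, "reasoning": "Follows tone and format rules."},
}

# Sanity check: every metric score stays within the documented 0-5 range.
assert all(0 <= m["score"] <= 5 for m in evaluation.values())
overall = sum(m["score"] for m in evaluation.values()) / len(evaluation)
print(f"average score: {overall}")  # average score: 4.25
```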

**Viewing Evaluation Results:**

Upon execution, evaluation results are made available in **Copilot History**. To access these results, follow the steps below:

1. Navigate to **Copilot**.
2. Select the **Action** button for the Copilot you want to view.
3. Click on **View History**.
4. The **View** and **View Traces** buttons will become available.
5. Evaluations can be reviewed through both the **View** button and the **View Traces** button for further insights.

**View:**

Now you will be able to review the evaluation results and reasoning for each response. The results will include scores for each evaluation metric, along with a detailed reasoning summary for each assessed response.

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FsnDrbEOIq39mwSJb4Zwj%2Funknown.png?alt=media&#x26;token=88df6e9a-3b6e-49a8-93a5-578f35a10072" alt=""><figcaption></figcaption></figure>

**View Traces:**\
Evaluation results are also accessible within traces, where they are incorporated into the overall execution context. This integration allows users to analyze the evaluation results in conjunction with the broader execution flow and performance data, offering a comprehensive view of how the evaluation contributes to the overall system performance.

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FyjIJ7EkbiYCgIzIsomhH%2Funknown.png?alt=media&#x26;token=6705fe4d-b741-4c96-8b71-d2832a4d4c41" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
Connect the **Prompt** node to the **End** node in the recipe.
{% endhint %}

### Agent

Select an agent prompt from the available options. Each prompt encompasses pre-configured tools and settings essential for processing inquiries and responding effectively.&#x20;

Once you've selected the prompt, the canvas will reveal the tools and configurations integrated into the agent prompt. You can update the configurations for the preset tools, but you cannot add or delete an agent tool from the recipe canvas. To edit tool types, or to add or delete tools, edit the[ agent prompt](https://docs.karini.ai/prompt-management/agentic-prompts) in the prompt playground. These tools empower the agent to analyze queries thoroughly and generate precise responses.

Once an **agent** is selected, the system displays the **associated primary and fallback models along with the current version.** Additionally, you can navigate to the Prompt Playground by clicking the **redirect** icon to view the agent details and make necessary modifications.

The image below shows the recipe using the **Agent node**, including its version and the redirect icon for **Agent prompts.**

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FC4wpAjxM85DitErCPWgi%2Fimage.png?alt=media&#x26;token=8dcd6fcb-18e9-42e3-8a8e-98adb4f0fb75" alt=""><figcaption></figcaption></figure>

To switch to **a specific version**, click on the **displayed version**. This will display a list of all associated versions. Select the required version, and the system will load the complete agent details corresponding to the selected version.

The following image displays the associated versions.

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2F05FF6OfEkqse9qsSSHxr%2Fimage.png?alt=media&#x26;token=79719345-dda1-4631-a2a1-e6d1cb69e8af" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
**Only published agent prompts are visible within the recipe.**
{% endhint %}

Refer to the sample agent prompt below.

```
You are a vehicle guide expert specializing in automotive manuals and answering queries based on the provided context. 

The context includes details on Keys, Doors, and Windows; Seats and Restraints; Storage; Instruments and Controls; Lighting; Infotainment System; Climate Controls; Driving and Operating; Vehicle Care; Service and Maintenance; Technical Data; Customer Information; Reporting Safety Defects; OnStar; Connected Services; 

 Only use the tools to answer the user's query. 

Your primary role is to assist users with vehicle-related inquiries, provide accurate information from the context, and send email notifications if requested.

Key Responsibilities:
{key_responsibilities}

Important Instructions:
{important_rules}

Guidelines for User Interactions:
{interaction_guidelines}

```

**Invoke Retries** specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

The [**State settings**](#state-settings) are detailed in the preceding section; refer to it for comprehensive information.

For a comprehensive overview of the online evaluation process, please refer to the[ online evaluation metrics](#online-evaluation) outlined in the section above.

#### Metadata:

Metadata refers to supplementary information that provides context, details, or additional insights about a specific process, event, or interaction. Within the framework of agents, metadata enables the dynamic exchange of external data, facilitating more informed and context-aware decision-making. &#x20;

{% hint style="info" %}
For metadata to be utilized, the agent must explicitly reference the {metadata} variable within its body.
{% endhint %}

Metadata includes user-specific data, enabling the agent to personalize interactions and enhance user experience. Metadata can be reviewed in the copilot history.

Ensure that the scratchpad is enabled and that the agent prompt includes Scratchpad as a variable.

Refer [Online evaluation ](#online-evaluation)for comprehensive information.

{% hint style="info" %}
Connect the **Agent** node to the **End** node in the recipe.
{% endhint %}

### End

Marks the conclusion of a workflow, signaling that no further actions are required.

### Sink

Integrating the Sink node into a workflow facilitates the secure storage, export, or transmission of processed data from upstream nodes to a designated destination. This ensures that the final output is efficiently managed, preserved, and made available for further use or analysis. The Sink node is designed to seamlessly store structured or unstructured data in cloud storage solutions such as Amazon S3 or a database.

There are four output types:

* **Connector**

  Sends the output to an external service through a configured connector, enabling seamless integration with systems outside the platform. This is particularly useful for storing, forwarding, or processing data externally.&#x20;

  When using a connector such as **Amazon S3**, the following configuration details are required:

  * **S3 Bucket Path**: Provide the target Amazon S3 bucket path where the data will be stored.
  * **File Name Pattern**: Allows you to define a structured naming pattern for saved files.

{% hint style="success" %}
Filename Pattern Guide:

Define custom filenames for saved files using dynamic placeholders. Use the following placeholders to define your filename:\
1 - {filename\_prefix} → File name prefix.\
2 - {current\_datetime} → The timestamp when the file is saved.\
3 - {metadata.KEY} → Metadata value (replace KEY with an actual key)

Examples:

* "processed/{filename\_prefix}{metadata.project}{current\_datetime}.json"
* "processed/{filename\_prefix}{current\_datetime}.json"
  {% endhint %}

{% hint style="success" %}
Note: Files are saved in .json format by default.
{% endhint %}
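
Conceptually, each placeholder resolves to a concrete value at save time. A rough sketch of the substitution (illustrative only; the platform performs this internally):

```python
from datetime import datetime, timezone

# Sketch: resolving a Sink filename pattern's placeholders.
# Illustrative only; the platform performs this substitution internally.
def resolve_pattern(pattern: str, filename_prefix: str, metadata: dict) -> str:
    resolved = pattern.replace("{filename_prefix}", filename_prefix)
    resolved = resolved.replace(
        "{current_datetime}",
        datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S"),
    )
    for key, value in metadata.items():
        resolved = resolved.replace("{metadata." + key + "}", str(value))
    return resolved

key = resolve_pattern(
    "processed/{filename_prefix}{metadata.project}{current_datetime}.json",
    filename_prefix="invoices-",
    metadata={"project": "q3-audit-"},
)
print(key)  # e.g. processed/invoices-q3-audit-20250101T120000.json
```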

* **Save raw**
  * When enabled, the raw output from the previous node is saved as a .txt file. When disabled, the first valid JSON object in the output is extracted and saved as a .json file.
* **Lambda**: Data will be pre-processed within an **AWS Lambda function** before transmission to the output, enabling dynamic transformations such as formatting, filtering, enrichment, and other rule-based modifications to ensure data integrity and compliance with business logic.
  * **Lambda ARN**: Enter the Amazon Resource Name (ARN) of the Lambda function you want to invoke. This uniquely identifies the Lambda function within AWS.
  * **Input Variables**:
    * Defines the parameters or variables to be passed to the Lambda function.
    * These variables allow for dynamic data handling and contextual processing.
  * **Input test payload**: A sample payload used to test the Lambda function. This helps ensure the function behaves as expected with the provided input.
  * **Test button**: Enables you to validate the function by executing the test payload.
* **Knowledge Base**: When the **Output Type** is set to **Knowledge Base**, the output from the recipe is directed to a knowledge base for storage and future retrieval.
  * Key configuration settings for this setup include:
    * **Dataset**: Select the dataset where the output will be stored within the knowledge base.&#x20;
    * **Type**: Defines how the data will be processed or indexed within the knowledge base. The **OpenSearch** option integrates with OpenSearch for indexing and querying stored data, ensuring that data is searchable and can be easily retrieved when needed.
* **Knowledge Graph:** In this setup, the final output of the recipe is directed into a knowledge graph system for structured data storage, enabling complex relationships and querying between entities.
  * Key elements in the configuration are:
    * **Type**: **Neptune** refers to **Amazon Neptune**, a managed graph database service optimized for storing and querying highly connected data. This configuration ensures that the data is structured and stored in a graph format, making it suitable for complex querying, analytics, and relationship mapping.
    * **Dataset**: Select the dataset where the data will be stored within the graph database. This helps categorize and organize the data appropriately.
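
For the Lambda output type, the pre-processing function is an ordinary AWS Lambda handler. The sketch below assumes the payload arrives under an `input` key (an assumption for illustration; confirm the actual event contract for your deployment with the Input test payload and Test button):

```python
import json

# Sketch of an AWS Lambda pre-processor for the Sink node. The event
# shape (payload under "input") is an assumption for illustration;
# verify the actual contract using the node's Input test payload.
def lambda_handler(event, context):
    payload = event.get("input", {})
    # Example transformation: drop empty fields and normalize keys to
    # lowercase before the data is written to the output destination.
    cleaned = {k.lower(): v for k, v in payload.items() if v not in (None, "")}
    return {"statusCode": 200, "body": json.dumps(cleaned)}
```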

**State Settings**: Configures data access within the workflow, controlling what information nodes can retrieve and process.&#x20;

* **Message**: Retrieves only the most recent message (last message) in the conversation, making it ideal for nodes that require processing the latest user input.

{% hint style="info" %}
Connect the Prompt, Agent, Custom Function, and Transform nodes to the Sink node as required based on workflow needs.
{% endhint %}

### Transform

The Transform module provides a **Split** and **Merge** node that enables users to manipulate data by either splitting it into smaller chunks or merging multiple segments. This functionality is particularly useful in data processing workflows where structured transformation of information is required.

There are two available node methods, listed as follows:

* **Split**: When using the Split node method, input data is divided based on a user-specified strategy. The split operation enhances data processing, retrieval, and transformation by ensuring that each segment adheres to the selected criteria. The method supports several strategies for data segmentation, including:
  * **Character** - Splits the data at the character level.
  * **Words** - Divides the text into word-based segments.
  * **Pages** - Splits content based on document pagination.
  * **Lambda** - Uses a custom AWS Lambda function to determine the split logic.
    * **Transform Type**: The Lambda function receives an event payload where the entire input is wrapped under the key `input`. Ensure your function extracts and processes the data accordingly.
    * **Lambda ARN**: Enter the Amazon Resource Name (ARN) of the AWS Lambda function that will process and split the data.
    * **Input Test Payload**: Enter test data to validate the Lambda function's behavior before deployment.
    * **Test Button**: Allows you to execute a test run of the configured Lambda function for validation.
    * **Overwrite Credentials (Optional)**: Allows you to override existing authentication settings with new credentials.
  * **Chunk Size**: The number of characters, words, or pages allowed in each chunk.
  * **Scratchpad**: Serves as temporary storage or an intermediary space for processing and managing data within the workflow. The Split operation receives the output from the preceding Scratchpad as its input for the current Split process.
* **Merge:** The Merge node in the Transform module combines multiple data segments into a unified structure. This is particularly useful when working with split data that needs to be reconstructed, or when consolidating multiple data sources into a single format.
  * **Merge Strategy**: Determines the format in which the data will be merged. The available options include:
    * **JSON** → Combines data in a structured JSON format.
    * **Text** → Merges content into a plain text format.
    * **Lambda** → Utilizes a custom AWS Lambda function to programmatically merge data.
      * **Lambda ARN**: Provide the AWS Lambda function's Amazon Resource Name (ARN).
      * **Input Test Payload**: Sample input data to test the transformation logic.
      * **Test Button**: Allows you to validate the function's processing behavior.
      * **Overwrite Credentials (Optional)**: Allows you to override existing authentication settings with new credentials.
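
A Lambda split function follows the same handler shape as any AWS Lambda. The sketch below assumes the input arrives under the `input` key, as noted under Transform Type; the return shape is an assumption for illustration:

```python
# Sketch of an AWS Lambda function for the Split node's Lambda
# strategy. Per the Transform Type note, the input arrives wrapped
# under the "input" key; the return shape here is an assumption.
def lambda_handler(event, context):
    text = event.get("input", "")
    chunk_size = 100  # characters per chunk; tune to your workflow
    # Slice the text into fixed-size character chunks.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return {"chunks": chunks}
```

Validate the function's behavior with the node's Input Test Payload and Test Button before deployment.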

<figure><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXczUax8LDj4vFDwmkB7tUoH-Eg6o_akabfJde_FYlTkl7m9Y5MDz4YZ07ogCvJfjVNyeTZU9KGqRgfTBtIsbxO8S48kZm1pYktWD6cGJ9vaDVXb5_e8mq1sd8aDF0FAHB7hKsVw?key=sd7ID78MwYC7TUnxCWmg1dC8" alt=""><figcaption></figcaption></figure>

* **Merge method** - Determines how the merging process handles existing data. Two options are available:
  * **Overwrite** - Replaces any existing data with the newly merged data, ensuring that only the most recent merged version is retained.
  * **Append** - Instead of replacing, new data is added to an existing array or list within the JSON structure.
* **Scratchpad** - Temporary storage used to retain intermediate data before the final output.
  * **Output**: The merged data is written to an output location.
    * **Method** selection:
      * **Overwrite**: Replaces the existing data.
      * **Extend**: Appends new data instead of replacing it.
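
The difference between replacing and extending can be sketched as follows (illustrative semantics, not the platform's implementation):

```python
# Sketch: Overwrite vs. Append/Extend merge semantics. Illustrative only.
def merge(existing: list, incoming: list, method: str) -> list:
    if method == "overwrite":
        return list(incoming)       # keep only the newest merged data
    if method == "append":
        return existing + incoming  # extend the existing list
    raise ValueError(f"unknown merge method: {method}")

print(merge([1, 2], [3], "overwrite"))  # [3]
print(merge([1, 2], [3], "append"))     # [1, 2, 3]
```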

### Pass Through

The **Pass Through** node is a utility node that forwards the incoming payload to the output without modification. It performs no transformation, validation, or decision logic, and does not alter the structure or content of the data. At runtime, it routes the payload to the appropriate output based on the current state settings. Use it to preserve data flow continuity, connect nodes, or act as a placeholder in a workflow.

### Deep agent

Select a **deep agent** prompt from the available options. Each prompt encompasses pre-configured tools and settings essential for processing inquiries and responding effectively.&#x20;

Once you've selected the prompt, the canvas will reveal the tools and configurations integrated into the deep agent prompt. You can update the configurations for the preset tools, but you cannot add or delete an agent tool from the recipe canvas. To edit tool types, or to add or delete tools, edit the deep agent prompt in the prompt playground. These tools empower the agent to analyze queries thoroughly and generate precise responses.

Once a **deep agent** is selected, the system displays the **associated primary and fallback models along with the current version.** Additionally, you can navigate to the Prompt Playground by clicking the **redirect** icon to view the agent details and make necessary modifications.

The image below shows the recipe using the **Deep Agent node**, including its version and the redirect icon for **Deep Agent prompts.**

<figure><img src="https://415930246-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FcdJxWgl2vzmIbLWtT2VF%2Fimage.png?alt=media&#x26;token=8dd95c04-30b8-41c9-b846-473bf73d35ce" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
**Only published deep agent prompts are visible within the recipe.**
{% endhint %}

### Save and publish recipe

Saving the recipe preserves all configurations and connections made in the workflow for future reference or deployment.&#x20;

After a recipe is created and saved, it must be **published and deployed** before it can be used. For end-to-end guidance on publishing and deployment, refer to the [**Recipe Management** ](https://karini-ai.gitbook.io/karini-ai-documentation/recipe-management)section.

Refer to the following video to create a **workflow recipe** with a **Chat** node.

{% embed url="<https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FbrAVcZxBu2idhaM5c8Oh%2FText-2-sql%20(1).mp4?alt=media&token=f151e120-786e-4238-9a23-3d40064b9a02>" %}

Refer to the following video to create a **workflow recipe** with a **Webhook** node.

{% embed url="<https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F7ZrVuiAUMyuYVvrK5KaB%2Fuploads%2FGc8xoDXzeUYjOwdZHGsu%2FIDP%20(1).mp4?alt=media&token=76346cf4-03cb-49f8-bbfd-743240b25134>" %}
