Set up Workflow Recipe

Workflow recipes can be initiated with either a Chat node or a Webhook node, depending on the specific use case and the desired interaction flow.

Chat

This node is designed to initiate and facilitate user interactions, serving as the entry point for conversational engagement.

Conversation History:

Determine the number of messages to retain in the conversation history. This allows you to augment the prompt context with past conversation, improving the response quality.

Generate follow-up questions:

Enable Generate follow-up questions to prompt the system to autonomously generate relevant questions based on the conversation context and generated answer. To use this option, the Follow Up question generator model must be configured in the Organization settings.

You are provided with a sample prompt for this task; however, you can update the prompt as required.

Generate follow-up questions based on the given input question and answer. Follow these guidelines:

1. Number of Questions: Include 2-3 short follow-up questions that directly expand on or clarify the main question.
2. Relevance is Key: Ensure these questions are relevant, showing an understanding of the initial question and its broader implications.
3. Emphasis on Follow-Ups: Always aim to provide follow-up questions to foster deeper dialogue, use the given answer as a basis for further exploration.

Reference Question: {question}
Reference Answer: {answer}

Output Format:
Question-1: [provide follow-up question here]
Question-2: [provide follow-up question here]
Question-3: [provide follow-up question here]
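Because the prompt fixes the `Question-N:` output format, downstream code can recover the questions with a simple parser. The function name and sample text below are our own, shown only as a sketch of how such output might be consumed:

```python
import re

def parse_follow_ups(model_output: str) -> list[str]:
    """Extract follow-up questions emitted in the Question-N: format."""
    return [q.strip() for q in re.findall(r"Question-\d+:\s*(.+)", model_output)]

sample = """Question-1: How does conversation history size affect cost?
Question-2: Can audio mode be enabled per user?"""
print(parse_follow_ups(sample))
```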

Enable Audio Mode

When Audio mode is enabled, the audio option becomes available on Copilots. This allows users to interact through voice queries and receive responses in both text and audio formats, enhancing accessibility and user experience.

You must have a Speech to Text Model Endpoint and a Text to Speech Model Endpoint configured in the Organization settings.


To integrate the Chat node effectively, follow these configurations based on document handling requirements:

  • With Document Handling: Establish a connection between the Chat and Processing nodes when document uploading, processing, and querying are essential for interaction.

  • Without Document Handling: Directly link the Chat node to the Start node if the workflow does not require document-based references, enabling a standard conversational experience.

Webhook

A webhook is a user-defined HTTP callback that allows the system to send automated messages or data to an external URL when an event occurs.

Here are the key elements of a webhook node.

Label: Serves as a unique identifier for the webhook, making it easy to reference or manage.

Webhook URL: The endpoint to which the webhook sends HTTP requests. It directs the data to the appropriate destination.

Webhook Token: Used for authentication to ensure that the request made by the webhook is valid and authorized to access the API.

Query Method: Specifies the HTTP method (such as POST) used for the request to the API.

Webhook Query Headers: Defines the headers included in the request, often containing metadata like content type, authorization, or other necessary information for the API to process the request.

The webhook request must include the appropriate authentication and content-type headers.

Payload Template: A predefined structure or format for the data sent with the request. It helps in organizing the information and ensuring that the API receives the correct data structure.

Generate curl command:

The Generate curl command enables users to automatically generate a curl command tailored to the specific configuration of their webhook setup within the recipe workflow. Upon clicking the button, the system displays the corresponding curl command, which replicates the HTTP request setup defined in the workflow.

This includes the following components:

  • HTTP Method: The selected HTTP method (e.g., POST, GET) for initiating the webhook request.

  • Headers: Automatically includes required headers, such as Content-Type and the API token for authentication (if configured).

  • Payload Template: The request body content is generated based on the defined payload template, which may include file types, file paths, and textual content.
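The components above can be pictured with a small sketch that assembles a curl command string from a webhook configuration, similar in spirit to what the Generate curl command button produces. The URL, token, and Bearer auth scheme here are assumptions for illustration, not the product's actual header names:

```python
import json
import shlex

def build_curl(url: str, method: str, token: str, payload: dict) -> str:
    """Assemble a curl command mirroring a configured webhook request."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",  # assumed auth header scheme
    }
    parts = ["curl", "-X", method, shlex.quote(url)]
    for key, value in headers.items():
        parts += ["-H", shlex.quote(f"{key}: {value}")]
    parts += ["-d", shlex.quote(json.dumps(payload))]  # payload template body
    return " ".join(parts)

cmd = build_curl("https://example.com/webhook", "POST", "my-token",
                 {"query": "hello"})
print(cmd)
```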

For webhook recipe invocation and webhook history details, refer to the webhook recipe section.


To ensure optimal functionality of the Webhook node, follow these configuration guidelines:

  • With Document Processing: Connect the Webhook node to the Processing node when documents must be uploaded, processed, or queried before transmitting data externally.

  • Without Document Processing: If no document handling is required, connect the Webhook node directly to the Start node.

Start

The Start node serves as the entry point of the workflow, initiating the flow of tasks by connecting to various functional nodes based on specific process requirements.

It can link to the Knowledge base node for information retrieval, the Router node for directing execution based on logic, the Prompt node for generating responses, the Custom function node for executing predefined tasks, the Agent node for intelligent automation, and the Transform node for enabling parallel processing. This flexibility allows workflows to be dynamically structured according to operational needs, ensuring efficient execution and automation.

Knowledge Base

A Knowledge base is a structured repository of vector embeddings designed to store, organize, and manage information, enabling applications to retrieve relevant data efficiently.

The system supports the following two types of knowledge bases:

Native Knowledge Base:

Choose a dataset from the available dataset list. After selecting the dataset, the relevant prompt contexts can be configured to retrieve information from the vector store. For detailed guidance, refer to Context Generation using Vector Search.

Bedrock Knowledge Base

The Bedrock Knowledge Base refers to an AI-powered retrieval system offered by Amazon Bedrock, a service by AWS (Amazon Web Services). It allows users to integrate enterprise knowledge bases with AI-powered applications, enabling natural language queries on stored data. For more details, refer to Amazon Bedrock Knowledge Bases.

The configuration fields include:

  1. Knowledge Base ID: Enter the ID of the knowledge base.

  2. Filter Key: Specifies the criteria for filtering results.

  3. Filter Value: Defines the specific value to filter by.

  4. Number of Results: Specifies how many responses should be retrieved.

  5. Overwrite Credentials: By default, the AWS credentials configured in Organization settings are used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.
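These configuration fields map naturally onto the parameters of Amazon Bedrock's Retrieve API. The sketch below only builds the request payload (using Bedrock's `equals` metadata-filter form); actually sending it would require boto3's `bedrock-agent-runtime` client and valid AWS credentials, so that call is deliberately omitted:

```python
def build_retrieve_request(kb_id, query, filter_key=None,
                           filter_value=None, num_results=5):
    """Map the node's configuration fields onto a Bedrock Retrieve payload."""
    config = {"vectorSearchConfiguration": {"numberOfResults": num_results}}
    if filter_key is not None:
        # Metadata filter: only return chunks whose metadata key matches.
        config["vectorSearchConfiguration"]["filter"] = {
            "equals": {"key": filter_key, "value": filter_value}
        }
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query},
        "retrievalConfiguration": config,
    }

req = build_retrieve_request("KB123", "refund policy",
                             "department", "billing", num_results=3)
print(req)
```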

Document Reader

Document Reader is designed to facilitate the extraction of information from various document formats, including PDFs, Word files, and other textual formats. This feature integrates seamlessly with the knowledge base, enabling efficient document processing, data extraction, and integration into workflows.

  1. Configuration Settings:

    1. Connector Type: The Connector Type setting defines the method of integrating external data sources. The following connector options are available:

      • Amazon S3: Enables the system to retrieve and process documents stored in Amazon S3 buckets.

        • Overwrite Credentials: By default, the AWS credentials configured in Organization settings are used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.

      • Presigned URL: Supports data retrieval through presigned URLs, which provide temporary access to files hosted on cloud storage.

      • Email: Facilitates the extraction of documents directly from email attachments.

  2. Preprocessing options:

Karini AI recipes support the following preprocessing options:

  1. Enable Transcriptions: Enables automatic transcription, facilitating the conversion of audio or speech-based files into text.

    1. Default: Uses the OpenAI Whisper model, which must be selected as the Speech-to-Text Model Endpoint for transcription tasks on the Organization page.

    2. Amazon Transcribe: Amazon Transcribe is an automatic speech recognition service that uses machine learning models to convert audio to text.

  2. OCR Options: This option provides various methods for extracting text from documents and images:

    1. Unstructured IO with Extract Images: This method is used for extracting images from unstructured data sources. It processes unstructured documents, identifying and extracting images that can be further analyzed or used in different applications.

    2. PyMuPDF with Fallback to Amazon Textract: This approach utilizes PyMuPDF to extract text and images from PDF documents. If PyMuPDF fails or is insufficient, the process falls back to Amazon Textract, ensuring a comprehensive extraction by leveraging Amazon's advanced OCR capabilities.

    3. Amazon Textract: A cloud-based service that identifies and extracts text, structured data, and elements such as tables and forms from documents.

      1. Extract Layouts: Recognizes the structural layout of a document, such as headings, paragraphs, and columns. This is useful for retaining document formatting.

      2. Extract Tables: Enables structured table extraction, preserving row and column relationships. This is useful for processing invoices, reports, and tabular data.

    4. Tesseract: An open-source OCR engine for extracting text from images and PDFs.

    5. VLM: A specialized method for processing and extracting text from images or documents using visual language models. You must have a VLM configured in the Organization settings.

      1. The VLM Prompt provides a predefined instruction set guiding the AI model on how to analyze the image and what details to extract.

      2. The VLM Prompt instructs the system to analyze an image in-depth, extracting all visible text while preserving structure and order. Additionally, it provides a detailed description of diagrams, graphs, or scenes, explaining components, relationships, and inferred meanings to ensure a comprehensive textual representation of the image.

      The VLM prompt is defined as follows:

PII Masking Options: To mask Personally Identifiable Information (PII) within your dataset, enable the PII Masking option. You can specify the entities to be masked by selecting from the available list, ensuring secure data preprocessing.

For more details, refer to the list of PII entities.

The following state flags need to be configured based on the use case.

State settings

State settings control data access within the workflow, ensuring that nodes can retrieve and process relevant information.

  1. Messages: Accesses the conversation history for processing, with options for full, last, or specific node messages.

    1. All Messages: Accesses the entire conversation history, allowing the node to consider all previous interactions for context.

    2. Last Message: Accesses only the most recent message in the conversation, useful for nodes that need to respond to the latest input.

    3. Node Message: Accesses messages from a specific node in the workflow, ideal for retrieving targeted information shared by a particular node.

  2. Scratchpad: The Scratchpad acts as a shared, temporary storage space (in JSON/dictionary format) accessible by all nodes in the workflow. Nodes can read from or write to this space as needed. This facilitates intermediate data retention and seamless data transfer between nodes without long-term persistence.

    1. Input: Allows nodes to read data from the Scratchpad state by specifying a valid JSON path. The node retrieves data stored in the Scratchpad state for further processing.

    2. Output:

      1. Allows nodes to write data to the Scratchpad state by specifying a valid JSON path. You can choose to write the output of a node into the Scratchpad for future steps. Additional options include:

      2. Extract JSON: Attempts to extract the first valid JSON object from the node's output and store it in the Scratchpad state.

      3. Overwrite: Replaces the existing value at the specified JSON path with the new value from the node's output. If disabled, the system attempts to append the new value, or retains the older value if appending is not possible.
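The Scratchpad's read/write behavior can be pictured with a short sketch. The dot-separated path syntax and the append-on-conflict handling here are simplified assumptions for illustration, not the product's exact semantics:

```python
def scratchpad_write(scratchpad: dict, json_path: str, value,
                     overwrite: bool = True):
    """Write a node's output into the shared Scratchpad at a dot-separated path."""
    keys = json_path.split(".")
    node = scratchpad
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    leaf = keys[-1]
    if overwrite or leaf not in node:
        node[leaf] = value
    elif isinstance(node[leaf], list):
        node[leaf].append(value)  # append when the existing value allows it
    # otherwise the older value persists

def scratchpad_read(scratchpad: dict, json_path: str):
    """Retrieve data from the Scratchpad at the given path."""
    node = scratchpad
    for key in json_path.split("."):
        node = node[key]
    return node

pad = {}
scratchpad_write(pad, "extraction.invoice_total", 129.5)
print(scratchpad_read(pad, "extraction.invoice_total"))  # 129.5
```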

  3. Enabling Semantic Cache: In the agent configuration interface, the Semantic Cache option enhances the system's ability to deliver more accurate and consistent responses. When enabled, this feature allows the agent to automatically save and reuse responses based on similar inputs.

    The semantic cache works by performing a three-step hierarchical lookup:

    1. Prompt Similarity: The system first checks whether any past inputs have similar prompts.

    2. Conversation History: If no match is found at the prompt level, the system then considers the broader conversation history.

    3. Query Matching: Finally, the system analyzes the input query to find the closest matches based on prior interactions.

    Semantic Cache Threshold: The threshold setting, represented by a slider, allows the user to fine-tune the level of similarity required for reusing cached responses. A higher threshold (e.g., closer to 1.0) implies a stricter match, while a lower threshold allows for more flexibility in reusing responses.

    Traces: Once enabled, traces capture detailed logs of all semantic cache activity, including which cached responses were used, the feedback provided by users (if applicable), and any adjustments made by the system in response to this feedback. This makes troubleshooting and performance monitoring easier, as users can track the flow of responses and how cached data is utilized.

    Search Cache and Save Cache actions play a critical role in optimizing system performance by leveraging cached data to reduce query processing times and enhance overall efficiency.

    1. Search Cache: This action retrieves data from the existing cache, thereby minimizing query execution time. By utilizing stored results, it prevents redundant searches or recalculations, contributing to improved system performance.

    2. Save Cache: This action stores newly retrieved or computed data into the cache for future use. It ensures faster access to the same data in subsequent queries, thereby improving response times.
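The threshold-based reuse decision can be sketched in a few lines. A real semantic cache compares embedding vectors; here `difflib` string similarity stands in so the example runs without a model, and the cache is just a dictionary of past queries to responses:

```python
from difflib import SequenceMatcher

def search_cache(cache: dict, query: str, threshold: float = 0.9):
    """Return the cached response for the most similar past query,
    if its similarity meets the configured threshold."""
    best_query, best_score = None, 0.0
    for past_query in cache:
        score = SequenceMatcher(None, query.lower(), past_query.lower()).ratio()
        if score > best_score:
            best_query, best_score = past_query, score
    if best_score >= threshold:
        return cache[best_query]
    return None  # cache miss: run the model, then Save Cache stores the result

cache = {"what is the refund policy?": "Refunds are issued within 30 days."}
print(search_cache(cache, "What is the refund policy?", threshold=0.9))
```

Raising the threshold toward 1.0 makes reuse stricter, exactly as the slider described above does.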


Note: For prompt/agent nodes, the {scratchpad} variable is used to reference and insert data from the Scratchpad into the prompt body. The Knowledge Base can be connected to various functional nodes based on workflow requirements: the Router, Prompt, Custom Function, Agent, End, and Transform nodes.

Reranker

The Reranker node is used to improve the relevance of search results returned from one or more connected Knowledge Base nodes. It takes the candidate results from upstream knowledge bases and reorders (and optionally filters) them based on a relevancy score produced by a reranking model. The re-ranked results are then passed to the next node in the recipe.

Messages:

Accesses the conversation history for processing, with options for full, last, or specific node messages.

  1. All Messages: Accesses the entire conversation history, allowing the node to consider all previous interactions for context.

  2. Last Message: Accesses only the most recent message in the conversation, useful for nodes that need to respond to the latest input.

  3. Node Message: Accesses messages from a specific node in the workflow, ideal for retrieving targeted information shared by a particular node.

Reranker Query

This enables the user to select the Node and the Message.

The configuration allows:

  • Node: Choose the specific node.

  • Message: Choose which message from that node to use (for example, Last).

  1. Enable Reranker: By default, it’s enabled when configuring the node.

  2. Top-N

    • Specifies the maximum number of top-ranking vectors (results) to return after reranking.

    • This value must be less than or equal to the top_k parameter used when querying the knowledge base.

  3. Reranker Threshold

    • Defines the minimum relevancy score a result must meet to be included in the final list.

    • The model selects up to Top-N results whose score is greater than or equal to this threshold.

    • A higher threshold yields fewer but more relevant results; a lower threshold is more permissive and can return a larger set of results.

    By configuring the Reranker node (selecting the appropriate message context, query source, Top-N value, and relevancy threshold), you ensure that downstream nodes receive only the most relevant knowledge base results, significantly improving the quality of responses generated by the recipe.
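The interaction between Top-N and the relevancy threshold can be sketched as follows. The result names and score values are made up; in practice the scores come from the configured reranker model:

```python
def rerank(results, scores, top_n=3, threshold=0.5):
    """Reorder candidates by relevancy score, keeping at most top_n
    results whose score meets the threshold."""
    ranked = sorted(zip(results, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, score in ranked if score >= threshold][:top_n]

docs = ["chunk-a", "chunk-b", "chunk-c", "chunk-d"]
scores = [0.42, 0.91, 0.77, 0.30]  # hypothetical reranker model scores
print(rerank(docs, scores, top_n=2, threshold=0.5))  # ['chunk-b', 'chunk-c']
```

Note how a higher threshold (say 0.8) would drop `chunk-c` even though Top-N allows two results, matching the "fewer but more relevant" behavior described above.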


Note: To use the Reranker node, a reranker model must be configured in the Organization settings.

Connector

The Connector node functions as an interface facilitating data exchange between the system and external storage solutions.

The available connector types are listed below.

  • Amazon S3: Enables integration with Amazon Simple Storage Service (S3) for retrieving or storing data.

    • Overwrite Credentials: By default, the AWS credentials configured in Organization settings are used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.

  • Presigned URL: Allows access to data stored at a presigned URL, providing temporary access to files hosted on cloud storage systems.

  • In Memory Base64 Data: Supports handling Base64-encoded data stored in memory for temporary or intermediate processing.

  • Box: Facilitates integration with Box, a cloud storage service, allowing seamless interaction with documents and files stored in Box.

  • Email: Allows the system to process and extract documents directly from email attachments. This is particularly useful when workflows need to handle incoming email data.

    • Email Count: Defines the number of emails to process at once.

    • Date: Specifies that emails are pulled only if they were received after the provided date.

    • Credentials

      • Overwrite Credentials: Enables entering specific IMAP server credentials.

        • IMAP Server: The email server address (e.g., imap.gmail.com).

        • Email: The email address for access.

        • Email Password: The password for authentication.
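The Email Count and Date options behave like a filter over the mailbox: only messages received after the cutoff date are considered, and only the first N of those are processed. A runnable sketch of that selection logic (the message records are made up; a real implementation would fetch them over IMAP using the credentials above):

```python
from datetime import date

def select_emails(messages, after: date, count: int):
    """Keep messages received after the cutoff date, up to the configured count."""
    eligible = [m for m in messages if m["received"] > after]
    eligible.sort(key=lambda m: m["received"])  # process oldest first
    return eligible[:count]

inbox = [
    {"subject": "Invoice March", "received": date(2024, 3, 2)},
    {"subject": "Invoice April", "received": date(2024, 4, 5)},
    {"subject": "Old report",    "received": date(2023, 12, 1)},
]
picked = select_emails(inbox, after=date(2024, 1, 1), count=1)
print([m["subject"] for m in picked])  # ['Invoice March']
```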


The Connector node can be linked to either the Start node or the Processing node, depending on the workflow requirements.

Processing

The Processing node is employed in the workflow recipe to enable file uploads to Copilot for querying. It streamlines the processing of uploaded files by providing configurable options designed to support specific data extraction and privacy requirements.

The Processing options are detailed in the Document Reader section; refer to it for comprehensive information.


Connect the Processing node to the Start node to initiate workflow execution.

Router

Router directs the workflow to the next node based on conditions or input, enabling dynamic branching paths within the workflow recipe. This ensures that the workflow can adapt based on the given data or context, enhancing flexibility and decision-making in the process.

Node Methods: The Router supports three node methods for decision-making.

  • Default: This method follows the standard routing logic, processing data without additional customization. It adheres to a predefined flow, ensuring consistency and simplicity for straightforward workflows that don't require dynamic decision-making.

Using the Default method, routing conditions can be assigned to edges to determine the appropriate node for processing the request.

There are two available options:

  • Default Routing: Applies when no specific conditions are met.

  • Custom Routing: Lets you define explicit conditions for each edge in the provided text box.


The Default Routing Condition can be assigned to only one edge within the routing configuration.

  • Prompt:

    • The Prompt method enables the selection of a predefined prompt for the Router node from the available prompt list.

    • The selected prompt contains instructions that guide the router on how to direct the workflow.

    • The router will evaluate the input and route the workflow accordingly based on the prompt’s logic.

    • Here is the provided sample prompt:


Only published prompts, along with current versions and associated models, are visible within the recipe.

  • Lambda: The Lambda method integrates AWS Lambda functions to execute custom logic.

    • Lambda ARN: Enter the Amazon Resource Name (ARN) of the Lambda function you want to invoke. This uniquely identifies the Lambda function within AWS.

    • Input test payload: This is a sample payload that will be used to test the Lambda function. This helps ensure the function behaves as expected with the provided input.

    • Test button: Enables you to validate the function by executing the test payload.

    • Overwrite Credentials: By default, the AWS credentials configured in Organization settings are used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.
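To make the Lambda method concrete, here is a minimal hypothetical routing handler of the kind such a function might contain, exercised locally with a sample payload (mirroring the Input test payload / Test button flow). The event fields and edge names are made up; the real payload shape is defined by your recipe:

```python
import json

def lambda_handler(event, context=None):
    """Hypothetical routing logic: pick the next edge based on the query text."""
    query = event.get("query", "").lower()
    if "invoice" in query:
        route = "document_processing"
    elif "order" in query:
        route = "order_lookup"
    else:
        route = "default"
    return {"statusCode": 200, "body": json.dumps({"route": route})}

# Simulate the "Input test payload" / Test button locally:
test_payload = {"query": "Where is my invoice?"}
result = lambda_handler(test_payload)
print(result)
```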

Invoke Retries: Specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

State settings

Document Cache: Provides access to shared document information across nodes.

  1. Retrieve Documents: Fetches the entire document based on a specified filename or file path. Useful for accessing full document content from the knowledge base.

  2. Retrieve Chunks: Fetches specific document chunks based on semantic similarity, ideal for retrieving only relevant parts of a document related to the query.

  3. Ephemeral: Pass context as raw text directly with the query without storing it in the knowledge base.

Metadata:

Provides access to webhook metadata from connected APIs, enabling external data flow.

Enabling Semantic Cache

The Router node follows the baseline Semantic Cache configuration as documented for the Knowledge Base node.

Additional settings:

  • Use User Feedback: This option allows the agent to incorporate user-provided feedback (if available) to improve response relevance. Depending on the feedback intent, the system may modify the cached responses or discard the current cache in favor of new data.

To view more details about Messages and Scratchpad, refer to the preceding state settings section.


Connect the Router node to the Knowledge Base, Agent, Prompt, Custom Function, or Transform node, based on the specific use case.

Prompt

The Prompt node enables the system to execute logic-driven actions based on predefined prompts. You can select a prompt from the existing prompts in the Prompt Playground to add to the recipe.

Once a prompt is selected, the system displays the associated primary and fallback models, along with any guardrails configured in the Prompt Playground.

Additionally, you can navigate to the Prompt Playground by clicking the redirect icon, allowing you to view the prompt details and make necessary modifications.

The following image illustrates the version and redirect icon.

To switch to a specific version, click on the displayed version. This will generate a list of all associated versions. Select the required version, and the system will load the complete prompt details corresponding to the selected version.

The following image displays the associated versions.


Only published prompts are visible within the recipe.

Refer to the sample prompt.

The Guardrail option is available on this tile. You can choose from existing guardrails in the Prompt Playground, which will be reflected in the recipe upon prompt selection. Alternatively, you may enable the default guardrail configured at the organizational level.

Invoke Retries specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

The State settings are detailed in the preceding section; refer to it for comprehensive information.

Metadata:

Metadata refers to supplementary information that provides context, details, or additional insights about a specific process, event, or interaction. Within the framework of prompts, metadata enables the dynamic exchange of external data, facilitating more informed and context-aware decision-making.


For metadata to be utilized, the prompt must explicitly reference the {metadata} variable within its body.

Metadata includes user-specific data, enabling the agent to personalize interactions and enhance user experience. Metadata can be reviewed in the copilot history.

Ensure that the scratchpad is enabled and that the prompt includes Scratchpad as a variable.

Online Evaluation:

Online Evaluation provides real-time quality assessment of LLM-generated responses during runtime. It evaluates responses against multiple metrics to ensure quality, relevance, and adherence to guidelines without requiring separate evaluation datasets. This evaluation uses the NLA (Natural Language Assistant) model, which is configured on the Organization page, to analyze and score responses based on criteria such as relevancy, accuracy, and adherence to guidelines.

Evaluation Metrics: The system evaluates responses across the following key metrics, each scored on a range of 0-5:

  • Answer Relevancy: Measures how relevant the response is to the user's query.

  • Answer Faithfulness: Evaluates whether the response is faithful to the provided context, checking for hallucinations or errors.

  • Context Sufficiency: Assesses whether the provided context is sufficient to answer the query, and identifies any missing information.

  • Guideline Adherence: Checks whether the response follows specified guidelines and complies with custom rules.

Viewing Evaluation Results:

Upon execution, evaluation results are made available in Copilot History. To access these results, follow the steps below:

  1. Navigate to Copilot.

  2. Select the Action button for the Copilot you want to view.

  3. Click on View History.

  4. The View and View Traces buttons will become available.

  5. Evaluations can be reviewed through both the View button and the View Traces button for further insights.

View: Displays the evaluation results and reasoning for each response, including scores for each evaluation metric along with a detailed reasoning summary for each assessed response.

View Traces: Evaluation results are also accessible within traces, where they are incorporated into the overall execution context. This integration allows users to analyze the evaluation results in conjunction with the broader execution flow and performance data, offering a comprehensive view of how the evaluation contributes to the overall system performance.


Connect the Prompt node to the End node in the recipe.

Agent

Select an agent prompt from the available options. Each prompt encompasses pre-configured tools and settings essential for processing inquiries and responding effectively.

Once you've selected the prompt, the canvas will reveal the tools and configurations integrated into the agent prompt. You can update the configurations for the preset tools, but you cannot add or delete an agent tool from the recipe canvas. To edit tool types, or to add or delete tools, you need to edit the agent prompt in the Prompt Playground. These tools empower the agent to analyze queries thoroughly and generate precise responses.

Once an agent is selected, the system displays the associated primary and fallback models along with the current version. Additionally, you can navigate to the Prompt Playground by clicking the redirect icon, allowing you to view the agent details and make necessary modifications.

The image below shows the recipe using the Agent node, including its version and the redirect icon for Agent prompts.

To switch to a specific version, click on the displayed version. This will generate a list of all associated versions. Select the required version, and the system will load the complete agent details corresponding to the selected version.

The following image displays the associated versions.


Only published agent prompts are visible within the recipe.

Refer to the sample agent prompt.

Invoke Retries specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

The State settings are detailed in the preceding section; refer to it for comprehensive information.

For a comprehensive overview of the online evaluation process, please refer to the online evaluation metrics outlined in the section above.

Metadata:

Metadata refers to supplementary information that provides context, details, or additional insights about a specific process, event, or interaction. Within the framework of agents, metadata enables the dynamic exchange of external data, facilitating more informed and context-aware decision-making.


For metadata to be utilized, the agent must explicitly reference the {metadata} variable within its body.

Metadata includes user-specific data, enabling the agent to personalize interactions and enhance user experience. Metadata can be reviewed in the copilot history.

Ensure that the scratchpad is enabled and that the agent prompt includes Scratchpad as a variable.

Refer to Online Evaluation for comprehensive information.


Connect the Agent node to the End node in the recipe.

End

Marks the conclusion of a workflow, signaling that no further actions are required.

Sink

Integrating the Sink node into a workflow facilitates the secure storage, export, or transmission of processed data from upstream nodes to a designated destination. This ensures that the final output is efficiently managed, preserved, and made available for further use or analysis. The Sink node is designed to seamlessly store structured or unstructured data in cloud storage solutions such as Amazon S3 or a database.

There are four output types:

  • Connector

    Sends the output to an external service through a configured connector, enabling seamless integration with systems outside the platform. This is particularly useful for storing, forwarding, or processing data externally.

    When using a connector such as Amazon S3, the following configuration details are required:

    • S3 Bucket Path: Provide the target Amazon S3 bucket path where the data will be stored.

    • File Name Pattern: Allows you to define a structured naming pattern for saved files.

    • Save raw: When enabled, the raw output from the previous node is saved as a .txt file. When disabled, the first valid JSON from the output is extracted and saved as a .json file.

  • Lambda: Data is pre-processed within an AWS Lambda function before transmission to the output, enabling dynamic transformations such as formatting, filtering, enrichment, and other rule-based modifications to ensure data integrity and compliance with business logic.

    • Lambda ARN: Enter the Amazon Resource Name (ARN) of the Lambda function you want to invoke. This uniquely identifies the Lambda function within AWS.

    • Input Variables:

      • Defines the parameters or variables to be passed to the Lambda function.

      • These variables allow for dynamic data handling and contextual processing.

    • Input test payload: This is a sample payload that will be used to test the Lambda function. This helps ensure the function behaves as expected with the provided input.

    • Test button: Enables you to validate the function by executing the test payload.

  • Knowledge Base: When the Output Type is set to Knowledge Base, the output from the recipe is directed to a knowledge base for storage or future retrieval.

    • Key configuration settings for this setup include:

      • Dataset: You are required to select the dataset where the output will be stored within the knowledge base.

      • Type: The type selection defines how the data will be processed or indexed within the knowledge base. Selecting OpenSearch integrates with OpenSearch for indexing and querying stored data, ensuring that the data is searchable and can be easily retrieved when needed.

  • Knowledge Graph: In this setup, the final output of the recipe is directed into a knowledge graph system for structured data storage, enabling complex relationships and querying between entities.

    • Key elements in the configuration are:

      • Type: Neptune refers to Amazon Neptune, a managed graph database service optimized for storing and querying highly connected data. This configuration ensures that the data is structured and stored in a graph format, making it suitable for complex querying, analytics, and relationship mapping.

      • Dataset: The Dataset field requires you to select a dataset where the data will be stored within the graph database. This helps categorize and organize the data appropriately.
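As a rough illustration of the Lambda output type described above, a pre-processing function might normalize records before the Sink node forwards them to their destination. This is only a sketch: the payload shape (a top-level "input" key) and the field names are assumptions for illustration, not the platform's actual event format.

```python
import json

def lambda_handler(event, context):
    """Illustrative pre-processing step for the Sink node's Lambda output type:
    normalize text fields and filter out empty records before transmission.

    The "input" key and record fields below are assumptions for this sketch;
    match them to your recipe's actual event payload.
    """
    records = event.get("input", [])
    cleaned = [
        {"id": r["id"], "text": r["text"].strip().lower()}
        for r in records
        if r.get("text")  # drop records with no text
    ]
    return {"statusCode": 200, "body": json.dumps(cleaned)}
```

You can paste a sample event of this shape into the Input test payload field and use the Test button to confirm the function behaves as expected before wiring it into the recipe.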

State Settings: Configures data access within the workflow, controlling what information nodes can retrieve and process.

  • Message: Retrieves only the most recent message (last message) in the conversation, making it ideal for nodes that require processing the latest user input.

circle-info

Connect the Prompt, Agent, Custom Function, and Transform nodes to the Sink node as required based on workflow needs.
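The Save raw toggle described earlier implies a "first valid JSON" extraction when it is disabled. A minimal sketch of that behavior, assuming plain-text upstream output, might look like the following; the function name is hypothetical and this is not the platform's implementation.

```python
import json

def extract_first_json(text: str):
    """Return the first valid JSON value found in a raw text blob,
    mimicking the Sink node's behavior when "Save raw" is disabled.
    Illustrative sketch only.
    """
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch in "{[":  # candidate start of a JSON object or array
            try:
                obj, _ = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue
    return None  # no valid JSON found in the output
```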

Transform

The Transform module provides a Split and Merge node that enables users to manipulate data by either splitting it into smaller chunks or merging multiple segments. This functionality is particularly useful in data processing workflows where structured transformation of information is required.

There are two available node methods, listed as follows:

  • Split: When using the Split node method, input data is divided based on a user-specified strategy. The split operation enhances data processing, retrieval, and transformation by ensuring that each segment adheres to the selected criteria. The method supports several strategies for data segmentation:

    • Character: Splits the data at the character level.

    • Words: Divides the text into word-based segments.

    • Lambda: Uses a custom AWS Lambda function to determine the split logic.

      • Transform Type: The Lambda function receives an event payload where the entire input is wrapped under the key input. Ensure your function extracts and processes the data accordingly.

      • Lambda ARN: Enter the Amazon Resource Name (ARN) for the AWS Lambda function that will process and split the data.

      • Input Test Payload: Enter test data to validate the Lambda function’s behavior before deployment.

      • Test Button: Allows you to execute a test run of the configured Lambda function for validation.

      • Overwrite Credentials (Optional): Allows you to override existing authentication settings with new credentials.

    • Pages: Splits content based on document pagination.

    • Chunk Size: The number of characters, words, or pages allowed in each chunk.

    • Scratchpad: Serves as temporary storage or an intermediary space for processing and managing data within the workflow. The Split operation uses an input method, meaning it receives the output of the preceding Scratchpad as input for the current Split process.
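A custom split function for the Lambda strategy above can be sketched as a simple handler. Only the "input" key wrapping follows the description above; the 100-word chunk size and the list return shape are illustrative assumptions you would adapt to your workflow.

```python
def lambda_handler(event, context):
    """Illustrative custom split logic for the Transform node's Lambda strategy.
    The upstream output arrives wrapped under the "input" key; this sketch
    splits it into word-based chunks of at most 100 words each.
    """
    text = event["input"]
    words = text.split()
    chunk_size = 100  # assumed chunk size for this example
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]
```

As with the Sink node, the Input Test Payload and Test Button let you validate the handler against a sample event before deployment.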

  • Merge: The Merge node in the Transform module is used to combine multiple data segments into a unified structure. This is particularly useful when working with split data that needs to be reconstructed or when consolidating multiple data sources into a single format.

    • Merge Strategy: Determines the format in which the data will be merged. The available options include:

      • JSON: Combines data in a structured JSON format.

      • Lambda: Utilizes a custom AWS Lambda function to programmatically merge data.

        • Lambda ARN: Provide the AWS Lambda function’s Amazon Resource Name (ARN).

        • Input Test Payload: Sample input data to test the transformation logic.

        • Test Button: Allows you to validate the function’s processing behavior.

        • Overwrite Credentials (Optional): Allows you to override existing authentication settings with new credentials.

      • Text: Merges content into a plain text format.

    • Merge Method: Determines how the merging process handles existing data. Two options are available:

      • Overwrite: Replaces any existing data with the newly merged data, ensuring that only the most recent merged version is retained.

      • Append: Adds new data to an existing array or list within the JSON structure instead of replacing it.

    • Scratchpad: Temporary storage used to retain intermediate data before the final output.

      • Output: The merged data is written to an output location.

        • Method Selection:

          • Overwrite: Replaces the existing data.

          • Extend: Allows appending new data instead of replacing it.

Pass Through

The Pass Through node is a utility node that forwards the incoming payload to the output without modification. It performs no transformation, validation, or decision logic, and does not alter the structure or content of the data. At runtime, it routes the payload to the appropriate output based on the current state settings. Use it to preserve data flow continuity, connect nodes, or act as a placeholder in a workflow.

Deep agent

Select a deep agent prompt from the available options. Each prompt encompasses pre-configured tools and settings essential for processing inquiries and responding effectively.

Once you've selected the prompt, the canvas reveals the tools and configurations integrated into the agent prompt. You can update the configurations for the preset tools, but you cannot add or delete an agent tool from the recipe canvas. To edit tool types, or to add or delete tools, edit the deep agent prompt in the prompt playground. These tools empower the agent to analyze queries thoroughly and generate precise responses.

Once a deep agent is selected, the system displays the associated primary and fallback models along with the current version. Additionally, you can navigate to the Prompt Playground by clicking the redirect icon to view the agent details and make any necessary modifications.

The image below shows the recipe using the Deep Agent node, including its version and the redirect icon for Deep Agent prompts.

circle-info

Only published deep agent prompts are visible within the recipe.

Save and publish recipe

Saving the recipe preserves all configurations and connections made in the workflow for future reference or deployment.

After a recipe is created and saved, it must be published and deployed before it can be used. For end-to-end guidance on publishing and deployment, refer to the Recipe Management section.

Refer to the following video to create a workflow recipe with a Chat node.

Refer to the following video to create a workflow recipe with a Webhook node.
