# Create Prompt

Karini AI's prompt playground provides a guided experience that enables domain experts to become prompt engineers. You can develop high-quality prompts, test them, and track your prompt experimentation by saving prompt runs.

There are multiple ways to create a prompt:

### Create Prompt Using a Prompt Template

On the Prompt Playground, click **Add new** to start creating a prompt, then click **Prompt templates** to browse the available templates. Select a template relevant to your task and customize the prompt as described in the following section.

### Create New Prompt

1. On the **Prompt Playground**, click **Add new** to start creating a prompt.
2. Provide a prompt name and select an appropriate task from the available list.
   * [Classification](/karini-ai-documentation/prompt-management/prompt-task-types.md#classification)
   * [Summarization](/karini-ai-documentation/prompt-management/prompt-task-types.md#summarization)
   * [Evaluation](/karini-ai-documentation/prompt-management/prompt-task-types.md#evaluation)
   * [Agent](/karini-ai-documentation/prompt-management/agentic-prompts.md)
   * [Deep Agent (Beta)](/karini-ai-documentation/prompt-management/deep-agent-beta.md)
3. Construct your prompt with appropriate instructions and variables.
4. Variables can be added dynamically using the **Add variable** button.
5. Select the **User Input** option if the variable's value will be supplied by the user in the [copilot](/karini-ai-documentation/copilots.md) interface.
6. The constructed prompt, including the context and variables, can be viewed in the **Prompt** panel on the right-hand side.
7. Once the prompt is created, it can be tested on the [Test & Compare](/karini-ai-documentation/prompt-management/test-prompt.md) tab.
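Conceptually, the steps above produce a template with named variables that is filled in at run time. The sketch below is a minimal, hypothetical rendering in Python; the brace-style `{variable}` delimiters are an assumption for illustration, not necessarily Karini's exact variable syntax.

```python
# Illustrative sketch only: shows how a prompt containing the
# predefined `context` and `question` variables resolves at run time.
# The {variable} delimiter syntax is an assumption for this example.
template = (
    "You are a helpful assistant. Use only the provided context.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer concisely."
)

def render_prompt(template: str, **variables: str) -> str:
    """Substitute named variable values into the prompt template."""
    return template.format(**variables)

prompt = render_prompt(
    template,
    context="Karini AI supports prompt templates.",
    question="What does the playground support?",
)
print(prompt)
```

In production, the same substitution happens automatically: `context` is filled from the vector store and `question` from the copilot user's input.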

### Special Variables

When authoring prompts, the following variables are treated as special, predefined variables.

1. **Context**: This is a predefined variable for the LLM. For prompt testing, authors can provide appropriate text as the value for this variable. For production use, or use within recipes and copilots, the value is replaced with relevant context retrieved from the vector store. You can also upload a **context file** as input to this variable. The context file must be of type **.txt** or **.pdf**; the contents of a PDF file are automatically preprocessed with OCR before being added to the context. Context files are limited to 2 MB in size.
2. **Question:** This is a predefined variable for the LLM. For prompt testing, authors can provide a question related to the context as the value for this variable. Enable the **User input** checkbox on the Edit prompt page to display the question under **User Input** in the **Test & compare** section.
3. **Evaluation Metric Name:** A predefined variable for the LLM used in **Evaluation** prompts. For prompt testing, authors can provide the evaluation metric name as its value.
4. **Evaluation Metric Description:** A predefined variable used in **Evaluation** prompts. For prompt testing, authors can provide the evaluation metric description as its value.
5. **Evaluation Grading Criteria:** A predefined variable used in **Evaluation** prompts. For prompt testing, authors can provide the evaluation grading criteria as its value.
6. **Evaluation Input:** A predefined variable used in **Evaluation** prompts. For prompt testing, authors can provide the question as its value.
7. **Evaluation Output:** A predefined variable used in **Evaluation** prompts. For prompt testing, authors can provide an answer to compare against the ground-truth output for assessment.
8. **Evaluation Ground Truth:** A predefined variable used in **Evaluation** prompts. For prompt testing, authors can provide the ground-truth answer for assessment.
9. **File System Instructions:** This is a mandatory prompt variable that specifies what data to store, the required format (e.g., JSON, TXT, CSV), and the exact storage path. It ensures deterministic, structured, and auditable persistence. The agent must also maintain and continuously update `/<thread_id>/memories/index.txt` to catalog and summarize all files within the memories directory.
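
The constraints on the **Context** file described in item 1 (allowed types **.txt**/**.pdf**, 2 MB size limit) can be sketched as a simple pre-upload check. This is an illustrative, hypothetical client-side helper, not part of Karini's API:

```python
import os

# Context-file constraints from the documentation above.
ALLOWED_EXTENSIONS = {".txt", ".pdf"}
MAX_CONTEXT_BYTES = 2 * 1024 * 1024  # 2 MB

def validate_context_file(path: str, size_bytes: int) -> None:
    """Raise ValueError if the file cannot be used as a Context value.

    Hypothetical helper for illustration; checks only the documented
    type and size constraints.
    """
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"unsupported context file type: {ext or '(none)'}")
    if size_bytes > MAX_CONTEXT_BYTES:
        raise ValueError(f"context file exceeds 2 MB: {size_bytes} bytes")

validate_context_file("notes.txt", 1024)        # accepted
validate_context_file("report.pdf", 1_500_000)  # accepted
```

Files that pass these checks are then preprocessed by the platform (OCR for PDFs) before their contents are injected into the `Context` variable.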

