Documentation

We're excited to see what you build with Shiro. This documentation covers everything you need to know to start building and using your development environment for prompt engineering.

Creating New Prompts
Prompts are the backbone of PromptShiro. Using the Workshops feature, you can create test cases to test variations on prompts. You can create multiple prompts that use different prompt instructions, different parameter values such as temperature or language model, and different providers (e.g., OpenAI gpt-4-32k vs. gpt-3.5-turbo). You can then run your test cases, compare the results, tweak your prompts, and continue iterating until you have found an optimized prompt setup. Finally, you can deploy a prompt so it can be accessed via the API. A deployed prompt cannot be deleted unless the deployment is first deleted by an Admin user.

All prompts are version-controlled: every time you edit a prompt, the version history is tracked along with the user who made the edit, and the newly edited prompt becomes the current version. You can continue to edit a deployed prompt, but the deployment will not change unless you redeploy the current version. You can view the full version history, share links to each version, and revert to any version to make it the current version.

You can watch this short tutorial video to learn more about creating new prompts, or read the text version below.

To create a new prompt, click the "Prompts" tab at the top of your screen when logged into your account. [Click the Prompts tab] You'll now be on the Prompts index page, which shows all the prompts for your team account. To create a new prompt, click the "New Prompt" button. [Click a New Prompt]

Add a name for your prompt, then add the prompt text in the body field. Prompts can contain variables; we use the Liquid template engine to enable this. For example, if you'd like to include a variable for first_name in your prompt text, the text might look like this:

```plaintext
Write a welcome letter to {{ first_name }}.
```

[Create a Prompt]

You will also need to manually add each of the variables used in your body in the Prompt Variables section; click the "+ Add a Variable" button to add more variables. This allows you to enter values for the variables later and also to run tests that ensure the variables are included in the prompt body. Click the "Create Prompt" button to save the prompt. All users you invite to your team account will have access to the same prompts in the account.
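To make the substitution concrete, here is a minimal sketch of how such a body renders, using the open-source liquid Ruby gem. Shiro handles this rendering for you; the snippet and the value "Ada" are purely illustrative.

```ruby
require "liquid" # gem install liquid

# Parse the prompt body, then substitute a value for first_name.
template = Liquid::Template.parse("Write a welcome letter to {{ first_name }}.")
puts template.render("first_name" => "Ada")
# => Write a welcome letter to Ada.
```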
Create a New Workshop and Add Prompts and Test Cases
Workshops allow you to test a set of prompt variants against a set of test cases. The prompt variants might include different instructions in the body or use different models or model parameters. The test cases test different variable values against all the prompts in the workshop. All prompts used in a workshop must include the same set of variables. When the tests are run, a test result is created for each test case for each prompt. You can then compare the results to determine which prompts are performing best, iterate on the prompts, and run the tests again, repeating the process until you have optimized a prompt to use in production.

To create a new workshop, click on the "Workshops" tab at the top of your screen when logged in. [Click the Workshops tab] This takes you to the workshops index, which shows all the workshops in your team account. To create a new workshop, click the "New Workshop" button. Enter a name for your workshop and then add any prompts you'd like to test in this workshop. The prompts you test must take the same set of variables. [Create a Workshop] Click the "Create Workshop" button to save the workshop.

Test Cases

Add a test case by clicking the "Add a New Test Case" button. Fill in the values you'd like to use to test against the prompts included in your workshop. [Create a Test Case] After clicking the "Create Test Case" button you will be redirected back to the workshop. To run the tests, click the "Run Tests" button. The tests will run in the background and populate the Test Case Results view when they have completed.
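As noted above, running the tests creates one result per test case per prompt. A toy sketch of that fan-out (illustrative names only, not Shiro's actual API):

```ruby
# A workshop with 2 prompt variants and 3 test cases yields 2 x 3 = 6 results.
prompts    = ["variant_a", "variant_b"]
test_cases = [
  {"first_name" => "Ada"},
  {"first_name" => "Grace"},
  {"first_name" => "Alan"}
]

# Each pairing would also carry the model's output once the run completes.
results = prompts.product(test_cases)
puts results.size
# => 6
```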
Add LLM Provider Keys to Run Tests and Deploy Prompts
In order to run tests and deploy your prompts, you will need to first add API keys for any Large Language Model providers (e.g., OpenAI) used by your prompts. Note that Shiro also uses API keys to access the Shiro API, so don't confuse the two; in this article, I will be referring to LLM Provider API Keys.

For each LLM provider, you can add two provider keys:
• Development API Key (used when running tests in workshops)
• Production API Key (used for deployed prompts)

Only accounts with admin permissions can manage Provider Keys. To add a new provider key, click the "Provider Keys" tab. [Click the Provider Keys tab] Click the "New Provider Key" button to create a new provider key. Then, on the next screen, enter the API key and select a provider and an Environment Type: [Enter API Key Details]

Provider keys must be assigned to either the PRODUCTION or DEVELOPMENT environment. Workshops and test cases will always use the DEVELOPMENT environment provider key. Prompts can be deployed to either environment, and a deployed prompt will use the corresponding provider key for that environment when executed. The environment must also be specified when generating completions for a prompt deployment through the Shiro API GenerateCompletion endpoint.

Encryption at Rest

We care about data security, and all of your Provider API keys are stored using encryption at rest at the database level. In addition, we also utilize application-level encryption, which adds an additional security layer: if an attacker gained access to our database, a snapshot of it, or our application logs, they wouldn't be able to make sense of the encrypted information. Also, the values for all Provider API Keys are masked in the Shiro UI and cannot be viewed in full after they have been entered. To update a Provider API Key, users must re-enter the full key.
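Conceptually, the key that gets used is a lookup on the (provider, environment) pair. A hedged sketch of that rule (not Shiro's actual implementation; the key values are placeholders):

```ruby
# Workshops always resolve the DEVELOPMENT key; a deployed prompt resolves
# the key for the environment it was deployed to.
PROVIDER_KEYS = {
  ["openai", "DEVELOPMENT"] => "sk-dev-placeholder",
  ["openai", "PRODUCTION"]  => "sk-prod-placeholder"
}.freeze

def provider_key_for(provider, environment)
  PROVIDER_KEYS.fetch([provider, environment]) do
    raise "No #{environment} provider key configured for #{provider}"
  end
end

provider_key_for("openai", "DEVELOPMENT") # used by workshop test runs
provider_key_for("openai", "PRODUCTION")  # used by a PRODUCTION deployment
```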
Deploy a Prompt
Deploying a prompt makes it available via the API and pins the prompt version that will be used in production. Deployments are assigned to specific environments, for example, Development or Production. Deployed prompts cannot be deleted, and only accounts with admin permissions can delete deployments.

To deploy a prompt, navigate to the prompt and click the "Deploy" button: [Deploy a prompt] Enter a name for the deployment and choose whether you want to create a new deployment or update an existing deployment. Choose the environment for the deployment and then click the "Create Prompt Deployment" button. [Create or Update a Deployment]

You will then be redirected to the deployment page. Here you can copy the ID of the deployment for use in the API. The deployment will always use the prompt version shown in the Latest Prompt field on the deployment page. [Deployment page]

Editing the prompt will create a new version of the prompt; however, the deployment will continue to use the pinned version in the Latest Prompt field. To update the deployment to use the new version of the prompt, deploy the new version and choose the "update existing" option. Select the deployment to update and then click the "Create Prompt Deployment" button. [Update Existing Deployment] The deployment will then use the new version of the prompt as the Latest Prompt.
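The pinning behavior can be summarized as: a deployment holds a reference to one prompt version, and editing the prompt never moves that reference; only redeploying with "update existing" does. A hypothetical sketch (not Shiro's actual schema):

```ruby
# Illustrative model of version pinning.
prompt     = {versions: ["v1"], current: "v1"}
deployment = {environment: "PRODUCTION", latest_prompt: prompt[:current]}

# Editing the prompt creates v2 and makes it the current version...
prompt[:versions] << "v2"
prompt[:current] = "v2"

deployment[:latest_prompt] # => "v1" (unchanged until you redeploy)

# Redeploying with the "update existing" option moves the pin.
deployment[:latest_prompt] = prompt[:current] # => "v2"
```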
Understanding Variables with Liquid
Prompt bodies can use Liquid, an open-source template engine created by Shopify that allows for dynamic content generation. This guide shows you how to use Liquid syntax to create dynamic prompts with variables, tags, and filters.

Variables

The most basic kind of expression is just the name of a variable. Liquid variables should consist of alphanumeric characters and underscores, should always start with a letter, and do not have any kind of leading sigil (that is, they look like var_name, not $var_name). Variables in Liquid are used to store and display data. They are defined by placing them between double curly braces {{ }}. For example, {{ product_title }} would display the title of a product.

Array or hash access. If you have a variable whose value is an array or hash, you can use a single value from that array/hash as follows:
• my_variable[<KEY EXPRESSION>] — The name of the variable, followed immediately by square brackets containing a key expression. For arrays, the key must be a literal integer or an expression that resolves to an integer. For hashes, the key must be a literal quoted string or an expression that resolves to a string.
• my_hash.key — Hashes also allow a shorter "dot" notation, where the name of the variable is followed by a period and the name of a key. This only works with keys that don't contain spaces, and (unlike the square bracket notation) does not allow the use of a key name stored in a variable.
• Note: if the value of an access expression is also an array or hash, you can access values from it in the same way, and can even combine the two methods. (For example, site.posts[34].title.)

Variables in Prompts

A prompt which uses variables to pass data might look like this:

```ruby
# Prompt
Prompt.create(
  name: "Welcome letter standard",
  body: "Write a new hire welcome letter to new employee whose first name is " \
    "{{ first_name }}. {{ first_name }} has been hired as a {{ job_title }}. " \
    "The {{ job_title }} role is within the {{ department.name }} department. " \
    "Let them know that their new company email will be {{ email }}."
)
```

To create a hash for the Liquid template in the prompt body, you need to provide values for each of the variables used within the template. In this case, the variables are first_name, job_title, email, and department.
• first_name is a string.
• job_title is a string.
• email is a string.
• department is a hash containing a key name which contains a string.

```ruby
# Variable values
{
  "first_name" => "James",
  "job_title" => "Prompt Engineer",
  "email" => "james@fakeexample.com",
  "department" => {"name" => "Content"}
}
```

When rendering the Liquid template with this hash, it will replace {{ first_name }}, {{ job_title }}, and {{ email }} with "James", "Prompt Engineer", and "james@fakeexample.com", respectively. It will also replace {{ department.name }} with "Content".

```erb
# Constructed prompt text
Write a new hire welcome letter to new employee whose first name is James. James has been hired as a Prompt Engineer. The Prompt Engineer role is within the Content department. Let them know that their new company email will be james@fakeexample.com.
```
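You can try these examples directly with the open-source liquid gem, which also demonstrates the square-bracket and dot access notations. The job_title and department values come from this section; the posts array is added here only to illustrate index access.

```ruby
require "liquid" # gem install liquid

body = "The {{ job_title }} role is within the {{ department.name }} department."
values = {
  "job_title"  => "Prompt Engineer",
  "department" => {"name" => "Content"},
  "posts"      => [{"title" => "Hello"}]
}

puts Liquid::Template.parse(body).render(values)
# => The Prompt Engineer role is within the Content department.

# Square-bracket (array index) and dot (hash key) access can be combined.
puts Liquid::Template.parse("{{ posts[0].title }}").render(values)
# => Hello
```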
Tags

Tags in Liquid are used to control the flow of the template. They are defined by placing them between {% %} delimiters. For example, {% if product_available %} would only display the content inside the if statement if the product is available. See a full list of tags.

Using Tags in Prompts

Using tags for control structures, like if statements, lets you add conditional logic to your prompts.

```ruby
# Prompt
Prompt.create(
  name: "Welcome letter standard",
  body: "Write a new hire welcome letter to new employee whose first name is " \
    "{{ first_name }}. {{ first_name }} has been hired as a {{ job_title }}. " \
    "Let them know that their new company email will be {{ email }}. " \
    "The {{ job_title }} role is within the {{ department.name }} department. " \
    "{% if department.code == 'IT' %}" \
    "Also let them know that since they are in the {{ department.name }} department " \
    "their company laptop will be arriving by mail shortly.{% endif %}"
)
```

In this case, the variables are first_name, job_title, email, and department.
• first_name is a string.
• job_title is a string.
• email is a string.
• department is a hash containing a key name which contains a string and a key code which also contains a string.

```ruby
# Variable values
{
  "first_name" => "Alice",
  "job_title" => "Software Developer",
  "email" => "alice@fakeexample.com",
  "department" => {"name" => "Information Technology", "code" => "IT"}
}
```

When rendering the Liquid template with this hash, it will replace {{ first_name }}, {{ job_title }}, and {{ email }} with "Alice", "Software Developer", and "alice@fakeexample.com", respectively. It will also replace {{ department.name }} with "Information Technology". The {% if %} condition will evaluate based on the code key within the department hash. Since this condition evaluates to true, it will print the additional sentence, replacing {{ department.name }} with "Information Technology" in that sentence as well.

```erb
# Constructed prompt text
Write a new hire welcome letter to new employee whose first name is Alice. Alice has been hired as a Software Developer. Let them know that their new company email will be alice@fakeexample.com. The Software Developer role is within the Information Technology department. Also let them know that since they are in the Information Technology department their company laptop will be arriving by mail shortly.
```

Filters

Filters are used to modify the output of a variable. They are defined by placing them after the variable, separated by a pipe symbol |. For example, {{ product_price | money }} would display the price of a product formatted as currency. See a full list of filters.
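Note that money is a Shopify-specific filter; core Liquid ships filters such as upcase and truncate. A runnable sketch with the liquid gem (the variable names and values here are illustrative):

```ruby
require "liquid" # gem install liquid

template = Liquid::Template.parse(
  "Hello {{ first_name | upcase }}. Bio: {{ bio | truncate: 20 }}"
)
# truncate shortens the string to 20 characters total, including the
# appended "..." ellipsis.
puts template.render(
  "first_name" => "ada",
  "bio"        => "Pioneer of computing and mathematics"
)
# => Hello ADA. Bio: Pioneer of comput...
```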
Additional Resources

There is much more functionality that can be achieved, so please review the Liquid Documentation for more resources.

Invite and manage users for your team account
After signing up for a team account, you can invite users to that account through your "Accounts" dashboard. To access the Accounts dashboard, click on your team name in the dropdown menu in the upper right-hand corner under your profile picture: [Click on your team name from the dropdown]

From the Account detail page, you can see and edit the team name for the account and the account avatar image. You will also see all users who have access to your team account. You can also edit users here to change their account permissions or to delete them.

To add a new user, click the "Invite A User" button from the Account detail page. [Click Invite a User] Fill out the new user form with the user's name and email address, as well as the permissions role (admin or member) they should have. For more information on permissions, view this article. [Fill in User Invitation Details] Click the "Send invitation" button to submit the form. The user will then receive an invitation via email with a link to sign up. They will automatically be added to the team account as a user during signup.
Getting Started with the Shiro API
The Shiro API is organized around REST. Our API has predictable resource-oriented URLs, accepts form-encoded request bodies, returns JSON-encoded responses, and uses standard HTTP response codes, authentication, and verbs. The Shiro API doesn't support bulk updates; you can work on only one object per request.

Authentication

The Shiro API uses API keys to authenticate requests. Note that we also allow users to store LLM Provider API Keys, so don't confuse the two; in this article, I will be referring to Shiro API keys for using the Shiro API. Your Shiro API keys carry many privileges, so be sure to keep them secure! Do not share your secret API keys in publicly accessible areas such as GitHub, client-side code, and so forth. All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.

Managing Shiro API Keys

You can view and manage your Shiro API keys in the Shiro Dashboard. To create a new API key, choose "Profile" from the dropdown menu under your name, then click the "API" tab: [Manage API Keys] When viewing your API key, you will see a sample curl request using the token as the authorization bearer: [API Key Details]

Sample curl request:

```bash
curl https://openshiro.com/api/v1/me \
  -H "Accept: application/json" \
  -H "Authorization: Bearer cXFroMHihK7tiuf6ibWH4iGP"
```
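The same request can be made from Ruby's standard library; a minimal sketch, using the placeholder token from the sample above:

```ruby
require "net/http"
require "json"

uri = URI("https://openshiro.com/api/v1/me")
request = Net::HTTP::Get.new(uri)
request["Accept"]        = "application/json"
request["Authorization"] = "Bearer cXFroMHihK7tiuf6ibWH4iGP"

# HTTPS is required; plain HTTP calls will fail.
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts JSON.parse(response.body)
```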
API Documentation

To view the specific endpoints and details, please view the Shiro API Documentation.

User permissions options for your team account
There are two permission levels available for users added to your team account: Admin Users and Member Users.

Admin Users

Admin-only privileges:
• Edit a deployment's environment (e.g. change from development to production)
• Delete deployments
• View API keys
• Manage API keys

Admins can also manage your team account, including:
• Edit the team account name
• Edit the team account profile picture
• Invite users to the account
• Delete invited users
• Change permissions for invited users
• View the billing portal
• Edit the payment method
• Cancel the account

Member Users

Admin users also have access to all member user functionality.
• Create and edit Prompts
• Delete a Prompt (unless it is deployed)
• Access all Prompt versions
• Create, edit, and delete Workshops
• Create, edit, and delete Test Cases
• View Test Results
• Deploy a Prompt
• Edit a Deployment name
How to update the email address in your profile
There are two main steps to update your email address:
1. Set a new email address in your profile
2. Confirm your new email address

Step 1: Set a new email address

Log into your account, then select "Profile" from the dropdown menu in the upper right-hand corner under your profile picture. [Select "Profile" from the navigation] Then, from the profile page, enter your new email address in the "Email" field and click the "Update" button. [Enter a new email address and click Update]

Step 2: Confirm your new email address

The system will send a confirmation email to your new email address. You must open this email and click the "confirm" link in order to start using the new email address. Until you confirm the new address, your old address will continue to be used for all communication and for logging into your account.
Using Evaluation Metrics for Quantitative Analysis of Prompts
When adding test cases, you can also optionally add one or more Evaluation Metrics (evals) for each test case. In your workshop, you'll have a set of one or more prompts and one or more test cases. Each test case contains a set of input values for the variables used in the prompts, and each test case runs against each prompt. The evals will also be run for each test case, and each eval will return a score between 0 and 1. Evals allow you to enter a target value to help measure the quality of the AI-generated response to the prompt. You can run your test cases and see all responses and eval metrics. In addition to visually inspecting the AI-generated responses for qualitative analysis, you can also use the eval scores for quantitative analysis. These scores can facilitate the process of iteratively modifying prompts to improve the eval scores. They can also help you compare AI-generated results over time to monitor for model drift. You can use any of the following Metric Types:

Exact Match

For an Exact Match evaluation, input the exact response expected in the target value field. This can be helpful for classification tasks or sentiment analysis. For example, if your prompt was:

```text
prompt = "Classify the text into neutral, negative or positive. {{ text }} Sentiment: "
```

For a test case where the text input variable value was "This movie is definitely one of my favorite movies of its kind", you could input "positive" in the target value field. This evaluation would score a "1" if the AI-generated response exactly matches "positive" for this test case; otherwise, it scores a "0". The Exact Match metric type is ideal for classification tasks where precision in the response is crucial.

Regex Match

For the Regex Match evaluation, input a regex pattern in the target value field to specify the format or characteristics the response should meet. This is particularly useful for checking whether responses adhere to expected patterns, allowing for flexibility in the answers. For example, if your prompt is:

```text
prompt = "Provide a summary of the customer's feedback in three words or less. {{ text }} Summary: "
```

If you want your eval to check that the response is exactly three words, no more, no less, you could use the regex pattern ^\b(\w+\b\s?){3}$ in the target value field. This pattern asserts that the response must consist of exactly three words. The evaluation will score a "1" if the AI's response matches this pattern, and a "0" if it does not. The Regex Match metric type is useful for ensuring responses meet specific formatting rules or contain certain elements without requiring exact text matches. Another example would be using a regex pattern to check whether a word or set of words is included in the AI-generated text. For example, if you specified that the AI should cite its sources in the format "citation:", you could enter the regex pattern \b(citation:)\b in the target value field. The evaluation will score a "1" if the AI-generated response matches this pattern and a "0" if it does not.
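Both of these metric types boil down to a simple 0-or-1 score. An illustrative sketch of the scoring rule (not Shiro's actual implementation):

```ruby
# Exact Match: 1 only when the response equals the target verbatim.
def exact_match_score(response, target)
  response == target ? 1 : 0
end

# Regex Match: 1 when the response matches the target pattern.
def regex_match_score(response, pattern)
  response.match?(Regexp.new(pattern)) ? 1 : 0
end

exact_match_score("positive", "positive")                     # => 1
regex_match_score("very happy customer", '^\b(\w+\b\s?){3}$') # => 1
```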
Similarity

The Similarity evaluation method allows for a more nuanced assessment of the AI's response by comparing the semantic closeness of the AI-generated text to a target text. This method is valuable for tasks where the exact wording may vary but the underlying meaning or intent should be consistent. For example, if your prompt is:

```text
prompt = "Summarize the main idea of the following text in a single sentence. {{ text }} Summary: "
```

If the variable value is a block of text about a complex topic, you would input a concise summary that captures the essence of the topic in the target value field. For instance, if the topic is renewable energy, you might enter "Renewable energy sources are essential for sustainable future energy solutions." as the target text. The similarity evaluation then computes how closely the AI's summary matches the meaning of your target summary, scoring closer to "1" for highly similar responses and closer to "0" for responses that diverge significantly in content or meaning. This evaluation type is especially suited for summarization, paraphrasing, creative writing, email copy, marketing copy, or any task where conveying the same idea in different words is acceptable or desired.
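Under the hood, a similarity eval compares the meaning of two texts rather than their exact characters. The following toy sketch uses a bag-of-words vector and cosine similarity purely to illustrate the scoring idea; the actual evaluation uses a semantic model, so real scores will differ.

```ruby
# Toy bag-of-words "embedding" for illustration only; a real similarity
# eval would use a semantic embedding model instead.
def embed(text, vocab)
  words = text.downcase.scan(/\w+/)
  vocab.map { |w| words.count(w).to_f }
end

def cosine_similarity(a, b)
  dot   = a.zip(b).sum { |x, y| x * y }
  norm  = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  denom = norm.(a) * norm.(b)
  denom.zero? ? 0.0 : dot / denom
end

vocab    = %w[renewable energy sustainable future solutions essential]
target   = "Renewable energy sources are essential for sustainable future energy solutions."
response = "Sustainable energy from renewable sources is essential for the future."

puts cosine_similarity(embed(target, vocab), embed(response, vocab)).round(2)
# => 0.89
```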
