Assistant is an AI-powered tool built to provide real-time assistance to agents during calls. This article provides a guide on how to set up and use your assistant.
To start using the assistant, it needs to be configured first. If there is no assistant set up for your account, selecting Assistant from Tools will redirect you to the assistant settings page, where you can create the assistant configuration.
For the assistant to be started on calls, you need to add SIP devices and/or DID numbers (Fig. 1.)
The assistant recognizes speech from calls, so it must be aware of the conversation language; multiple languages can be selected.
Assistant behavior must be configured as well (Fig. 3.):
Instruction for agent speech - how the assistant should respond if the last message in the conversation is from the agent. If left blank, the assistant will not respond to conversations where the last message is from the agent.
Instruction for client speech - how the assistant should respond if the last message in the conversation is from the client. If left blank, the assistant will not respond to conversations where the last message is from the client.
Assistant instructions - general guidelines or instructions for the assistant.
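The way these three fields interact can be pictured with a minimal, hypothetical Python sketch. The function, field names, and request shape below are assumptions for illustration, not the product's actual API; the point is that one per-turn instruction is picked based on who spoke last, and a blank instruction means no response.

```python
def build_request(general, agent_instr, client_instr, conversation):
    """Hypothetical sketch of how the instruction fields could be combined.

    conversation is a list of (speaker, text) tuples; the last entry
    determines which per-turn instruction applies.
    """
    if not conversation:
        return None
    speaker, _ = conversation[-1]
    instruction = agent_instr if speaker == "agent" else client_instr
    if not instruction:
        # Blank instruction for this speaker: the assistant stays silent.
        return None
    return {
        "system": general,          # Assistant instructions
        "instruction": instruction, # per-turn instruction
        "messages": conversation,
    }
```

For example, with Instruction for agent speech left blank, a conversation ending in agent speech produces no request at all.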
By default, assistant responses are streamed, meaning text chunks start to appear in the assistant response window as soon as they are received. If streaming is disabled, a response is shown only after the assistant has generated it in full. You can configure this functionality under Advanced settings (Fig. 4.)
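The difference between the two modes can be sketched as follows. This is an illustrative snippet with hypothetical names: `chunks` stands for the pieces of text as they arrive, and `display` for whatever renders them in the response window.

```python
def show_streamed(chunks, display):
    """Streaming enabled: render each chunk as soon as it arrives."""
    for chunk in chunks:
        display(chunk)

def show_buffered(chunks, display):
    """Streaming disabled: wait for the whole response, then render once."""
    display("".join(chunks))
```

With streaming enabled the window updates on every chunk; with it disabled there is a single update containing the complete response.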
OpenAI API Key - if you would like to use your own API key, you can insert it here. If you don't have your own, you can leave this field blank.
Output token limit - can be used to limit how much text the assistant responses will contain. If a response does not fit within the defined limit, the end of the response is truncated. 256 should be more than enough here.
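The truncation behavior itself is simple; a hypothetical sketch (how text is split into tokens is omitted, and the function name is an assumption):

```python
def apply_token_limit(tokens, limit=256):
    # Keep at most `limit` tokens; anything past the limit is cut off
    # from the end of the response.
    return tokens[:limit]
```

So a response shorter than the limit passes through unchanged, while a longer one loses its tail.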
Conversation context length can be used to configure the maximum number of messages the conversation can contain before the oldest message is removed. A request is sent to the assistant after the current speaker has finished talking. For example, a new conversation is started and the agent says “Hello”. The assistant receives this message and reacts according to the instructions you have added. The client then responds with “Hello”, and the same thing happens, only this time both messages are sent: the agent's “Hello” and the client's “Hello”. The assistant now recognizes it as a conversation. If the context length is set to 2, then regardless of who speaks next, the agent's “Hello” will be removed from the conversation that is sent to the assistant.
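The sliding window described above behaves like a bounded queue: once it is full, adding a new message drops the oldest one. A minimal Python sketch using `collections.deque` (the tuple message format is an assumption for illustration):

```python
from collections import deque

def make_conversation(context_length):
    # deque with maxlen: appending to a full deque discards the oldest item.
    return deque(maxlen=context_length)

convo = make_conversation(2)
convo.append(("agent", "Hello"))             # sent with: agent's "Hello"
convo.append(("client", "Hello"))            # sent with both messages
convo.append(("agent", "How can I help?"))   # agent's first "Hello" is dropped
```

After the third message, the window holds only the two most recent messages, exactly as in the example with context length 2.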
For example, an assistant could be configured as follows:
Assistant instructions: contains information about all of the products offered by your business, titled “Our products”
Instruction for client speech: answer client questions regarding products using only the information provided in “Our products” (a reference to the Assistant instructions)
Instruction for agent speech: rate how accurately the agent responded to the client's questions about the product
As a result, the assistant will output answers based on the instructions you add. Try to keep the instructions simple. If you want the assistant to respond only to the client's questions and do nothing when the agent speaks, leave Instruction for agent speech (as seen in Fig. 3.) blank.
After the assistant is configured, navigate to the Live Dashboard (Fig. 5.)
All of the responses from the assistant are displayed in a conversational style. Assistant responses to client speech appear as blue blocks on the left side; responses to agent speech appear as green blocks on the right side. You can clear the responses section using the Clear button at the top right; this will in no way affect the ongoing call.
Alternatively, the assistant dashboard can be accessed in Webphone by clicking the assistant icon highlighted in Fig. 6.