ChatGPT Integration
Azure AI Conversational Language Understanding (CLU), ChatGPT, Creating utterances, LLM training
Brad Crain, 10 min read
Today we are announcing a next-generation AI feature for eBotSpot Studio, powered by OpenAI's ChatGPT API services.
Prior to this update, when a bot encountered a question or response it had not been trained on, it could respond in only one way: by asking the user to rephrase. Now, using this new AI enhancement, you can connect your bot to a current, capable Large Language Model (LLM), and the bot can immediately draw on that model, along with additional few-shot learning content, to compose a response. Users of eBotSpot's chatbot authoring application can easily enable this enhancement for their chatbots.
A particularly interesting use case is enabling your chatbot to use both CLU and ChatGPT. In this scenario, your trained CLU model analyzes the user's input/utterance first. If the utterance matches one of the trained CLU intents, the chatbot provides the response the chat author has specified in the authoring product. If the utterance is not recognized, the OpenAI chat completions API is called and ChatGPT's generative result is displayed to the chat user. With this approach you can avoid the dreaded "I do not understand. Please rephrase." chatbot response.
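The CLU-first, ChatGPT-fallback flow can be sketched as follows. This is a minimal illustration, not eBotSpot's actual implementation: the intent table, keyword matcher, and fallback function are all stand-ins for the real CLU model and chat completions call.

```python
from typing import Optional

# Authored intents and their canned responses (hypothetical examples).
AUTHORED_RESPONSES = {
    "store_hours": "We are open Monday through Friday, 9am to 5pm.",
    "returns": "You can return any item within 30 days with a receipt.",
}

def recognize_intent(utterance: str) -> Optional[str]:
    """Stand-in for the trained CLU model: return an intent name or None."""
    keywords = {"hours": "store_hours", "return": "returns"}
    for word, intent in keywords.items():
        if word in utterance.lower():
            return intent
    return None

def ask_chatgpt(utterance: str) -> str:
    """Stand-in for the OpenAI chat completions fallback call."""
    return f"[generative answer for: {utterance}]"

def respond(utterance: str) -> str:
    intent = recognize_intent(utterance)
    if intent is not None:
        # CLU matched an intent: use the author-specified response.
        return AUTHORED_RESPONSES[intent]
    # No intent matched: fall back to ChatGPT instead of "please rephrase".
    return ask_chatgpt(utterance)
```

The key design point is that the fallback only fires when CLU has nothing to say, so authored responses always take priority over generative ones.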
What is critical is that your OpenAI model is primed to provide the correct response. To enable this, I have added ChatGPT training features within the chatbot authoring product. They are all easy to use by the bot author (IMHO); at runtime, the underlying application code builds out the prompt that is then passed to the chat completions API. Once this extended prompt has been submitted to the model, the model uses this information to generate a completion, which is then displayed by the chatbot.
These new training options found within the chatbot authoring product provide the following capabilities:
Teach ChatGPT how you want to talk to customers. For example, you can instruct it to speak casually or formally, be brief, respect certain limitations, and keep your company's tone of voice. Providing information for this field is highly recommended if you want to control the behavior of the assistant, and thus the response shown to the chatbot user.
Tailor the behavior of the chatbot's response. The bot authoring application has been enhanced so the bot author can enter an example conversation - user says, chatbot replies; this is a very easy way to perform few-shot learning right within the eBotSpot authoring product. For now the author can define just one user/assistant pair, but I will be extending the authoring application, and the bot runtime, to support multiple example conversations for few-shot learning.
Use the business profile information in formulating a response. When defining a chatbot in the authoring product, the authoring application has from day one let the user define information about their business - company description, contact information, list of products sold, who to contact, business links, as well as free-form fields where the chatbot author could write whatever their heart desired about their business. Now the bot author can specify, with a click of a checkbox, that they want the chatbot trained on this business information. This information is embedded in the prompt in real time. The following screenshot shows how easy it is for the chatbot author to define this business information and elect to use it to train the chatbot.
eBotSpot authoring product - OpenAI settings
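The three training options above map naturally onto the messages array that the chat completions API expects: tone instructions and business profile become the system message, and the example conversation becomes a few-shot user/assistant pair. The following is a minimal sketch with illustrative values; the helper name and field layout are my assumptions, not eBotSpot's actual runtime code.

```python
def build_messages(tone_instructions, example_pair, business_info, user_utterance):
    """Assemble a chat-completions messages array from the authoring options.

    tone_instructions: how the assistant should talk to customers.
    example_pair: optional (user, assistant) tuple for few-shot learning.
    business_info: optional business profile text, embedded at prompt time.
    """
    system_text = tone_instructions
    if business_info:
        # The business profile is embedded in the prompt in real time.
        system_text += "\n\nBusiness information:\n" + business_info

    messages = [{"role": "system", "content": system_text}]
    if example_pair:
        # One user/assistant pair acts as a few-shot example.
        user_example, assistant_example = example_pair
        messages.append({"role": "user", "content": user_example})
        messages.append({"role": "assistant", "content": assistant_example})
    messages.append({"role": "user", "content": user_utterance})
    return messages

# Illustrative values for a hypothetical business.
msgs = build_messages(
    "Speak formally and be brief. Keep Contoso's tone of voice.",
    ("Do you ship overseas?", "Yes, Contoso ships to over 40 countries."),
    "Contoso sells artisanal coffee. Support: help@contoso.example",
    "What do you sell?",
)
```

The actual user utterance always goes last, so the model answers it in the context of the tone, business profile, and example set before it.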
Under the hood: Few-shot learning is a subfield of machine learning and deep learning that aims to teach AI models how to learn from only a small number of labeled training examples. The goal of few-shot learning is to enable models to generalize to new, unseen data samples based on the small number of samples given during the training process. (ChatGPT, OpenAI, 11/2023)
Important Notes:
This support requires an API key from OpenAI to function. You can obtain an API key by signing up at https://platform.openai.com/account/api-keys.
The current integration supports the gpt-3.5-turbo and gpt-4 models from OpenAI. These are the same models found in the ChatGPT product from OpenAI.
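For reference, a single-turn call with the official OpenAI Python library looks roughly like this. The messages are illustrative; only `gpt-3.5-turbo` and `gpt-4` are valid model choices for this integration.

```python
import os

# Request parameters for the chat completions API.
request = {
    "model": "gpt-3.5-turbo",  # or "gpt-4"
    "messages": [
        {"role": "system", "content": "Speak formally and be brief."},
        {"role": "user", "content": "What are your store hours?"},
    ],
}

# The call itself requires an API key (OPENAI_API_KEY environment variable),
# obtainable at https://platform.openai.com/account/api-keys.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```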
The current implementation does not support multi-turn conversations as of 10/2023.