ChatGPT Integration

Azure AI Conversational Language Understanding (CLU), ChatGPT, Creating utterances, LLM training


Brad Crain, 10 min read

Today we are announcing a next-generation AI feature for eBotSpot Studio, powered by the OpenAI ChatGPT API.

 

Prior to this update, when a bot encountered a question it hadn't been trained on, it could respond in only one way: ask the user to rephrase. Now, with this new AI enhancement, you can connect your bot to a current, capable Large Language Model (LLM), and the bot can immediately use that model, along with additional few-shot learning content, to compose a response. Users of eBotSpot's chatbot authoring application can easily enable this enhancement for their chatbots.


A particularly interesting use case, I think, is enabling your chatbot to use both CLU and ChatGPT. In this scenario, your trained CLU model analyzes the user's utterance first. If the utterance matches one of the trained CLU intents, the chatbot returns the response the chat author specified in the authoring product. If the utterance is not recognized, the OpenAI chat completions API is called and ChatGPT's generative result is displayed to the chat user. With this approach you can avoid the dreaded "I do not understand. Please rephrase." chatbot response.
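The CLU-first, ChatGPT-fallback flow described above can be sketched as follows. This is an illustrative sketch, not eBotSpot's actual implementation: the helper functions and the confidence threshold are stand-ins I've invented for the example, with the CLU and OpenAI calls stubbed out.

```python
# Sketch of CLU-first routing with a ChatGPT fallback.
# All helpers below are hypothetical stand-ins, not eBotSpot code.

CONFIDENCE_THRESHOLD = 0.7  # illustrative cutoff for "recognized" intents


def analyze_with_clu(utterance):
    """Stand-in for an Azure CLU analyze call: returns (intent, confidence)."""
    known = {
        "store hours": ("GetHours", 0.95),
        "reset password": ("ResetPassword", 0.90),
    }
    return known.get(utterance.lower(), ("None", 0.1))


def authored_response(intent):
    """Stand-in for the response the chat author configured per intent."""
    responses = {
        "GetHours": "We are open 9-5, Monday through Friday.",
        "ResetPassword": "Use the password reset link on the sign-in page.",
    }
    return responses.get(intent)


def chatgpt_fallback(utterance):
    """Stand-in for an OpenAI chat completions call."""
    return f"[generative answer for: {utterance}]"


def route(utterance):
    """Return the authored response if CLU recognizes the intent,
    otherwise fall back to the generative model."""
    intent, confidence = analyze_with_clu(utterance)
    if intent != "None" and confidence >= CONFIDENCE_THRESHOLD:
        return authored_response(intent)
    return chatgpt_fallback(utterance)
```

In a real deployment, `analyze_with_clu` would call the Azure CLU REST endpoint and `chatgpt_fallback` would call the OpenAI chat completions API; the routing decision itself stays this simple.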

What is critical is that your OpenAI model is primed to provide the correct response. To enable this, I have added ChatGPT training features to the chatbot authoring product. These are easy for the bot author to use (IMHO): at runtime, the underlying application code builds out the prompt, which is then passed to the chat completions API. The model uses this extended prompt to generate a completion, which the chatbot then displays.
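One way the runtime could assemble the authored few-shot content into an extended prompt is sketched below. The function name and input shapes are assumptions for illustration, not eBotSpot's actual schema; the message format itself is the standard one accepted by the OpenAI chat completions API.

```python
# Sketch: build a chat completions "messages" list from authored
# few-shot Q/A pairs. Names and shapes here are illustrative.

def build_messages(system_instructions, examples, user_utterance):
    """Assemble the extended prompt passed to the chat completions API.

    system_instructions: the bot author's guidance for the model.
    examples: list of (question, answer) pairs authored as few-shot content.
    user_utterance: the live user input the model should answer.
    """
    messages = [{"role": "system", "content": system_instructions}]
    for question, answer in examples:
        # Each authored Q/A pair becomes a user/assistant exchange,
        # giving the model in-context examples to imitate.
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    # The live utterance goes last, so the next completion answers it.
    messages.append({"role": "user", "content": user_utterance})
    return messages
```

The resulting list would be passed as the `messages` parameter of a chat completions request; the model's reply is then displayed by the chatbot.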

 

These new training options, found within the chatbot authoring product, are shown below:

eBotSpot authoring product - OpenAI settings

Under the hood: Few-shot learning is a subfield of machine learning and deep learning that aims to teach AI models to learn from only a small number of labeled training examples. The goal of few-shot learning is to enable models to generalize to new, unseen data samples based on the small number of samples given during training. (ChatGPT, OpenAI, 11/2023)

Important Notes: