Blog

Latest

ChatGPT Responses: As Good as Your Data and Prompt

Brad Crain, 10 min read


Like many others, I have found that ChatGPT is not always accurate in its responses. This is certainly troubling and makes me pause a bit before rolling it out for use cases where accuracy is required. Companies are very concerned about the integrity and accuracy of their communications with clients and prospects - the best companies always keep this top of mind. So, when using ChatGPT (directly or through its API) as the core source for finding information and composing responses, should you care? Absolutely.


As an example, I was working on a chatbot to help older adults locate service providers that cover different service regions. Initially I had a list of providers, which I supplied to ChatGPT as a GitHub Markdown table inside the system role's content property. When I asked ChatGPT for the total number of providers covering a specific region, the number in its response was incorrect. I struggled with this for a while, checking my table data to make sure it was formatted correctly. Eventually, I asked ChatGPT itself what could be done to help it parse and examine the table more accurately, and I did more experimenting with prompts. I made the following two key changes, which have greatly improved the accuracy of ChatGPT's responses.
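To make the starting point concrete, here is a minimal sketch - not my actual bot - of passing a provider table in the system message and asking a counting question, using the openai Node.js package. The model name and table contents are purely illustrative.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical provider table; the real list is much larger.
const providerTable = `
| Provider           | Service Region |
| ------------------ | -------------- |
| Silver Care Co-op  | North County   |
| Elder Aid Partners | North County   |
| Golden Years Rides | South County   |
`;

async function countProviders(region: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    temperature: 0,       // counting questions benefit from deterministic output
    messages: [
      {
        role: "system",
        content:
          "You help older adults find service providers. " +
          "Answer only from the table below.\n" + providerTable,
      },
      { role: "user", content: `How many providers cover the ${region} region?` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

countProviders("North County").then(console.log);
```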

Continue reading...

ChatGPT Integration

Brad Crain, 10 min read

Today we are announcing a next-generation AI feature for eBotSpot Studio, enabled by OpenAI's ChatGPT API services.

 

Prior to this update, when a bot encountered a question it hadn't been trained on, it could respond in only one way: ask the user to rephrase. Now, using the capabilities of this new AI enhancement, you can connect your bot to a Large Language Model (LLM) and the bot can immediately start using your most current and useful data, along with additional few-shot learning content, to compose a response. Users of eBotSpot's chatbot authoring application can easily enable this enhancement for their chatbots.
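As a rough illustration of that fallback flow - not eBotSpot's actual implementation - a handler might forward the unrecognized utterance to the ChatGPT API together with a few example Q&A pairs as few-shot content. Every name, prompt, and example below is invented.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Hypothetical few-shot content drawn from material the bot already knows.
const fewShotExamples = [
  { question: "What are your office hours?", answer: "We are open 9am to 5pm, Monday through Friday." },
  { question: "Do you offer dental coverage?", answer: "Yes, dental is available as an optional rider." },
];

async function fallbackAnswer(utterance: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative
    messages: [
      { role: "system" as const, content: "Answer as a friendly insurance chatbot." },
      // Few-shot learning content supplied as prior user/assistant turns.
      ...fewShotExamples.flatMap((ex) => [
        { role: "user" as const, content: ex.question },
        { role: "assistant" as const, content: ex.answer },
      ]),
      { role: "user" as const, content: utterance },
    ],
  });
  return completion.choices[0].message.content ?? "Sorry, could you rephrase that?";
}
```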

Continue reading...

As I mentioned in a previous post, CLU is a very good improvement over LUIS. By upgrading to CLU your solution gets improved intent classification accuracy, multilingual support, one-click integration with OpenAI to help create example utterances, and optional integration with Azure's orchestration workflow service. These are all big wins.

For our chatbot solution, refactoring our chatbot authoring and runtime components from LUIS to CLU was straightforward. As I mentioned in this post, we first migrated our LUIS schema to CLU. The next step was to enhance our authoring application so the chatbot author could specify that they want to use CLU (and, in doing so, provide the endpoint, subscription key, and other required information), roughly along the lines of the sketch below.
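This is only a hypothetical shape for those authoring-time settings, not the actual eBotSpot Studio schema; the field names are invented.

```typescript
// Hypothetical per-bot NLU settings captured at authoring time.
interface NluSettings {
  provider: "LUIS" | "CLU";
  endpoint: string;        // e.g. https://<resource>.cognitiveservices.azure.com
  subscriptionKey: string;
  projectName?: string;    // CLU only
  deploymentName?: string; // CLU only
  appId?: string;          // LUIS only
}

const botNluSettings: NluSettings = {
  provider: "CLU",
  endpoint: "https://my-language-resource.cognitiveservices.azure.com",
  subscriptionKey: process.env.CLU_KEY ?? "",
  projectName: "insurance-chatbot",
  deploymentName: "production",
};
```

Continue reading...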

I needed to add additional example utterances to the CLU intents for my online insurance chatbot. CLU has two features for this: one is uploading a file of utterances, and the other is an integration with Azure OpenAI. The challenge was composing these additional utterances.

Being a lean organization, I decided to use ChatGPT for assistance. I took a hybrid approach: I prompted ChatGPT to create additional utterances in a JSON format for my intents and then uploaded those utterances into my Azure CLU schema.

My prompt instructed ChatGPT to provide its response as a JSON array, which is the format supported by CLU's upload-utterance-file feature.
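For illustration only, the prompt and the kind of response it produces looked roughly like the sketch below. The intent name and utterances are invented, and the exact fields CLU expects in an uploaded utterance file should be checked against the CLU documentation.

```typescript
// Illustrative prompt asking for utterances as a JSON array.
const prompt = `
Generate 10 example utterances for the intent "GetQuote" for an online
insurance chatbot. Respond only with a JSON array in which each element has
the shape {"text": "<utterance>", "intent": "GetQuote", "language": "en-us"}.
`;

// Trimmed example of the kind of array that comes back, which can then be
// saved to a file and imported into the CLU project.
const sampleUtterances = [
  { text: "I'd like a quote for term life insurance", intent: "GetQuote", language: "en-us" },
  { text: "How much would car insurance cost per month?", intent: "GetQuote", language: "en-us" },
];
```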

Continue reading...

I have enhanced our chatbot authoring solution so a chatbot author can choose to use either Microsoft's Language Understanding service (LUIS) or Azure's Conversational Language Understanding service (CLU) at runtime with a chatbot. In my previous post, I mentioned that my first impression - from spot testing with the LUIS and CLU authoring applications - was that CLU is a great improvement over LUIS. I have now done more research and, wow, yes, CLU really is a great improvement over LUIS.

Continue reading...

As mentioned in my last post, I need to modify our insurance chatbot so it uses CLU rather than LUIS. MSFT's documentation and tools are nicely designed for this task, and the migration of the LUIS schema to CLU worked flawlessly. Once I had that data in CLU, it took me a bit of time to read through MSFT's online CLU documentation, which was fine. With that knowledge, I did the required labeling, entity creation, and model training in CLU's application interface. Finally, I deployed my model.
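For context, querying a deployed CLU model from Node.js looks roughly like the sketch below. The endpoint, key, project name, and deployment name are placeholders, and the path and api-version reflect my reading of the CLU docs, so verify them before use.

```typescript
const endpoint = "https://my-language-resource.cognitiveservices.azure.com"; // placeholder
const key = process.env.CLU_KEY ?? "";

// Sends one utterance to the deployed CLU model and returns the raw analysis.
async function analyzeUtterance(text: string) {
  const res = await fetch(
    `${endpoint}/language/:analyze-conversations?api-version=2023-04-01`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        kind: "Conversation",
        analysisInput: {
          conversationItem: { id: "1", participantId: "user", text },
        },
        parameters: {
          projectName: "insurance-chatbot", // placeholder
          deploymentName: "production",     // placeholder
        },
      }),
    }
  );
  return res.json(); // top intent and entities are under result.prediction
}
```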

Continue Reading...

Microsoft Language Understanding (LUIS) is a natural language processing (NLP) service that allows developers to build custom language models for their applications. LUIS only provides the intelligence to understand the input text for a client application - such as a chatbot - and doesn't perform any actions.

LUIS has worked well for our bot solution, but unfortunately Microsoft is retiring LUIS. The replacement is Microsoft's Azure AI Language service "Conversational Language Understanding" (CLU), a cloud-based service that provides Natural Language Processing (NLP) features for building a custom natural language understanding model to predict intents and entities in conversational utterances. Microsoft states that LUIS required more examples to generalize certain concepts in intents and entities, while CLU's more advanced machine learning reduces the burden on customers by requiring significantly less data. If that is indeed true, it will be a needed improvement. In this blog post, I will provide an overview of my work using LUIS; later posts will discuss the results of the migration from LUIS to CLU, as well as my work with OpenAI.

I started using LUIS years ago when building targeted chatbots - Insurance quoting & enrolling chatbots, COVID vaccine chatbots - which I deployed on websites and Facebook.  For our bot solution – which included a bot authoring product and a bot runtime - I used LUIS for NLP, Microsoft Bot Framework as an enabler for the bot runtime, Cosmos DB (formerly known as Document DB) for storage, and Node.js as my platform. The solution was deployed on Azure and still is up-and-chatting/running.

eBotSpot architecture overview

Developers train their own models using LUIS's machine learning capabilities. In LUIS, an intent represents the user's intention, usually a task or action they want to perform and are therefore requesting of the chatbot. User intents are expressed through utterances (written text or voice). Within an utterance there may be an entity specific to the request; for example, in "What is the weather now in Seattle?", Seattle is an entity of type Location. I used LUIS's admin feature to define the intents I needed, specified the various entities and phrase lists, and then provided numerous example utterances. Using the LUIS application was easy; however, it is time consuming, as one needs to input numerous example utterances to make it reliable. Microsoft did offer tools to ingest content, which would certainly be a time saver (note: I didn't use those tools).
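To give a feel for what the trained model returns, here is a trimmed, illustrative example of a LUIS v3-style prediction for the weather utterance above; the intent name, scores, and entity layout are representative rather than taken from my app.

```typescript
// Representative (not real) prediction payload for the example utterance.
const prediction = {
  query: "What is the weather now in Seattle?",
  prediction: {
    topIntent: "GetWeather",
    intents: {
      GetWeather: { score: 0.97 },
      None: { score: 0.02 },
    },
    entities: {
      Location: ["Seattle"], // the entity pulled out of the utterance
    },
  },
};
```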

Continue Reading...

Quick Look: Sales Ops and Developers Using ChatGPT

ChatGPT saves time. More time for the gym?

I had lunch with a friend whose full-time job includes sales operations. He uses Tableau, Excel, and numerous other tools. He mentioned that he sometimes uses ChatGPT to generate SQL queries for data he needs to extract from the sales DBMS. In the past he would write these queries without ChatGPT, but by adopting ChatGPT he gets to the end result quicker. A star performer, I am sure he will use this free time wisely, whereas others may lengthen their time at the local gym.

The examples below illustrate performing this task both as raw SQL and as generated code for a Node.js application.
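The full examples are in the post; purely to give the flavor, here is a minimal sketch of the kind of query ChatGPT can draft from a plain-English request and how it might be run from Node.js with the pg client. The table and column names are invented.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the standard PG* environment variables

// The sort of query ChatGPT might draft for "total bookings by region for Q1".
// The sales table and its columns are hypothetical.
const bookingsByRegion = `
  SELECT region, SUM(amount) AS total_bookings
  FROM sales
  WHERE close_date >= '2023-01-01' AND close_date < '2023-04-01'
  GROUP BY region
  ORDER BY total_bookings DESC;
`;

async function run() {
  const { rows } = await pool.query(bookingsByRegion);
  console.table(rows);
  await pool.end();
}

run().catch(console.error);
```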

Continue Reading...