Build a WhatsApp AI-Powered Chatbot with GPT-3, using Mantium & Twilio API

Blessing Adesiji
Analytics Vidhya

In this tutorial, we will build a WhatsApp AI-powered bot capable of conversational question and answering (QA). This type of bot is a generative chatbot, in contrast to the more common retrieval-based chatbot.

To create the bot, we will use Mantium’s Python library, Twilio WhatsApp API, and the Flask framework. The Mantium App/API enables us to get a generative response, as it supports and provides access to the OpenAI’s GPT-3 model.

Any questions? Join the Mantium Discord channel.

Below is what we are going to build at the end of this tutorial.

Introduction

Chatbots are applications that replicate and process written or spoken human conversation, allowing people to interact with digital devices as if they were conversing with an actual person. You may have interacted with a chatbot recently while browsing websites on the internet. You probably noticed that the chatbot engages you in a pleasant and non-confrontational manner while trying to help you achieve the objective of your visit to the site.

Chatbots have found great application in business, such as automating customer interactions and sales channels, and they are becoming a lot smarter. They are capable of handling almost human-level interactions.

With recent advancements in natural language processing (NLP), such as OpenAI's release of the GPT-3 model in 2020, it has become easier to solve language-based tasks such as generating and classifying text. It becomes even easier with Mantium, which provides the development support needed to build effectively with large language models. If you are wondering what OpenAI's GPT-3 model is, it is the third-generation language prediction model created by OpenAI, with a capacity of 175 billion machine learning parameters.

With Mantium, we will use this model to build a conversational QA WhatsApp bot in just a few lines of code. You don't need solid machine/deep learning knowledge to build a use case with Mantium.

Requirements

To follow this tutorial, you need the following:

  • A free Mantium account and an OpenAI API key
  • A Twilio account and a smartphone with WhatsApp installed
  • Python 3 and pip
  • Node.js and npm (for the Localtunnel tool used during testing)

Prompt Design with the Mantium App

The first and essential step is to design the prompt (the conversational text generation task) with the Mantium App. Prompt design is our way of guiding the GPT-3 model to give a favorable and contextual response: we provide a text input, and the model returns a generative response.

Think of this as playing the game of charades, where you give the players clues to figure out the secret word. Similarly, with GPT-3, we provide just enough information for the model to recognize patterns in the training prompt.

To do this, you don't need prior deep learning experience; almost anyone can take this no-code approach and build a simple text application using the Mantium interactive UI.

I will break this into more straightforward steps that you can follow.

Step 1 — Register for a Mantium Account

Visit the app.mantiumai.com website to create a free account.

Step 2 — Connect Your OpenAI API Key

To access the GPT-3 model through the OpenAI API, you must connect your OpenAI API key on the Mantium App.

Click on AI Providers, paste your OpenAI API key, and click Add API Key. Notice that Mantium supports other AI providers, such as Cohere and AI21, which you can connect with their respective API keys, but for this tutorial we build with OpenAI's model.

Connect your API Key here

If you don't have an API key for any of the AI providers, here is documentation on how you can get one.

Step 3 — Add New Prompt

The next step is to add a new prompt. Navigate to the AI Manager drop-down tab on the left sidebar, click on Prompts, and click on the Add new prompt button, as shown below.

Step 4 — Prompt Design

Here, we will provide the information to build our training prompt.

A. Basic Info

Let’s fill in the basic information to identify our prompt. Note that you can provide any name and description.

Name of Prompt — Chatbot

Description — This is a chatbot

B. Provider

In this tutorial, we are using the OpenAI API, so select OpenAI from the drop-down.

C. OpenAI Info

  1. Choose an Endpoint — Select the Completion endpoint. This endpoint takes the text you provide and generates a completion that tries to fit its context.
  2. Choose an Engine — The OpenAI engine determines the language model used to execute tasks. OpenAI supports various models with varying capabilities, described and accessed as engines. For this tutorial, select the "Davinci" engine.
  3. Prompt Line — Here we provide the model with examples, and it figures out the patterns needed to perform the given task. Below is the prompt I designed for our conversational QA generative task.
Human: Hello, who are you?
AI: I am an AI created by Blessing. How can I help you today?
###
Human: What is a language model?
AI: A language model is a statistical model that describes the probability of a word given the previous words.
###
Human: What is Cryptocurrency?
AI: A cryptocurrency (or "crypto") is a digital currency that can be used to buy goods and services, but uses an online ledger with strong cryptography to secure online transactions.
###
Human: What country won the 2010 world cup?
AI: Spain
###
Human: How many pounds are in a kilogram?
AI: There are 2.2 pounds in a kilogram.
###
Human: When did the first airplane fly?
AI: On December 17, 1903, Wilbur and Orville Wright made the first flights.
###
Human: How can I learn swimming?
AI: There are many ways you can learn to swim. The first thing you can do is talk to your parents and ask if they know any instructors. You can also look online to find a class that you can take in your area.
###
Human:

Copy and paste the above into the Prompt Line input field.

D. Basic Settings

Here you will provide the basic settings for the OpenAI model; see the image below.

  • Response Length — 150
  • Temperature — 0.8
  • Top P — 1
  • Frequency Penalty — 0.5
  • Presence penalty — 0.6
  • Stop Sequence — ###

Basic Settings Explanations.

  1. Response Length — The response length sets a limit on how much text the OpenAI API includes in its completion. Take note of your response length, as OpenAI charges by the token, where one token is roughly 4 characters (about three-quarters of an English word). The max tokens setting controls the length of your completion, and your prompt plus completion can't add up to more than 2,049 tokens. For this example, on the Mantium App, input 150 as your response length.
  2. Temperature & Top P — The temperature controls the randomness of the response; it is one of the most important settings, as it significantly impacts the output. Increase the temperature if you want more creative completions or want to let the model go off on a tangent. Decrease it if you want the model to follow instructions or remain factual. Note that if you alter the temperature, it's recommended not to alter Top P, which defaults to 1. Set the temperature to 0.8.
  3. Frequency Penalty — This prevents word repetition. If you notice the model's output repeating the same word and you want it to stop, increase the frequency_penalty.
  4. Presence Penalty — The presence penalty prevents topic repetition. If you notice the model's output is stuck on a particular topic and you want it to talk about something else, try increasing the presence_penalty.
  5. Stop Sequence — A stop sequence is a set of characters that signals the API to stop generating tokens. The sketch after this list shows how these settings map onto a raw OpenAI API call.
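
You won't call OpenAI directly in this tutorial, since Mantium handles that, but purely for intuition, here is a rough sketch of how the settings above would map onto a completion request with the classic openai Python package (the short training_prompt here stands in for the full prompt from the Prompt Line):

import openai

openai.api_key = 'sk-...'  # your OpenAI API key

# A shortened version of the few-shot prompt, ending with the user's question
training_prompt = 'Human: Hello, who are you?\nAI: I am an AI created by Blessing. How can I help you today?\n###\nHuman: When did the first airplane fly?\nAI:'

response = openai.Completion.create(
    engine='davinci',        # Engine
    prompt=training_prompt,
    max_tokens=150,          # Response Length
    temperature=0.8,         # Temperature
    top_p=1,                 # Top P
    frequency_penalty=0.5,   # Frequency Penalty
    presence_penalty=0.6,    # Presence Penalty
    stop=['###'],            # Stop Sequence
)
print(response.choices[0].text.strip())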

Advanced Settings

We can leave the Advanced settings at their default values for this tutorial.

For more information about these settings, you can read this tutorial by Mantium.

Step 5 — Testing Prompt Results

On the Mantium App, we will use the Input field to test our prompt. See the image below: I asked the question "How can I learn Machine Learning?" and got an interesting response. (See below)

Fill in the input field and hit the Test run button.

Test Result

Congratulations on completing this first and essential step. Note that you can do the above step in code using the Mantium API/client library, but I chose to explain the no-code approach to give you a first understanding and easier debugging.

With Mantium’s Deploy feature, you can quickly generate a one-page application you can share with your friends to test your chatbot, but we intend to create a WhatsApp application. So, in subsequent steps, we will grab this prompt using Mantium’s Client Library and integrate it with the Twilio WhatsApp API in Python.

Configure the Twilio WhatsApp Sandbox

For this tutorial, we will use the Twilio WhatsApp Sandbox, where we will develop and test our chatbot application.

Step 1 — Create an Account

Create an account with Twilio, and visit the Twilio Console, where you will click on Explore Products and select Messaging.

Step 2 — Setup Sandbox

Navigate to the sidebar; under Messaging, then Settings, select WhatsApp sandbox settings.

Connect your sandbox by sending a WhatsApp message with the given code to the number assigned to your account. Twilio will reply stating that your sandbox is set up, and you can complete the two-way setup by sending "Hello" from your WhatsApp.

Note that you can share the number assigned to your account so others can connect and test your application in the sandbox.

The last step is to configure the sandbox, which we will come back to.

Let’s Build the Chatbot!

Here we are going to write the Python code for the WhatsApp chatbot. The process is straightforward, and we will develop and test our application with a few lines of code. We take the user input (your incoming WhatsApp message) and pass it through Mantium's execute prompt method, which sends the request to the model and returns a response. Twilio then processes this response and returns it as a reply on your WhatsApp.

If you are still unclear on what we are trying to achieve, see the illustration below.

Let’s write some code!

Step 1 — Set up a Python Virtual Environment

Create and activate a Python virtual environment using the commands below (on Windows, activate with whatsapp-bot-venv\Scripts\activate instead).

$ mkdir whatsapp-bot
$ cd whatsapp-bot
$ python -m venv whatsapp-bot-venv
$ source whatsapp-bot-venv/bin/activate

After this, create a file named app.py in the root directory of the whatsapp-bot folder.

Step 2 — Set up Mantium's Client Library

Mantium provides a Python client library to improve the developer experience, and we will use it in this tutorial.

Installation

Using the command below, install the Python client library.

pip install mantiumapi

Authenticate

Here you will authenticate by providing your login credentials as environment variables.

With the command below, install python-dotenv

pip install python-dotenv

After this, create a .env file to store your credentials. Input your login credentials as shown below.

MANTIUM_USER='your-email@example.com'
MANTIUM_PASSWORD='your-password'

In app.py, you will use the code below to authenticate your Mantium account and later retrieve the prompt. Notice that we import Mantium's prompt module; more on that later.
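
A minimal sketch, assuming (per the client library's documentation) that the Mantium client reads MANTIUM_USER and MANTIUM_PASSWORD from the environment once they are loaded:

from dotenv import load_dotenv

# Load MANTIUM_USER and MANTIUM_PASSWORD from the .env file so the
# Mantium client can authenticate with them
load_dotenv()

from mantiumapi import prompt  # the prompt interface used in Step 4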

Here is the link to the Mantium GitHub repository for more details. If you are interested, you can contribute to the development of the library.

Step 3 — Set up a Webhook with the Flask Framework

Installation

Here we need to install the Flask framework to create the web application, and the Twilio Python Helper Library to work with the Twilio APIs.

pip install twilio flask

Now let's set up a webhook with the Flask framework. We will create an endpoint that listens for POST requests; when Twilio receives an incoming message, it will invoke this endpoint. The body of the function processes the input message and provides a response after Mantium executes the input.

Here is a starter template; in the next step, we will fill in the body of the bot function.
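
A minimal sketch of that template (the /bot route matches the URL we will configure in Twilio later):

from flask import Flask, request  # request is used in Step 4
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route('/bot', methods=['POST'])
def bot():
    # Step 4 fills this in: read the incoming message, execute the
    # Mantium prompt, and reply with the model's output
    resp = MessagingResponse()
    resp.message('Bot is up and running!')  # placeholder reply for now
    return str(resp)

if __name__ == '__main__':
    app.run(port=5000, debug=True)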

Step 4 — Send Input to Mantium, and Return a Response.

Let me explain the code above.

Input message — This gets the user's incoming WhatsApp message:

incoming_msg = str(request.values.get('Body', '').lower())

You need your prompt ID. Remember the prompt we created above; let's grab its ID. The easiest way to get it is from the URL of the prompt editing page. In the link below, the last string is your prompt ID.

https://app.mantiumai.com/ai-manager/prompt/{prompt_id}

Using the from_id method, retrieve the prompt from Mantium:

qa_prompt = prompt.Prompt.from_id('<your-prompt-id>')

Execute the Input and Get Result

Here we send the prompt to the OpenAI API endpoint and get the response from the GPT-3 model. The execution is asynchronous, so you need to call refresh() to fetch the result.

result = qa_prompt.execute(incoming_msg)
result.refresh()
prompt_result = str(result.output)

Step 5 — Twilio Messages and Response

Twilio expects a response from the webhook in TwiML, or Twilio Markup Language, which is an XML-based format. With the Twilio helper library, we can create this response without writing XML directly, as shown below; the webhook simply returns str(resp).

from twilio.twiml.messaging_response import MessagingResponse

resp = MessagingResponse()
msg = resp.message()
msg.body('this is the response text')

Putting everything together

This is the full code
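
It is assembled from the snippets above as a sketch; the Mantium authentication follows Step 2 and may vary with the client library version.

from dotenv import load_dotenv
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

# Load MANTIUM_USER and MANTIUM_PASSWORD so the Mantium client can authenticate
load_dotenv()

from mantiumapi import prompt

app = Flask(__name__)

@app.route('/bot', methods=['POST'])
def bot():
    # Incoming WhatsApp message forwarded by Twilio
    incoming_msg = str(request.values.get('Body', '').lower())

    # Retrieve the prompt we designed in the Mantium App
    qa_prompt = prompt.Prompt.from_id('<your-prompt-id>')

    # Execute the prompt with the user's message; execution is
    # asynchronous, so refresh to fetch the completed result
    result = qa_prompt.execute(incoming_msg)
    result.refresh()
    prompt_result = str(result.output)

    # Wrap the model's reply in TwiML for Twilio
    resp = MessagingResponse()
    msg = resp.message()
    msg.body(prompt_result)
    return str(resp)

if __name__ == '__main__':
    app.run(port=5000, debug=True)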

Testing the Chatbot.

Step 1 — Run the app.py file

Run the command below to start the chatbot; it will run on port 5000 on your local machine. See the image below.

python app.py
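
Optionally, you can sanity-check the webhook from another terminal by simulating Twilio's form-encoded POST; the reply comes back as TwiML XML:

curl -X POST http://127.0.0.1:5000/bot -d "Body=What is a language model?"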

Step 2 — Local Tunnel

Localtunnel is a simple tool that provides a publicly accessible URL, rerouting every request to the locally running service. It exposes our localhost to the world for easy testing and sharing, which we need in order to connect the URL to Twilio (remember the configuration step we skipped in the Twilio sandbox).

Open another terminal beside the initial one and install the Localtunnel tool with the command below (ensure that npm is installed).

npm install -g localtunnel

After this, run the command below; the -o flag makes a browser window pop up automatically, and I can see that the app is working.

Take note that my app is running on port 5000

lt --port 5000 --subdomain shan --local-host "127.0.0.1" -o --print-requests

Step 3 — Configure Twilio Sandbox

On the Twilio Console, paste the URL below into the WHEN A MESSAGE COMES IN field, since our chatbot is exposed under the /bot URL. After this, click the Save button.

URL — https://shan.loca.lt/bot

Step 4 — Send and Receive Messages

Now that the configuration is all done and the server is running, it’s time to test the chatbot.

Here, I decided to ask some random questions, since I didn’t design the prompt for a specific use case. See images below.

You can see that we got responses that weren’t in the Prompt.

Logging with Mantium

You can use Mantium's Logs feature to check the output and status of your prompt. This comes in handy when you are not getting any reply to your WhatsApp messages.

Navigate to the Monitoring tab on the left side and click on Logs.

Conclusion

While I didn't design the prompt for any particular use case, I aimed to show the possibilities of a large language model (LLM) and how to use the Mantium API, integrated with Twilio's API, to build something interesting. There are many ways you can spin this tutorial into custom use cases; examples include, but are not limited to, intelligent customer service bots and virtual AI assistants.

I hope this tutorial was helpful and that you learned what you can do with Mantium and large language models. If you have something in mind to build, you can reach out to me via LinkedIn.

Any questions? Join the Mantium Discord channel.
