Build a WhatsApp AI-Powered Chatbot with GPT-3, using Mantium & Twilio API

Summary

In this tutorial, we will build a WhatsApp AI-powered bot capable of conversational question and answering (QA). This type of bot is a generative chatbot, in contrast to the more common retrieval-based chatbot.

To create the bot, we will use Mantium’s Python library, the Twilio WhatsApp API, and the Flask framework. The Mantium App/API enables us to get a generative response, as it supports and provides access to OpenAI’s GPT-3 model.

Any questions? Join Mantium’s Discord channel.

Below is what we are going to build at the end of this tutorial.

[Image: a demo conversation with the finished WhatsApp chatbot]

Chatbots are applications that simulate written or spoken human conversation, allowing people to interact with digital devices as if they were talking to a real person. You may have interacted with a chatbot recently while browsing a website; you probably noticed that it engages you in a pleasant, non-confrontational way while trying to help you accomplish the goal of your visit.

Chatbots have found wide application in business, for example in automating customer interactions and sales channels, and they are becoming much smarter, capable of handling almost human-level interactions.

With recent advances in natural language processing (NLP), such as OpenAI’s release of the GPT-3 model in 2020, it has become easier to solve language-based tasks such as generating and classifying text. It becomes even easier with Mantium, which provides the development support needed to build effectively with large language models. If you are wondering what OpenAI’s GPT-3 model is: it is the third-generation text prediction model created by OpenAI, with a capacity of 175 billion machine learning parameters.

Through Mantium, we will use GPT-3 to build a conversational QA WhatsApp bot in just a few lines of code. You don’t need solid machine learning or deep learning knowledge to build a use case with Mantium.

To follow this tutorial, you need the following:

  • A free Mantium account and an OpenAI API key
  • A Twilio account and a WhatsApp-enabled phone for the sandbox
  • Python 3 (with pip) installed
  • Node.js and npm installed (for Localtunnel)

Design the Prompt with the Mantium App

The first and essential step is to design the prompt (a conversational text generation task) with the Mantium App. Prompt design is how we guide the GPT-3 model toward a favorable, contextual response: we provide a text input and get a generative response from the model.

Think of this like playing a game of charades, where you give the players just enough information to figure out the secret word. Similarly, with GPT-3, we provide just enough information for the model to recognize the patterns in the training prompt.

You don’t need prior deep learning experience to do this; almost anyone can take a no-code approach and build a simple text application using Mantium’s interactive UI.

I will break this down into straightforward steps that you can follow.

Step 1 — Register for a Mantium Account

Visit the app.mantiumai.com website to create a free account.

Step 2 — Connect your OpenAI API key.

To access the GPT-3 model through the OpenAI API, you must connect your OpenAI API key on the Mantium App.

Click on AI Providers, paste your OpenAI API key, and click Add API Key. Notice that Mantium supports other AI providers such as Cohere and AI21, which you can connect with their respective API keys, but for this tutorial we build with OpenAI's model.

[Screenshot: adding your OpenAI API key on the AI Providers page]

If you don’t have an API key for any of the AI providers, here is documentation on how you can get one.

Step 3 — Add New Prompt

The next step is to add a new prompt. Navigate to the AI Manager drop-down tab on the left sidebar, click on Prompts, and click on the Add new prompt button, as shown below.

[Screenshot: the Add new prompt button under AI Manager → Prompts]

Here, we will provide the information to build our training prompt.

A. Basic Info

Let’s fill in the basic information to identify our prompt. Note that you can provide any name and description.

Name of Prompt — Chatbot

Description — This is a chatbot

B. Provider

In this tutorial we are using the OpenAI API, so select OpenAI from the drop-down.

C. OpenAI Info

  1. Choose an Endpoint — Select the Completion endpoint. This endpoint provides the model with some text and generates a text completion that tries to fit the context of the provided text.
  2. Prompt Line — Here we provide the model with examples, and it will figure out the patterns needed to perform the given task. Below (after this list) is the prompt I designed for our conversational QA generative task.
  3. Choose an Engine — The OpenAI engine determines the language model used to execute tasks. OpenAI supports various models with varying capabilities. These models are described and accessed using engines. For this tutorial, select the “Davinci” Engine.
Human: Hello, who are you?
AI: I am an AI created by Blessing. How can I help you today?
###
Human: What is a language model?
AI: A language model is a statistical model that describes the probability of a word given the previous words.
###
Human: What is Cryptocurrency?
AI: A cryptocurrency (or “crypto”) is a digital currency that can be used to buy goods and services, but uses an online ledger with strong cryptography to secure online transactions.
###
Human: What country won the 2010 world cup?
AI: Spain
###
Human: How many pounds are in a kilogram?
AI: There are 2.2 pounds in a kilogram.
###
Human: When did the first airplane fly?
AI: On December 17, 1903, Wilbur and Orville Wright made the first flights.
###
Human: How can I learn swimming?
AI: There are many ways you can learn to swim. The first thing you can do is talk to your parents and ask if they know any instructors. You can also look online to find a class that you can take in your area.
###
Human:

Copy and paste the above into the Prompt Line input field.

D. Basic Settings

Here you will provide the basic settings for the OpenAI model, see the image below.

[Screenshot: the Basic Settings panel for the OpenAI prompt]

  • Response Length — 150
  • Temperature — 0.8
  • Top P — 1
  • Frequency Penalty — 0.5
  • Presence penalty — 0.6
  • Stop Sequence — ###

Basic Settings Explanations.

  1. Response Length — The response length sets a limit on how much text the OpenAI API includes in its completion. Keep an eye on it, because OpenAI charges by the token, where one token is roughly 4 characters (about three-quarters of an English word). The max tokens setting controls the length of your completion, and your prompt plus completion can't add up to more than 2,049 tokens. For this example, on the Mantium App, input 150 as your response length.
  2. Temperature & Top P — The temperature controls the randomness of the response; it is one of the most important settings, as it significantly impacts the output. Increase the temperature if you want more creative completions or want to let the model go off on a tangent; decrease it if you want the model to follow instructions or remain factual. Note that if you alter the temperature, it's recommended not to alter top_p, which defaults to 1. For this example, set the temperature to 0.8.
  3. Frequency Penalty — This prevents word repetition. If the model keeps repeating the same word and you want it to stop, increase the frequency_penalty.
  4. Presence Penalty — The presence penalty prevents topic repetition. If the model is stuck on a particular topic and you want it to talk about something else, try increasing the presence_penalty.
  5. Stop Sequence — A stop sequence is a set of characters that signal the API to stop generating tokens.
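Mantium makes the underlying API call for you, so you never have to write it yourself, but for context the sketch below shows roughly how these settings would map onto a direct call to OpenAI's completion endpoint using the legacy (pre-1.0) openai Python package. It is for illustration only; the placeholder API key and training_prompt string are assumptions, not part of this tutorial's code.

import openai  # legacy pre-1.0 interface of the openai package

openai.api_key = 'YOUR_OPENAI_API_KEY'  # placeholder
training_prompt = '...'  # the Human/AI example conversation from the Prompt Line above

response = openai.Completion.create(
    engine='davinci',            # Choose an Engine
    prompt=training_prompt + '\nHuman: How can I learn swimming?\nAI:',
    max_tokens=150,              # Response Length
    temperature=0.8,             # Temperature
    top_p=1,                     # Top P
    frequency_penalty=0.5,       # Frequency Penalty
    presence_penalty=0.6,        # Presence Penalty
    stop=['###'],                # Stop Sequence
)
print(response.choices[0].text)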

Advanced Settings

We can leave the Advanced settings at their default values for this tutorial.

[Screenshot: the Advanced Settings panel, left at its default values]

For more information about these settings, you can read this tutorial by Mantium.

On the Mantium App, we will use the Input field to test our prompt. See the image below: I asked the question “How can I learn Machine Learning?” and got an interesting response.

Fill in the input field and hit the Test run button.

[Screenshot: testing the prompt with the Test run button]

Congratulations on completing this first and essential step. Note that you can do all of the above in code using the Mantium API/client library, but I chose to explain the no-code approach first to build an initial understanding and make debugging easier.

With Mantium’s Deploy feature, you can quickly generate a one-page application you can share with your friends to test your chatbot, but we intend to create a WhatsApp application. So, in subsequent steps, we will grab this prompt using Mantium’s Client Library and integrate it with the Twilio WhatsApp API in Python.

Set Up the Twilio WhatsApp Sandbox

For this tutorial, we will use the Twilio WhatsApp Sandbox, where we will develop and test our chatbot application.

Step 1 — Create an Account

Create an account with Twilio, and visit the Twilio Console, where you will click on Explore Products and select Messaging.

Step 2 — Setup Sandbox

In the sidebar, under Messaging, open Settings and select WhatsApp sandbox settings.

Connect your sandbox by sending a WhatsApp message with the given code to the number assigned to your account. Twilio will reply to confirm that your sandbox is set up, and you can complete the two-way setup by sending "Hello" from your WhatsApp.

Note that you can share the number assigned to your account so that others can connect and test your application in the sandbox.

The last step is to configure the sandbox, which we will come back to.

Build the Chatbot with Python

Here we are going to write Python code to build the WhatsApp chatbot. The process is straightforward, and we will develop and test the application with a few lines of code. We will take the user input (your incoming WhatsApp message) and pass it to Mantium's execute prompt method, which sends the request to the model and returns a response. Twilio processes this response and returns it as a reply on your WhatsApp.

If you are still unclear on what we are trying to achieve, see the illustration below.

[Illustration: the message flow from WhatsApp through Twilio and Flask to Mantium/OpenAI and back]

Let’s write some code!

Step 1 — Setup a Python Virtual Environment.

Create a Python virtual environment by using the command below.

$ mkdir whatsapp-bot
$ cd whatsapp-bot
$ python -m venv whatsapp-bot-venv
$ source whatsapp-bot-venv/bin/activate

After this, create a file named app.py in the root directory of the whatsapp-bot folder.

Step 2 — Setup Mantium’s Client Library

Mantium has provided a Python client library to improve the developer’s experience, and we will use that in this tutorial.

Installation

Using the command below, install the Python client library.

pip install mantiumapi

Authenticate

Here you will authenticate by providing your login credentials in environment variables.

With the command below, install python-dotenv

pip install python-dotenv

After this, create a .env file to store your credentials. Input your login credentials as shown below.

MANTIUM_USER='youremail.com'
MANTIUM_PASSWORD='your-password'

In app.py, you will use the code below to authenticate your Mantium account and create a prompt object. Notice that I imported the Mantium prompt module; more on that later.
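The original embedded snippet isn't reproduced here, so below is a minimal sketch of what app.py contains at this point. It assumes the mantiumapi client picks up MANTIUM_USER and MANTIUM_PASSWORD from the environment once python-dotenv has loaded the .env file, as described above; check the Mantium GitHub repository if your version of the library expects an explicit authentication call.

from dotenv import load_dotenv
from mantiumapi import prompt  # used later to retrieve and execute our prompt

load_dotenv()  # makes MANTIUM_USER and MANTIUM_PASSWORD available to mantiumapi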

Here is the link to the Mantium GitHub repository for more details. If you are interested, you can contribute to the development of the library.

Step 3 — Setup Webhook with Flask Framework

Installation

Here we need to install the Flask framework to create the web application, and Twilio Python Helper Library to work with Twilio APIs.

pip install twilio flask

Now let's set up a webhook with the Flask framework. We will create an endpoint that listens for POST requests; when Twilio receives an incoming message, it will invoke this endpoint. The body of the function processes the input message and provides a response after Mantium executes the input.

Here is a starter template; we will fill in the body of the bot function in the next steps.
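Since the original embedded template isn't reproduced here, the sketch below shows the skeleton we will build on; the /bot route name matches the URL we configure in the Twilio sandbox later.

from flask import Flask, request  # request is used in Step 4 to read the incoming message
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route('/bot', methods=['POST'])
def bot():
    # Steps 4 and 5 below fill in this body
    resp = MessagingResponse()
    resp.message('...')
    return str(resp)

if __name__ == '__main__':
    app.run(port=5000)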

Step 4 — Send Input to Mantium, and Return a Response.

Let me explain the code that goes inside the bot function, piece by piece.

Input message — This gets the user's incoming WhatsApp message:

incoming_msg = str(request.values.get('Body', '').lower())

You need your prompt ID. Remember the prompt that we created above? Let's grab its ID. The easiest way to get it is from the URL of the prompt editing page; in the link below, the last segment is your prompt ID.

https://app.mantiumai.com/ai-manager/prompt/{prompt_id}

Using the from_id method, retrieve the prompt from Mantium.

qa_prompt = prompt.Prompt.from_id('<your-prompt-id>')

Execute the Input and Get Result

Here we send the prompt, along with the incoming message, to the OpenAI API endpoint and get the response from the GPT-3 model. Execution happens asynchronously, so you need to call refresh() to fetch the result.

result = qa_prompt.execute(incoming_msg)
result.refresh()
prompt_result = str(result.output)

Step 5 - Twilio Messages and Response

Twilio expects a response from the webhook in TwiML, or Twilio Markup Language, which is an XML-based format. With the Twilio helper library, we can easily create this response without having to write XML directly. Below is how we do that.

from twilio.twiml.messaging_response import MessagingResponse

resp = MessagingResponse()
msg = resp.message()
msg.body('this is the response text')
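In our bot function, we will set the message body to the Mantium output (prompt_result) and return str(resp), so that Flask sends the TwiML back to Twilio. The full listing below puts all of this together.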

This is the full code
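Because the original embedded listing isn't reproduced here, below is a sketch of the complete app.py assembled from the snippets above. As in Step 2, it assumes the mantiumapi client authenticates from the MANTIUM_USER and MANTIUM_PASSWORD environment variables.

from dotenv import load_dotenv
from flask import Flask, request
from mantiumapi import prompt
from twilio.twiml.messaging_response import MessagingResponse

load_dotenv()  # loads MANTIUM_USER and MANTIUM_PASSWORD from .env (see Step 2)

app = Flask(__name__)

# The prompt designed in the Mantium App (replace with your own prompt ID)
qa_prompt = prompt.Prompt.from_id('<your-prompt-id>')

@app.route('/bot', methods=['POST'])
def bot():
    # Incoming WhatsApp message text
    incoming_msg = str(request.values.get('Body', '').lower())

    # Send the input to Mantium/OpenAI and refresh to fetch the async result
    result = qa_prompt.execute(incoming_msg)
    result.refresh()
    prompt_result = str(result.output)

    # Wrap the model output in TwiML so Twilio can deliver it as a reply
    resp = MessagingResponse()
    msg = resp.message()
    msg.body(prompt_result)
    return str(resp)

if __name__ == '__main__':
    app.run(port=5000)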

Run and Test the Chatbot

Step 1 — Run the app.py file

Run the command below to start the chatbot, and it will run at port 5000 on your local machine. See the image below.

[Screenshot: the Flask app running on port 5000]

python app.py

Step 2 — Local Tunnel

Localtunnel is a simple tool that provides a publicly accessible URL and reroutes every request to the locally running service. It exposes our localhost to the world for easy testing and sharing, which we need in order to give Twilio a URL it can reach (remember the sandbox configuration step that we skipped).

Open another terminal beside the initial one and install the Localtunnel tool with the command below; ensure that npm is installed first.

npm install -g localtunnel

After this, run the command below and a browser window automatically pops up, where I can see that the app is working.

Take note that my app is running on port 5000

lt --port 5000 --subdomain shan --local-host "127.0.0.1" -o --print-requests

[Screenshot: Localtunnel exposing the local server at a public URL]

Step 3 — Configure Twilio Sandbox

On the Twilio Console, paste the URL below into the WHEN A MESSAGE COMES IN field, since our chatbot is exposed under the /bot URL. After this, click the Save button.

URL — https://shan.loca.lt/bot

[Screenshot: the Twilio sandbox configuration with the webhook URL]

Step 4 — Send and Receive Messages

Now that the configuration is all done and the server is running, it’s time to test the chatbot.

Here, I decided to ask some random questions, since I didn’t design the prompt for a specific use case. See images below.

You can see that we got responses that weren’t in the Prompt.

[Screenshot: WhatsApp conversation showing the chatbot's replies]

You can use Mantium's Logs feature to check the output and status of your prompt. This comes in handy when you are not getting a reply to your WhatsApp messages.

Navigate to the Monitoring tab on the left sidebar and click on Logs.

[Screenshot: the Logs page under the Monitoring tab]

While I didn't design the prompt for any particular use case, my aim was to show the possibilities of a large language model (LLM) and how to use the Mantium API together with Twilio's API to build something interesting. There are many ways you can spin this tutorial into custom use cases, including but not limited to intelligent customer service bots and virtual AI assistants.

I hope this tutorial was helpful and that you learned what you can do with Mantium and large language models. If you have something in mind to build, you can reach out to me via LinkedIn.

Any questions? Join Mantium's Discord channel.
