Ultimate Guide to UI/UX in Full-Stack Chatbot Development (Part 3)
This article concludes the series and continues from Ultimate Guide to UI/UX in Full-Stack Chatbot Development (Part 2).
Step 6: Add More Features to the Chatbot
Navigating Feature Integration and UX Challenges
I’d like to share some of my personal experiences regarding feature integration and development. Adding a new feature or functionality can change the UX significantly. Integrating new features is challenging because it often requires changes to the existing workflow and can introduce complexities in user interactions.
Let’s consider a chatbot application integrated with a RAG system. This application enables users to upload multiple PDFs beforehand and select an LLM from various available options to chat with. Essentially, it’s a natural language generative AI that allows you to interact with your PDFs through chat.
When introducing a new feature for users to chat and get context from one specific PDF instead of retrieving vector indexes from all PDFs, it’s crucial to ensure that users select the LLM before selecting the PDF. This order of operations is essential because the LLM needs to be defined first to generate contextual embeddings for document indexing and accurately interpret and respond to the content of the selected PDF.
After the user selects the LLM, the query mode initialises for configuration. During this window, users should not be allowed to select another PDF or deselect the current one to return to the default. Allowing such actions mid-process could disrupt the system’s workflow, potentially leading to crashes or incorrect data handling. To prevent this, buttons for selecting another query mode should be disabled while processing is underway.
If the application has more buttons, users might click on other buttons unexpectedly, complicating the interaction flow. For example, users might click to select another LLM to chat with even though the query mode is not fully set. Since setting the query mode requires a few seconds to load or retrieve vector indexes from the database, abruptly shifting to a new LLM could terminate the current query mode. This will cause errors or system crashes.
Consider other scenarios: What if users try to upload PDFs while the query mode is still being set? What if they delete a specific PDF, such as ‘animals.pdf’, immediately after selecting it for query mode configuration? What if they decide not to set the query mode to a specific PDF and revert to chatting with all PDFs? These scenarios highlight the need for robust error handling and careful workflow management to ensure a smooth and stable UX.
The effort and complexity of ensuring a good UX after adding a new feature are far more demanding than simply integrating the feature itself. Proper planning and error handling are crucial to maintaining a seamless user experience.
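To make this concrete, here is a minimal sketch of the guard idea, assuming a simple state flag tracks whether the query mode is mid-configuration. The names (QueryModeState, ChatSession, _load_vector_index) are hypothetical placeholders, not from any specific library:

from enum import Enum, auto

class QueryModeState(Enum):
    IDLE = auto()         # chatting with all PDFs (default)
    CONFIGURING = auto()  # vector indexes being loaded or retrieved
    READY = auto()        # a specific PDF is set for querying

class ChatSession:
    def __init__(self):
        self.state = QueryModeState.IDLE
        self.selected_pdf = None

    def select_pdf(self, pdf_name: str) -> None:
        # Reject conflicting actions while configuration is in progress,
        # mirroring the disabled buttons on the frontend
        if self.state is QueryModeState.CONFIGURING:
            raise RuntimeError("Query mode is still being configured; please wait.")
        self.state = QueryModeState.CONFIGURING
        try:
            self._load_vector_index(pdf_name)  # hypothetical helper
            self.selected_pdf = pdf_name
            self.state = QueryModeState.READY
        except Exception:
            self.state = QueryModeState.IDLE   # roll back to a safe default
            raise

    def _load_vector_index(self, pdf_name: str) -> None:
        ...  # retrieve or build the vector index for this PDF

On the frontend, the same idea translates to disabling the relevant buttons whenever the session is in the CONFIGURING state, and re-enabling them once it reaches READY or rolls back to IDLE.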
Thought Process on Handling Complex Workflows
As a math major, I find that managing complex workflows in AI and software development is very similar to constructing a rigorous mathematical proof. If you have taken an introductory discrete math course, you are likely familiar with proof by cases. Visualising how to handle different scenarios with that mindset is particularly helpful. When developing a sound and valid proof, each step must logically follow from the previous one. Similarly, integrating complex workflows requires a systematic approach where each component logically follows from the previous one, ensuring that the overall system functions coherently.
Steps to Effectively Manage Complexities
Besides thinking from the proof-by-cases perspective, I also follow these steps in order:
- Identify Related Code Segments
- When integrating a new feature, locate the code segments directly related to it. Consider the new feature as a node in the middle of a workflow, and identify the preceding and subsequent code segments that will interact with it. You can think of this as a linked list.
- Minimise Modifications to Existing Code
- Try to minimise modifications to the existing codebase. Only make changes if you identify inefficiencies or if the current implementation is incompatible with the new feature. This approach helps maintain system stability and reduces the risk of introducing new bugs.
- Modularise the New Feature
- Develop the new feature as a separate, modular component whenever possible. This makes it easier to test and integrate without affecting the existing workflow (see the sketch after this list).
- Implement Error Handling
- Incorporate comprehensive mechanisms to manage potential failures. This ensures that the system can handle unexpected scenarios and provide useful feedback.
- Ensure Smooth Transitions and State Management
- Maintain consistent state management throughout the workflow. Ensure that transitions between different states are smooth. This prevents disruptions in user interactions. Disable or enable controls appropriately to avoid conflicts during critical processes.
- Conduct Testing
- Test to verify the functionality of the new feature and its interaction with existing components. Simulate various user interactions to uncover potential issues and ensure a smooth UX.
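As an illustration of the modularity and error-handling points above, here is a minimal sketch of the single-PDF query feature wrapped in its own component. The names (SinglePdfQueryFeature, FeatureError, build_index_for) are hypothetical placeholders:

class FeatureError(Exception):
    """Raised when the feature fails in a way the UI should report."""

class SinglePdfQueryFeature:
    """Encapsulates 'chat with one specific PDF' as its own module."""

    def __init__(self, index_store):
        # The existing index store is injected, not modified
        self.index_store = index_store

    def activate(self, pdf_name: str):
        # Error handling at the feature boundary keeps failures
        # from leaking into the surrounding workflow
        try:
            return self.index_store.build_index_for(pdf_name)  # hypothetical call
        except FileNotFoundError:
            raise FeatureError(f"'{pdf_name}' was deleted or never uploaded.")
        except Exception as e:
            raise FeatureError(f"Could not prepare '{pdf_name}': {e}")

Because the feature sits behind a single entry point, you can test activate in isolation, and the rest of the workflow only needs to handle FeatureError.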
Step 7: Integrate Frontend and Backend
Several widely-used tech stacks for integrating the frontend (client-side) and backend (server-side) are Django, Flask, Node.js, and Spring Boot.
Frontend-Backend Communication with RESTful API and Fetch API
Let’s consider Flask to create RESTful API endpoints and JavaScript with the Fetch API to handle communication between the frontend and backend. RESTful API endpoints follow the principles of REST architecture and map naturally onto CRUD operations. In Flask, this involves using HTTP methods such as POST for creating, GET for retrieving, PUT/PATCH for updating, and DELETE for removing resources. RESTful APIs are ideal for information transfer and dynamic operations in web apps. The Fetch API in JavaScript sends HTTP requests to these endpoints from the frontend, fetching resources asynchronously and handling responses. This approach allows dynamic interaction between the UI and backend services, supporting tasks like data retrieval, submission, and updates without page reloads, which enhances the responsiveness of web apps.
You can think of it like a waiter serving food in a restaurant without making you leave your table. In this analogy, you (the client) are at the table (UI). The Fetch API is the waiter taking your order (HTTP requests) to the kitchen (server/backend with RESTful endpoints) and bringing back the food (data) to you.
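To make the CRUD mapping concrete, here is a small illustrative sketch of routes for a hypothetical /pdfs resource in Flask (the resource and handlers are invented for illustration and are separate from the chatbot code below):

from flask import Flask, request, jsonify

app = Flask(__name__)
pdfs = {}  # in-memory stand-in for a real document store

@app.route('/pdfs', methods=['POST'])  # Create: register an uploaded PDF
def create_pdf():
    data = request.json
    pdfs[data['name']] = data
    return jsonify(data), 201

@app.route('/pdfs/<name>', methods=['GET'])  # Read: retrieve one PDF record
def get_pdf(name):
    return jsonify(pdfs.get(name, {}))

@app.route('/pdfs/<name>', methods=['DELETE'])  # Delete: remove a PDF record
def delete_pdf(name):
    pdfs.pop(name, None)
    return jsonify({"status": "deleted"})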
Full Stack Integration Code with Flask
This Flask code integrates with the JavaScript code from Ultimate Guide to UI/UX in Full-Stack Chatbot Development (Part 2). Note that this code alone won’t function without the chatbot logic for query handling and response generation; it’s just sample code to help you get started.
Before getting started, make sure to install Flask in your terminal:
pip install Flask
Then, create a Flask application in app.py:
from flask import Flask, request, jsonify, render_template
from typing import Dict, Any
import logging

# Configure logging so the info/error messages below are actually emitted
logging.basicConfig(level=logging.INFO)

app = Flask(__name__)


@app.route('/')
def index() -> str:
    """
    Render the index HTML page.

    Returns:
        str: Rendered index HTML page.
    """
    return render_template('index.html')


@app.route('/query', methods=['POST'])
def query() -> Dict[str, Any]:
    """
    Process a user query and return the response.

    Expects a JSON payload with a 'user_query' key.

    Returns:
        Dict[str, Any]: JSON response containing status, message, and
        response text, with an appropriate HTTP status code.
    """
    try:
        user_query: str = request.json.get('user_query')
        if not user_query:
            return jsonify({"status": "error", "message": "No user query provided."}), 400

        logging.info(f"Received user query: {user_query}")

        # Assuming prompt_template and query_engine are defined elsewhere
        prompt: str = prompt_template.format(query_str=user_query)
        response = query_engine.query(prompt)

        # Log the raw response object for debugging
        logging.info(f"Response object: {response}")

        # Extract the response text, falling back to str() if the attribute is absent
        response_text: str = getattr(response, 'response', None) or str(response)
        logging.info(f"Query response: {response_text}")

        return jsonify({
            "status": "success",
            "message": "Query processed",
            "response": response_text
        })
    except Exception as e:
        logging.error(f"Error during query: {e}")
        return jsonify({"status": "error", "message": str(e)}), 500


if __name__ == '__main__':
    app.run(debug=True)
This code creates a responsive and maintainable backend. It sets up routes for rendering the main page and processing user queries, ensuring a clear separation of concerns. Type annotations and logging are included to improve code clarity and facilitate debugging. The app employs error handling to return useful error information with appropriate HTTP status codes. JSON is used for data exchange between the frontend and backend; once parsed, JSON maps onto dictionary-like objects, which offer average O(1) key-value access.
Remember to define the prompt_template and query_engine. This version of app.py is designed for an LLM-integrated chatbot, where you need a prompt template to guide the LLM and a query engine to handle information retrieval.
Assuming the chatbot logic is defined and set up successfully, start the local server by running app.py manually or executing the following command:
python app.py
Once the server is running, access the chatbot by navigating to the local development server (by default, http://127.0.0.1:5000) in your web browser.
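To verify the endpoint without touching the frontend, you can also send a request directly from Python. This is a minimal sketch using the requests library (install it with pip install requests); it mirrors what the frontend’s Fetch call does:

import requests

# Send a test query to the local Flask server
resp = requests.post(
    "http://127.0.0.1:5000/query",
    json={"user_query": "Summarise animals.pdf in one sentence."},
    timeout=60,  # LLM responses can take a while
)
print(resp.status_code)
print(resp.json())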
Step 8: Test the Chatbot
It is crucial to test your chatbot thoroughly. Testing is not a fixed step at number 8; it should be an ongoing process carried out continuously as you code, especially when adding new features. Testing ensures the system works correctly and efficiently and provides a smooth UX. We can start with functional testing, which ensures that all features work as intended, from basic message sending and receiving to more complex functionalities like resetting the history and toggling light/dark mode. Usability testing is also important to make sure the interface is intuitive, accessible, and provides immediate feedback to user interactions.
Latency is extremely important for a good UX, so performance testing should be a priority. For chatbot applications, a non-AI (rule-based) chatbot should respond in under a second. For AI chatbots, response generation should take less than a minute, since LLMs need time to generate tokens.
Ensure all responses generated are within the expected range. You can consider writing test cases for non-AI chatbots’ outputs, although this is often tedious and unnecessary since the output is predictable based on the fixed rule definitions. However, it is different for AI chatbots. The output of an AI chatbot is stochastic and harder to predict. If the LLM provider company decides to update its model, the generated output might change as well. Ensuring the output of an AI chatbot is completely reliable is currently an unsolved challenge. However, there are techniques to reduce hallucinations, such as using proper configuration, a good RAG setup, a high-quality database for the RAG, robust prompt engineering, a good LLM, and more. But nothing is guaranteed at the end of the day.
Integration testing is also essential to ensure that all components of the chatbot work seamlessly together, and regression testing confirms that new updates do not break existing functionality. For more details on these testing methods, you can search for additional information online.
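As a starting point, here is a minimal functional test sketch using pytest and Flask’s built-in test client, assuming the app.py above (run it with pytest test_app.py):

# test_app.py
import pytest
from app import app

@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client

def test_query_rejects_empty_payload(client):
    # The endpoint should fail fast with a 400, not crash, when no query is sent
    resp = client.post("/query", json={})
    assert resp.status_code == 400
    assert resp.get_json()["status"] == "error"

Because this test never calls the LLM, it stays fast and deterministic, which also makes it a good candidate for a regression suite.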
Step 9: Gather Feedback
Do not underestimate this step! It must be done carefully. You shouldn’t put too much effort into perfecting every feature or design before this stage. At the first MVP stage, stakeholders generally don’t care how much time you spent developing the chatbot; they mainly care about its core functionality and user experience. They usually won’t notice, at a glance, the fancy new frontend framework you used or the impressive design you implemented. Trust me, no one really cares, so don’t waste your time perfecting everything before demonstrating your first MVP. As mentioned in Step 2, you should let users navigate the product without instructions and observe how they manage. Do they stumble when trying to use a feature? Do they feel frustrated? Do they ask questions about how to use the product? Do they take too long to figure things out? All these observations will tell you whether your chatbot is intuitive and user-friendly.
You should gather feedback from a diverse group of people to draw an informed conclusion. Don’t generalise about the intuitiveness of your product based on feedback from only a few individuals. For example, older users might initially struggle with the chatbot interface compared to younger, more tech-savvy users. Also, try to source the majority of your feedback from people who match your target audience’s demographics.
Never rush this step. In product development at startups and companies, one common failure is rushing through the feedback session, causing engineers to misunderstand or misprioritise what’s important. After user testing and observation, you can follow up with a few relevant questions to gather more feedback and insights.
In my opinion, the success of a product depends heavily on its UI/UX. Even if a company develops one of the best chatbots in terms of response quality, weak UI/UX makes it hard to attract users and reach the product’s potential user base. Humans are visual creatures, and many of us simply won’t bother to navigate a product that isn’t intuitive. The best UI/UX strikes a balance between human psychology and technology.
Step 10: Optimise and Refine the Chatbot
Based on the feedback and insights received, you should improve your chatbot’s UI/UX and logic. Focus on reducing all the pain points users have faced previously. Make the usage as simple and intuitive as possible. To reiterate, don’t make users think when using it. The process from opening the chatbot application to finishing the conversation should be as easy as 1+1=2.
Optimisation-wise, it is good to encapsulate code using classes or functions, essentially following OOP practices or modular design. Follow basic practices such as not repeating yourself, writing concise comments, writing readable and clean code, using descriptive variable names and docstrings, and considering your code’s time and space complexity. These habits should be in place from the beginning, but you should continuously recheck that the code remains easy to follow and maintain throughout the software engineering cycle, even after the product is deployed. There are also more advanced techniques, such as algorithm optimisation, which includes restructuring loops and recursive functions to reduce time complexity. Choosing the right data structures matters too: using dictionaries instead of lists for lookups can improve performance, since the average lookup time is O(1) for dictionaries versus O(n) for lists. Beyond that, caching (memoization) to reuse results, parallel processing, and other techniques can further enhance efficiency.
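Here is a small, generic sketch of the lookup and caching points above (not chatbot-specific):

from functools import lru_cache

# Dictionary lookup: O(1) on average, versus O(n) scanning a list
users_list = [("alice", 1), ("bob", 2)]
users_dict = dict(users_list)
uid = users_dict["bob"]                              # direct hash lookup
uid = next(v for k, v in users_list if k == "bob")   # linear scan

@lru_cache(maxsize=128)
def embed(text: str) -> tuple:
    # Stand-in for expensive work (e.g., an embedding or API call);
    # results are cached by argument, so repeat calls reuse the stored result
    return tuple(ord(c) % 7 for c in text)

embed("hello")  # computed
embed("hello")  # served from the cache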
Security-wise, it pays to be aware of possible cyberattacks such as prompt injection, malicious prompts, cross-site scripting (XSS), and SQL injection. Many security measures come down to good coding practices: input validation, error handling, cryptographic hashing using libraries like OpenSSL, and proper authentication for login. Especially if you or your company plan to monetise the product, you should ensure it is robust against attacks; you don’t want to lose money to a few unethical black-hat parties. For large-scale projects in industry, competent security teams are usually in place to advise on these concerns.
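As a simple illustration, here is a hedged sketch of two of these measures in Python: basic input validation for incoming queries, and salted password hashing with the standard library’s hashlib (used here instead of OpenSSL for brevity; the length limit and iteration count are illustrative):

import hashlib
import os

MAX_QUERY_LENGTH = 2000  # illustrative limit

def validate_user_query(user_query: str) -> str:
    # Reject missing or oversized input before it reaches the LLM or database
    if not user_query or not user_query.strip():
        raise ValueError("Empty query.")
    if len(user_query) > MAX_QUERY_LENGTH:
        raise ValueError("Query too long.")
    return user_query.strip()

def hash_password(password: str) -> tuple:
    # Salted key-derivation hashing; never store plaintext passwords
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest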
Once optimisations or improvements are made, you should gather feedback again. Steps 9 and 10 form an ongoing loop that should be repeated until the chatbot is adequately refined for practical usage. Even after deployment, these steps should be revisited regularly, given the need for continuous updates, the emergence of new technologies, and the necessity of staying competitive.
Step 11: Deploy the Chatbot
Deploying a product that scales to millions of users is a huge and critical task. I can’t give much insight into deployment at production level, but for a toy project, there are many free or low-cost hosting services available, such as Heroku, PythonAnywhere, and Netlify. For my previous personal chatbot project, I used PythonAnywhere, and it worked wonders for a free tier. I just had to upload my code, deploy the product, and click refresh every three months to keep the service hosting my website. In other words, I can host it indefinitely as long as I refresh it every three months (provided PythonAnywhere doesn’t change its policy or close its service).
For scalability, server hosting can become much more expensive, depending on the service providers. This step is usually handled by experienced and skilled engineers and requires a good understanding of system design and architecture to ensure the chatbot can cater to millions of users.
Congratulations if you made it here! Building a full-stack chatbot application is not easy and requires persistence, patience, and a lot of trial and error.
Conclusion
To emphasise once again, this article is not a step-by-step tutorial meant to spoon-feed you on how to build a chatbot. Rather, it aims to provide insights and ideas that you can adapt, modify, and implement in your own context. Some of the insights discussed here also apply to other software engineering contexts.
Building a chatbot was my first personal project, and it was the very experience that allowed me to dive deep into software engineering as a mathematics undergraduate. This hands-on journey gave me the foundation and confidence to tackle a more complex AI chatbot project during my summer internship.
I hope this article helps you in some way. Thanks for reading this far!