top of page

Implementing Chatbots: A practical Approach



What are Chatbots?

A chatbot is a software that simulates human conversation, or in other words, is a dialogue system that enables text/voice communication in natural language. OpenAI’s ChatGPT is a prime example of a chatbot today. AI chatbots are trained on large amounts of data and can interpret almost any prompt given in natural language to provide a suitable response.

In this article we will be focusing more on implementing chatbots within business websites, the most common use case for it being customer service, answering questions about the business, helping the customer navigate the website, and answering other FAQs.

Even though AI and ML technology is not new, generative AI like chatbots need to be considered differently as they interact with users as opposed to working in the backend. In our experience, the novelty of the technology, the existence of Ethical and Fairness risks combined with the regulatory requirements make the implementation of such technology challenging. Keeping in mind the challenges around implementing such technology, let’s look at the implementation of AI technology like Chatbots, their risks and regulatory requirements Risks around Chatbots

The implementation and use of AI technology like Chatbots is advancing at an unprecedented speed. With more businesses looking into implementing such technology, we can look at existing chatbots to gain an understanding of the kinds of risks that they pose today

  1. Cybersecurity risks: Chatbots often face security risks from malicious users. Risks could be in the form of prompt injections, indirect prompt injections, data poisoning, jailbreak attacks, SQL injections, API vulnerabilities and source code vulnerabilities. Further, the kind of output received from Chatbots can be influenced and manipulated for social engineering, phishing and malicious advertising.

  2. Privacy risks: Chatbots pose a number of privacy risks. AI models could be trained with data scrapped from the internet or even other personal information obtained without consent. It is important that we understand what data was used and if consent has been obtained. Further, Chatbots themselves may collect personal information while they are being used. An example of the same can be found with ChatGPT, its Privacy Policy states that the technology collects the users IP address, data regarding the user’s interaction with the chatbot, and the user’s browsing activities. When using chatbots, users may also need to provide their email or identify themselves in other way like through an account ID or username. In the case of ChatGPT, users are first required to create an account with OpenAI by providing their email. Further, depending on the purpose for which the Chatbot is used, there can be different technologies working together. For example, a chatbot could be using one technology for translation, and a completely different technology for natural language processing etc. It is beneficial for us to understand what technology or technologies are used under the hood and how the underlying model trained, licensing terms etc.d. We also need to understand if the data collected by the chatbot will be used to improve the model, if the data will be shared with any third-party, who has ownership of the data, how to restrict or prevent the usage of data for other purposes and so forth.

  3. Deceptive Trade Practice Risks: The use of Chatbots can be considered an unfair or deceptive trade practice if a company or employee uses a chatbot while interacting with a consumer without their knowledge. In other words, if the consumers are made to believe that they are interacting with a human when they are actually interacting with chatbot. The Federal Trade Commission (FTC) has released guidance stating that Section 5 of the FTC Act, which prohibits “unfair and deceptive” practices, gives it jurisdiction over the use of data and algorithms to make decisions about consumers and over chatbots that impersonate humans. To address this issue, the FTC emphasizes transparency. “[W]hen using AI tools to interact with customers (think chatbots), be careful not to mislead consumers about the nature of the interaction,” the FTC warns. Companies should also be transparent when collecting sensitive data to feed into an algorithm to power an AI tool and should, explain how an AI system’s decisions impact a consumer and ensure that decisions are fair.

  4. Discrimination and Ethical risks: AI models are built by humans and are trained with data created by humans. Sometimes the data might not be accurate or could be skewed. Any inaccuracies,biases in the dataset can an find its way into the output generated by chatbots. Chatbots can also be manipulated to make inappropriate comments to users, can be emotionally insensitive to users. It’s important to monitor for such risks and carefully consider the potential for prejudicial or discriminatory impact. Companies should conduct regular testing to understand disparities, and consider a process where humans review the outputs to ensure compliance with anti-discrimination laws and to safeguard against reputational harm.

  5. Intellectual Property risks: Data used to train AI models may include data that is patented, trademarked or copyrighted. Any data obtained without proper authorization cannot be used for training. Further ownership of content produced by AI models is also disputed as multiple parties are involved in its developments. It’s important to monitor for such risks as it can lead to legal disputes

  6. Accuracy risks: While chatbots are relied on heavily to produce correct outputs, they simply produce the most likely response to the prompt based on their training data. These systems do not actually ‘know’ the information and hence can produce incorrect or inaccurate information at times which can cause reputational damage to the company providing the chatbot service.

We also see risks that arise that do not fall into the typical risk categories. In October this year we saw that Snapchat was investigated by the Information Commissioner's Office in the UK for allegedly having inappropriate comments with children on adult topics such as alcohol and marijuana. In another case involving Snapchat’s AI, it was found that the bot posted a story on its own, an action that it is not currently capable of doing which raised concerns among users.

When talking about chatbots used within business websites, the bigger risks surround the responses provided by the chatbots, which could be the system going rogue, revealing personal information, providing inappropriate responses, revealing sensitive company information, etc.

Chatbots that interact with customers and website visitors are considered a reflection of the company. Just like a negative or inappropriate interaction with a sales or customer representative when you walk into a store would drive you away, similarly, a negative or inappropriate interaction with a chatbot could result in loss of potential customers and loss in faith of existing companies the creation of a bad company image

We see that prioritizing privacy risks positively helps mitigate other prevalent risks, an example can be providing appropriate privacy policies which aid in mitigating Deceptive Trade Practice risks. Below we talk about our approach to implementing technology like Chatbots. Our approach focuses on building compliance for privacy regulations which also inadvertently help in mitigating certain other chatbot risks. Regulations around Chatbots

Under the GDPR, the personal data processed by chatbots should be based on the Correct Legal Basis as defined under Article 6 and Article 9 of the GDPR. Under Article 6, the legal basis for processing could be obtained consent, meaning that consent must be obtained from the individual before the processing of their data. Legitimate interest on the part of the service provider could also serve as a legal basis for processing the data. For example, when users interact with a chatbot with the intention of obtaining answers to certain questions, the necessary responses to these questions can be considered legitimate interest. However, purpose limitation is applicable here, meaning that data processing cannot go beyond answering the user’s request.

Article 5(c) of the GDPR focuses on Data minimization, stating that the personal data processed should be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed

Under Article 13 of the GDPR, chatbot service providers have the obligation to provide information to the users regarding the data being processed, the purpose of processing, the operations in which their personal data is processed, the legal basis, recipients of the personal data and sources of the personal data.

Similar to the GDPR, state privacy laws in the US also focus on Data Minimization, providing users with Privacy Notices and allowing them to exercise their Privacy Rights. In the case of minors and the processing of sensitive data, informed consent must be obtained.

The US privacy laws differ from the GDPR by requiring that the processing of Sensitive Personal Information, under most privacy laws like Virginia, Colorado and Connecticut require opt-in consent of the individuals. While California, Iowa and Utah require that privacy notices are made available and a mechanism to opt-out is provided. Approach to Implementing Chatbots

Identifying the sources of data used for training The data used to train the AI models ultimately ends up defining the quality of the output produced by the model. If the quality of training data is high, the quality of output will be high. The data of chatbot users is collected to further train and update the technology, this means that the more one interacts with an AI chatbot, the more the technology will learn about the user.

In order to fulfill the transparency requirement of privacy regulations, the Privacy Policy of the chatbot will need to specify what information is collected, from where it is collected, by whom it is collected, the purpose for collecting, the period for which it is to be retained, to whom the data will be shared and how the users can exercise their rights. Obtaining consent for processing Mechanisms need to be put in place for users to consent to their data being collected and processed. This consent should be informed consent, meaning that the user should fully understand the terms of processing before making their decision. The ability to provide consent is usually available on the privacy policy or on a pop-up similar to a cookie banner. Or in the case that the legal basis for processing is legitimate interest, then users should be provided with the option to object to their data being processed and subsequently the chatbot service provider should process their objection. Maintaining data processing records and Chatbot logs It is required to maintain records of data processing activities especially if the model uses large amounts of personal data for training and fine tuning the model. This record should also include the legal basis of processing, the risks that were identified with regard to users’ data and security protections in place. These records are also required to fulfill the transparency requirements of regulations.

Chatbot logs should also be reviewed to find any personal data that has been collected without consent so that it can be destroyed accordingly.

Chatlogs should also be reviewed to gain an understanding of the kind of output the user receives. Based on the insights from these logs, the model can be fine-tuned to produce better, more appropriate results. Providing users with privacy notices The legal basis for processing the data will have to provided to users through the privacy policy along with

  • The information it is collecting

  • From where the information is collected

  • By whom it is collected

  • The purpose for collecting

  • The period for which it is to be retained

  • To whom the data will be shared and

  • How the users can exercise their rights.

It is important for the privacy notice to be made easily available to users. With Garante’s ban of ChatGPT, we saw that one of the DPA’s key concerns was that the privacy notice of OpenAI was not properly made available to users within Italy whose data was used by the technology. Ultimately, one of the requirements of the DPA was that the parent company was asked to rent media spaces to advertise their privacy policy to people within Italy. Allowing users to exercise their data subject rights Mechanisms need to be put in place to allow users to exercise their rights with respect to the chatbot. Meaning that users can request a copy of the data that the chatbot service provider has on the data subject (user). They can request to correct the data that the provider has on them and finally they can request the provider to delete the data that the provider has on them. This can be done through a DSAR request form which can be added to the privacy notice. These processes can also be introduced into the chatbot feature itself, by introducing a query and response format for such requests in the chatbot. Conducting privacy impact assessments Especially in cases of processing of sensitive personal data, privacy impact assessments need to be conducted to understand the impact of the data on the privacy of the individuals. Privacy impact assessments might be needed while training the data, they might again be needed for output data as the use cases could be different once the data has been trained. When implementing complex technology like chatbots, we find it beneficial to look at the existing implementations by companies, what worked and what didn’t. We also look closely at the different existing and upcoming regulations to ensure that our processes align with what regulations require today and what they will require tomorrow. Implementing AI technology can be considered a key challenge in the upcoming year. Having conversations around implementation and understanding the benefits as well as the risks and regulatory requirements sooner rather than later will be beneficial in implementing sustainable technology as well as introducing auditing and compliance frameworks.

Featured Posts

Recent Posts

Follow Us

  • Facebook Basic Square
  • Twitter Basic Square
  • Google+ Basic Square
bottom of page