Info Image

The Security Risks of Gen AI Chatbots Featured

The Security Risks of Gen AI Chatbots Image Credit: donut3771/BigStockPhoto.com

Generative AI chatbots have risen in prominence as businesses begin to harness conversational AI. They are used for a variety of purposes, such as streamlining customer service, automating tasks, and personalising user experiences. One company, Klarna, made the bold claim that their Gen AI chatbots are able to do the work of 700 customer service agents [1], serving customers 24/7 worldwide. These levels of productivity are unprecedented, and could be very compelling for organisations as they consider deploying chatbots. Nonetheless, there are also associated security risks which must also be considered.

Global Regulations on Generative AI

As global regulators begin to introduce policies to govern AI, businesses deploying chatbots must first navigate the data protection regulations to safeguard user privacy and avoid regulatory pitfalls. Various policies are being introduced globally such as the EU AI Act [2], which obliges high risk apps to be more transparent about data usage. Similarly, Singapore’s Model AI Governance Framework [3] identifies 11 key governance dimensions around issues such as transparency, explainability, security, and accountability to safeguard consumer interests, while allowing space for innovation

The recent PDPC AI Guidelines [4] in Singapore likewise encouraged businesses to be more transparent when seeking consent for personal data use, through disclosure and notifications. Businesses have to ensure that AI systems are trustworthy, which provides consumers with confidence over how their personal data is being used.

Internationally, the new ISO 42001 [5] specifies requirements for establishing, implementing, maintaining, and continually improving Artificial Intelligence Management Systems within organisations.

New risks with Gen AI Chatbots

Modern Gen AI-enabled chatbots are able to profile individuals very quickly through a massive amount of historical user interactions and data inputs, enabling detailed profiles to be constructed. Personal information such as interests, preferences, gender and personal identifiers can be deduced from seemingly innocuous conversations with chatbots. This ability raises privacy and manipulation concerns and poses a real threat if the information falls into the wrong hands.

Through adversarial prompt techniques such as prompt injection (inserting malicious content to manipulate the AI's output), prompt leakage (unintentional disclosure of sensitive information in responses), and jailbreaking (tweaking prompts to bypass AI system restrictions), unauthorised access can be gained to sensitive information, including passwords, personally identifiable information (PII) and even training data sets.

Rogue chatbots are also a concern, where malicious actors deploy chatbots with the intention of extracting sensitive information from unsuspecting users. These rogue chatbots may impersonate legitimate entities or services to deceive users into disclosing confidential information.

In addition to data leakages, AI regulators and ethicists are also concerned about bias in AI, especially when deployed in recommendation or decision making systems. Generative AI systems are likely to have biases inherent in their training data or algorithms, which can result in unfair or discriminatory outcomes. It is essential for chatbot developers, as ‘Human- in and over-the-Loop’, to recognise and address these biases in their development and deployment to ensure fair and equitable use of AI technologies.

Considerations to keep in mind to safely engage with chatbots

  • Chatbots shouldn’t elicit personal information. Users should be wary of sharing personal data in the chat itself: safeguard sensitive data by refraining from sharing confidential details like passwords or financial information unless you're certain of the chatbot's security measures. Be cautious of oversharing personal information.
  • Verify authenticity of the chatbot. Look for trusted domain, security protocols in place: ensure you're interacting with a legitimate chatbot from a trusted source. Beware of imposters or malicious entities posing as chatbots to deceive users and extract personal or sensitive information.
  • Practice Vigilance: be wary of suspicious requests or prompts that seem out of the ordinary, and refrain from clicking on links or downloading files from untrusted sources.
  • Review Chatbot domain Privacy Policies: familiarise yourself with the privacy policies of chatbot providers to understand how your data is collected, stored, and used. If possible, interact with chatbots that prioritise user privacy and adhere to stringent data protection regulations.

Businesses would benefit by referring to robust frameworks such as the IS42001 AI Management System, the ISO 23894 AI Risks and Controls or the guidelines from the regulators (EU AI Act, AI Model Governance Framework) for a more systematic approach in developing and deploying Gen AI chatbots that are public facing and process personal data. Be transparent to demonstrate accountability in adhering to best practices and legal obligations so users can be confident in their use.

Ultimately, while chatbots offer unparalleled convenience and efficiency, they can pose significant security risks and privacy concerns with lax practices and deployment by new ‘AI automation agencies or new AI no-code solution’ developers who are not versed in enterprise security or data protection guidelines. It is critical that users are aware of this in order to maximise the benefits from chatbots whilst also keeping their personal data safe.

 

References

[1] https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/  

[2] https://artificialintelligenceact.eu/

[3] https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework

[4] https://www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems

[5] https://www.iso.org/standard/81230.html  

NEW REPORT:
Next-Gen DPI for ZTNA: Advanced Traffic Detection for Real-Time Identity and Context Awareness
Author

Alvin Toh is the Chief Marketing Officer of Straits Interactive.

PREVIOUS POST

Push to Eliminate 'Digital Poverty' to Drive Demand for Satellite-Powered Broadband Connectivity Post Pandemic