We will now discuss 4 main types of security, privacy and compliance risk associated with the use of AI tools in the workplace.

Leaking PII or Confidential Data

AI Chatbot phishing scam

Biases and Misinformation

Deepfakes & Identity Theft

Leaking PII or confidential data is often a form of legal violation. Companies or individuals can be subject to fines and other legal consequences.

This is Sally. Sally’s manager John says that we would like to write a thank you email to their new client Paul at Greenmount corporation to thank him for assigning them the contract for their new product launch. Sally uses chatGPT to craft an email to Paul.

Accordion Content

Leaking PII or Confidential Data​

Leaking PII or confidential data is often a form of legal violation. Companies or individuals can be subject to fines and other legal consequences.

Example

This is Sally. Sally’s manager John says that we would like to write a thank you email to their new client Paul at Greenmount corporation to thank him for assigning them the contract for their new product launch. Sally uses chatGPT to craft an email to Paul. 

By providing chatGPT with the contact details of Paul and also information about Greenmount’s new shampoo product, Sally has shared PII data and confidential data with an external organisation.

This is a violation of PII rules and likely a violation of business confidentiality clauses that are common among companies. 

Phishing scams through AI chatbots

Examples and types of phishing scams:

(i) links that bring users to a fake login page that steals login information through emails, text messages 

(ii) attachments like PDF files that contains malicious content that can create security vulnerabilities

(iii) clone an exact replica of a legitimate website that requires users to update their personal information and steals their information in return

How do scammers and threat actors make use of AI-powered tools or AI-search engines to conduct phishing operations? 

(i) training data poisoning – by scattering weblinks and malicious documents across the internet which AI tools might pick up 

(ii) by directly influencing and biasing the underlying AI models, such that the models generate responses that serves the phishing objectives 

(iii) when they are application security vulnerabilities, allowing for prompt injection that skews the responses of an AI agent 

Example 1

Sally is searching for restaurant options for an upcoming business lunch near a client’s house, using an AI-powered search engine. 

Sally downloads the file via the link and found that there was no menu or promo code. She then proceeds to make a reservation with one of the restaurant and leaves the downloaded PDF file in her Downloads folder. 

Unknown to Sally, she has just become the victim of a personalised phishing scam.  

  • It is highly common for hackers to embed malicious scripts and codes within documents, including PDF documents. When users download and open the documents, the malicious code is activated and can lead to various forms of security vulnerabilities.
  • Because AI-powered search engines crawl the public internet for up to date information, they can be hijacked to deliver malicious documents that compromises system security.
  • Given the human-like and intelligent nature of most AI tools, users tend to assume that they can be trusted.
  • This is especially so when the responses are personalized and the AI tool is offered by a highly regarded provider.

Example 2

Aside from using documents for phishing scams, scammers and threat actors can also use fake links to mislead users into providing confidential information. 

Here’s a different example –

 

In this different example, Sally has been misled to a scam site where her credit card details, along with name and email address can be stolen.

Biased information & Misinformation

Lauren is a business analyst and is currently evaluating a company TopHanks. She has been asked to verify some information that the company has provided.  

  • The CEO, Lucas Riyani, of TopHanks claims to have been a graduate from a prestigious university and has various research publications under his name.
  • TopHanks has supposedly won various government contracts in other countries. 

Lauren has recently been introduced to a number of AI chat agents that are effective in crafting summaries based on publicly available information on the internet.

She wants to impress her manager by being detailed and fast with her work. She therefore asks the AI chat agent the following questions: 

Because the initial response from the chat agent did not line up with Lauren’s understanding of the company’s founder, Lauren provided additional information that biases the AI chat agent towards providing a response to confirm her views. 

Next, Lauren continues to use the AI agent to research on the company’s government contracts.

  • The information that the chat agent provides aligns with what Lauren and her manager has been told about TopHanks industries.

  • However, what Lauren was not aware off, is the fact that TopHanks industries had hired a team of content marketers to publish a list of non-existent contract wins on known websites like Wikipedia and other industry sources.
  • While the claims were fraudulent, there was no way the chat agent could verify the validity of the information as it simply reproduces what is publicly shared on the internet.

  • Lauren also did not independently verify the information via official sources. 

Deepfakes & Identity Theft

They can be used to produce images, videos or audio clips that are seemingly real.

Malicious actors can use deep fakes to bypass company security or for deception.

Example

Rafael is a reporter with the local newspaper. He receives an email tipoff from Suzanne about an extra-marital love scandal between the CEO of Greenmount and a local social media celebrity.  

Suzanne claims to be the wife of Greenmount’s CEO and has attached multiple photos and two videos of the subjects being together. The email was also sent to various other reporters from competing news outlets.  

As an experienced reporter, he knows that photos can be doctored using photo editing tools. He sent the photos to be checked by his graphics team and watched the 2 videos. 

He is now deciding whether to publish the story and decides to do a source check.  

He exchanged emails with Suzanne and eventually did a phone call with her. 

During the call, he sensed that Suzanne has a deep understanding about the personal life of Greenmount’s CEO. She knew where he stayed. Shared details of how they first got together, their travels and how he became increasingly distant after she had a baby. After a long 15 minutes call, Suzanne hung up the phone because her baby was crying in the background. 

Rafael searched up the Greenwhich CEO’s social media pages and found that he indeed had a new baby recently. Through videos of him with Suzanne, he concluded that the lady he just had a call with sounded exactly the same.

3 Months after publishing the story, Rafael’s company was sued for defamation. Lawyers proved that – 

  • The videos were all AI-generated using, based off videos of other individuals and combined with photos of Greenmount’s CEO that are on news and social media. 

  • The person who Rafael spoke with was not the wife Suzanne.

  • Using audio samples from the CEO’s social media account, a professional corporate espionage team had used social engineering, publicly known information about the CEO’s personal life and voice alternation to deceive Rafael.


This short video will show you how easily it can be done with AI technology.