There are four main types of security, privacy, and compliance risks associated with the use of AI tools in the workplace.

Leaking PII or Confidential Data

This is privileged information that employees can access through business plans, emails, CRM systems, and the like.

With AI tools, people share information with chatbots more readily than before, and traditional endpoint protection tools are often unable to limit such behavior.

Phishing scams through AI chatbots

Phishing is no longer limited to email. Compromised chatbots can lead users to volunteer information or open malicious links.

Compromised chatbots can deliver malicious links or attachments, or trick users into revealing sensitive information. Users are more likely to fall for chat-based scams than for unsolicited scam emails, because chat conversations tend to be user initiated.

Biased Information and Misinformation

Inaccurate or false information provided by chatbots in response to users' questions.

Present-day chatbots can sound highly convincing. When users trust the information these bots provide without further verification, the result can be reputational or business losses.

Deepfakes and Identity theft

Realistic fakes of photos, videos, and voice clips that facilitate scams, misinformation, and social-engineering attacks.

Deepfakes are created from images, videos, or audio clips that users have shared publicly (e.g. on social media) or while using specific tools on the internet. They can lead to identity theft and other criminal activity.