First published: August 24th 2023
Last updated: August 24th 2023

2023 was the year AI went mainstream. It appears everyone has a story about how they used ChatGPT to make their personal or working life easier.

From an employer’s perspective, the rapid progress of AI raises difficult questions. Although a chatbot on the company website can be a useful tool for interacting with customers, there are tricky ethical questions and business risks to consider as the rollout of AI technology continues.

Employers are grappling with issues such as whether staff should be permitted to use AI to make their jobs easier, how to address data protection concerns, and whether the outputs generated by AI tools are accurate enough to rely on.

Business risks of AI

For employers, the key risk to assess is the scale of the damage your business could suffer if staff use AI incorrectly.

Many people are familiar with the US lawyer who used ChatGPT to help prepare a case, with disastrous results. His court filings cited several cases that the AI had simply invented; he hadn't realised the technology could fabricate precedents or otherwise produce inaccurate information.

Fortunately, there are some prudent actions employers can take to protect their business against the risks of employees using AI tools.

Develop an AI Policy

To avoid an embarrassing situation like the one suffered by the hapless US lawyer, your business should consider developing an AI Policy.

This policy can address the specific risks affecting your business. Some of the most common issues arising from the use of AI in the workplace include the following.

Protection of Confidential Client and Employee Information

While many of the tasks AI is typically used for pose no obvious risks, it's vital that employees understand that sensitive company data must never be entered into an AI tool.

AI tools analyse huge amounts of data to generate responses, and information submitted in prompts may be stored by the provider or used to train future models, so it's important that no personal information about your employees or customers is disclosed.

If an employee submits confidential information to ChatGPT or any other AI tool, your business is exposed to a range of privacy, commercial and data protection risks.

Your AI policy needs to clearly set out what types of data employees are permitted to submit to AI tools.

Intellectual Property Risks

Your business also needs to consider intellectual property risks. If your business publishes content online, you need to ensure that AI-generated content does not reproduce material that is subject to someone else's copyright.

AI tools don’t tend to specify the sources of the content they generate, and you therefore have no real way to check if the publication of AI-supported content will breach someone else’s intellectual property rights.

The AI tool may generate output by using existing content that appears on the internet, rather than producing original work.

If that's the case and your business passes that work off as its own, you will likely hear from the rightful owner of the content.

Expert HR assistance with developing an AI Policy

With the use of AI becoming increasingly mainstream, now is the time to start preparing your AI Policy.

To get the most out of AI technology, you need to inform staff about how to use the tools responsibly.

With a strong policy in place, you can ensure your business is in a position to reap the benefits of this powerful new technology while safeguarding your operations against confidentiality, intellectual property and data protection risks.

For instant advice on any AI risks facing your business, call one of our employment law and HR experts on 1800 719 216.
