
Workplace Confidential: should you use ChatGPT for work?



The concrete deliverables of knowledge work tend to involve some form of documentation. The final deliverable might be code, images, a spreadsheet, an email, a survey, an article, a slide deck, meeting notes, a policy document, a whitepaper or some kind of report - such as the infamous TPS report. Generative AI systems, such as ChatGPT, promise to increase our productivity through idea generation or by providing full or partial text, code or imagery for knowledge-based work. Much has been written about the implications of trusting what comes out of a system like ChatGPT, but we should also think about what we are putting into the system.

What are the implications for workplace confidentiality when using ChatGPT for work related projects?

What’s shared with a chatbot doesn’t stay with the chatbot

People tend to think about keeping something confidential in terms of not telling other people or not sharing information outside of certain spheres. Employees are also becoming more digitally savvy in terms of understanding data sharing and ensuring that access to workplace systems and passwords is secure. There might be various security levels applied to data and information in an organization, such as “internal only”, “department X only” or the more restrictive “need to know basis”. What we may not yet be considering is the role of new technologies when it comes to sharing information.

Generative AI tools have the potential to compromise privacy and confidentiality. Asking a chatbot like ChatGPT a work-related question may not seem risky. However, when you use this tool, you are sharing data with a third party.

Some professions are issuing guidance about these risks through their professional associations.


The Law Society of Ontario’s technology practice management guidelines advise lawyers to be aware of the security risks in using information technologies and to adopt adequate measures to protect against security threats. “Putting any type of confidential information in ChatGPT may be playing with fire.” (Urquhart)


Terms and Conditions

In order to use one of the many generative AI tools available, a person will have agreed to the terms and conditions that outline the use of the product. Very few people read this information.

A Deloitte survey found that only 9% of people read terms and conditions.

Terms and conditions for AI products will usually make provisions to use your data to improve the service, which means that the company that makes the product can use the data you share with it. For example, your data may go into a future training dataset for a machine learning system. The data you share may also be subject to content moderation and human review, which means humans might also have access to it. Content moderation may take place in other countries, which means data might leave a particular jurisdiction and no longer be protected by that jurisdiction's laws. In some cases, there are provisions in the agreement to share the data with third parties or even sell it.

In addition to how data is used, there are typically also stipulations around liabilities and legal recourse. For example, ChatGPT’s terms contain a clause that precludes class action lawsuits and limits the liabilities of its creator, OpenAI. Terms and conditions will be written to protect the interests of the tool's creator - no surprises there!

Organizations would typically not approve the use of a new product or service without a legal review of its terms and conditions. In some cases, negotiations might take place to find terms that are acceptable to both parties. People who use a service like ChatGPT while doing company-related work are circumventing this process.


But we already ask Google

We’ve been primed for decades now to use search engines like Google in our day-to-day work. How is using a chatbot any different? You could argue that using ChatGPT isn’t any different from using Google or other search engines. All of our queries are already being logged and stored, and that data is being shared with third parties and monetized. This is true. However, the kinds of questions being asked and the information being offered when using ChatGPT might be more sensitive. If the experience of using the tool is less like looking up information in a database and more like a conversation, the likelihood of divulging confidential information increases. There is also the incentive of having generative AI like ChatGPT create (in whole or in part) that presentation or tedious report that needs to be produced. A search engine doesn’t do that.

The affordances of the technology - the ability to create new content - change the stakes for workplace confidentiality and create new incentives to share more information and, potentially, more sensitive information.

The examples below, posted on OHS Canada, are fictitious, but they give a sense of how we might ask ChatGPT for fairly context-specific information. Imagine if the conversation got even more personal. This is the risk of a technology that carries on a dialogue and appears to “understand” us.



Clear guidance is needed

Where does your organization stand on the use of generative AI? This is an important question for workplaces to address. Leaders will need to evaluate the technology in terms of risks and benefits and make decisions that can inform organizational policy. A good place to start is with data sharing. Does your organization have a formal policy about data sharing? Does this policy speak to the process and approvals for sharing data with third-party apps in general? Even if you have a policy in place, you may need to update it to address the unique challenges of generative AI tools. In addition, providing awareness training so that employees understand what is at stake is another proactive step employers can take. For now, in the absence of any regulations or official guidance, the onus is on employers to clearly outline their expectations for employees' use of generative AI in the workplace, including any limits on that use.


By Katrina Ingram, CEO, Ethically Aligned AI

________

Sign up for our newsletter to have new blog posts and other updates delivered to you each month!

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com

© 2023 Ethically Aligned AI Inc. All rights reserved.




