What not to share with ChatGPT if you use it for work
As hype and anxiety drive interest in the rapid rise of artificial intelligence, we examine how AI is changing the future of work – and how, in many ways, that future is already here.
The question is no longer "What can ChatGPT do?" It's "What should I share with it?"
Internet users are generally aware of the risks of possible data breaches, and the ways our personal information is used online. But ChatGPT's seductive capabilities seem to have created a blind spot around hazards we normally take precautions to avoid. OpenAI only recently announced a new privacy feature which lets ChatGPT users disable chat history, preventing conversations from being used to improve and refine the model.
"It's a step in the right direction," said Nader Henein, a privacy research VP at Gartner who has two decades of experience in corporate cybersecurity and data protection. "But the fundamental issue with privacy and AI is that you can't do much in terms of retroactive governance after the model is built."
Henein says to think of ChatGPT as an affable stranger sitting behind you on the bus, recording you with a camera phone. "They have a very kind voice, they seem like nice people. Would you then go and have the same conversation with them? Because that's what it is." He continued: "It's well-intentioned, but if it hurts you — it's like a sociopath, they won't think about it twice."
Even OpenAI's CEO Sam Altman has acknowledged the risks of relying on ChatGPT. "It's a mistake to be relying on it for anything important right now. We have lots of work to do on robustness and truthfulness," he tweeted in December 2022.
Essentially, treat ChatGPT prompts as you would anything else you publish online. "The best assumption is that anyone in the world can read anything you put on the internet — emails, social media, blogs, LLMs — do not ever post anything you do not want someone else to read," said Gary Smith, Fletcher Jones Professor of Economics at Pomona College and author of Distrust: Big Data, Data-Torturing, and the Assault on Science. ChatGPT can be used as an alternative to Google Search or Wikipedia, as long as it's fact-checked, he said. But it shouldn't be relied on for much else.
The bottom line is that there are still risks, made even more precarious because of ChatGPT's allure. Whether you're using ChatGPT in your personal life or to boost work productivity, consider this your friendly reminder to think twice about what you share with ChatGPT.
Understand the risks of using ChatGPT
First, let's look at what OpenAI tells users about how they use their data. Not everyone's privacy priorities are the same, but it's important to know the fine print for the next time you open up ChatGPT.
1. Hackers might infiltrate the super popular app
First and foremost, there's the possibility of someone outside of OpenAI hacking in and stealing your data. Using any third-party service carries an inherent risk of data exposure from bugs and hackers, and ChatGPT is no exception. In March 2023, a ChatGPT bug exposed conversation titles, the first message of new conversations, and payment information belonging to ChatGPT Plus users.
"All this information you're pushing into it is highly problematic, because there's a good chance it might be susceptible to machine learning attacks. That's number one," said Henein. "Number two, it's probably sitting in clear text somewhere in the log. Whether or not somebody is going to look at it, I don't know, neither do you. That's the problem."
2. Your conversations are stored somewhere on a server
While it's unlikely anyone will ever read them, certain OpenAI employees do have access to your conversations. On the ChatGPT FAQs page, OpenAI says user content is stored on its systems and on other "trusted service providers' systems in the US." So while OpenAI removes identifiable personal information, before it's "de-identified," your content exists in raw form on its servers. Authorized OpenAI personnel can access user content for four explicit reasons, one of which is to "fine tune" its models, unless users opt out.
3. Your conversations are used to train the model (unless you opt out)
We'll get to opting out later, but unless you do so, your conversations are used to train ChatGPT. According to its data usage policy, which is scattered across several different articles on its site, OpenAI says, "we may use the data you provide us to improve our models." On another page, OpenAI says it may "aggregate or de-identify Personal Information and use the aggregated information to analyze the effectiveness of our Services." This means that, in theory, something like a business secret could surface publicly through whatever the model "learns."
Previously, users could only opt out of sharing their data with the model through a Google Form linked on the FAQs page. Now, OpenAI has introduced a more explicit way to disable data sharing: a toggle setting within your ChatGPT account. But even with this new "incognito mode," conversations are stored on OpenAI's servers for 30 days, and the company says relatively little about how it keeps that data secure.
4. Your data won't be sold to third parties, the company says
OpenAI says it does not share user data with third parties for marketing or advertising purposes, so that's one less thing to worry about. But it does share user data with vendors and service providers for maintenance and operation of the site.
What might happen if you use ChatGPT at work?
ChatGPT and generative AI tools have been touted as the ultimate productivity hack. ChatGPT can draft articles, emails, social media posts, and summaries of long chunks of text. "There isn't an example that you can possibly think of that hasn't been done," said Henein.
But when Samsung employees used ChatGPT to check their code, they inadvertently revealed trade secrets. The electronics company has since banned the use of ChatGPT and threatened employees with disciplinary action if they fail to adhere to the new restrictions. Financial institutions like JPMorgan, Bank of America, and Citigroup have also banned or restricted the use of ChatGPT due to strict financial regulations about third-party messaging. Apple has also banned employees from using the chatbot.
The temptation to cut mundane work down to seconds seems to overshadow the fact that users are essentially publishing this information online. "You're thinking of it in the same way that you think of a calculator, you're thinking of it like Excel," said Henein. "You're not thinking that this information is going into the cloud and that it's going to be there in perpetuity, either in a log somewhere, or in the model itself."
So if you want to use ChatGPT at work to break down concepts you don't understand, write copy, or analyze publicly available data, and there's no rule against it, proceed with caution. But be very careful before you, for example, ask it to evaluate the code of the top-secret missile guidance system you're working on, or have it summarize your boss's meeting with a corporate spy embedded at a competing company. That could cost you your job, or worse.
What might happen if you use ChatGPT as a therapist?
A survey conducted by healthtech company Tebra revealed that one in four Americans is more likely to talk to an AI chatbot than to attend therapy. Instances have already popped up of people using ChatGPT as a form of therapy, or seeking help for substance abuse. These examples were shared as exciting use cases for how ChatGPT can be a helpful, non-judgmental, and anonymous conversation partner. But your deepest, darkest admissions are stored somewhere on a server.
People tend to think their ChatGPT sessions are like a "walled garden," said Henein. "At the end, when I log out, everything inside of that [session] flushes down the toilet, and that's the end of the conversation. But that's not the case."
If you're a Person On The Internet, your personal data is already all over the place. But ChatGPT's conversational format is different: it can make you feel compelled to divulge intimate and personal thoughts. "LLMs are an illusion — a powerful illusion, but still an illusion reminiscent of the Eliza computer program that Joseph Weizenbaum created in the 1960s," said Smith.
Smith is referring to the "Eliza effect," or the human tendency to anthropomorphize things that are inanimate. "Even though users knew they were interacting with a computer program, many were convinced that the program had human-like intelligence and emotions and were happy to share their deepest feelings and most closely held secrets."
So given how OpenAI stores your conversations, resist the illusion that ChatGPT is a mental health wizard, and don't blurt out your innermost thoughts unless you're prepared to broadcast them to the world.
How to protect your data on ChatGPT
There's a way to go incognito when using ChatGPT. Your conversations are still stored for 30 days, but they won't be used to train the model. Click your account name to open Settings, then select "Data Controls" and toggle off "Chat History & Training." You can also clear past conversations by clicking "General" and then "Clear all chats."
Source: Mashable