Is ChatGPT a Safe Space?

ChatGPT, developed by OpenAI, has become a household name in the realm of AI-driven chatbots. Capable of crafting essays, conducting research, and even drafting speeches, the chatbot generates content by drawing on data scraped from across the web and recombining it in creative ways. Users can tailor their interactions, specifying length, style, and detail. However, with such power comes the potential for both brilliance and malevolence. This raises a pressing question: How safe is ChatGPT?

Safety Protocols in Place

As with other new digital tools, some users have concerns about privacy, specifically the fact that the company records every conversation to improve the chatbot. Even after you delete a chat from the sidebar, a copy continues to exist in de-identified form. While the platform does try to ensure that information is largely protected, users are discouraged from revealing confidential information when using ChatGPT.

To bolster security, ChatGPT encrypts conversations in transit and at rest, so that messages cannot be read if they are intercepted on the network or lifted from storage. Note that this is not end-to-end encryption: OpenAI itself can still access conversations, which is how the reviewers described below are able to audit them. The company also says it does not share user data with third parties; instead, it keeps the data on its own US-based servers, where it may be used to train future large language models like GPT-5.

Another nod to safety is that web browsers and smartphone operating systems do not grant ChatGPT broad access to the device it runs on, greatly reducing the chances of being hacked through it.

The Human Cost of ChatGPT’s Security

When you type a prompt into ChatGPT, it is recorded and sent to OpenAI’s servers. OpenAI hires contractors to review some of these conversation records and provide feedback to help the chatbot improve. The human touch is important: asked a question, a model without human input would spit back a related sentence rather than a reply. With human involvement, it learns to answer with precise, relevant information.
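
As a rough illustration of what that human-curated training data can look like, here is a minimal sketch in Python. It assumes OpenAI’s published JSONL format for chat fine-tuning; the file name and the example prompts and answers are hypothetical, not real training data.

```python
import json

# Hypothetical reviewer-curated examples: each record pairs a user prompt
# with the kind of precise, relevant answer that human feedback rewards.
examples = [
    {"messages": [
        {"role": "user", "content": "What is the capital of Kenya?"},
        {"role": "assistant", "content": "The capital of Kenya is Nairobi."},
    ]},
    {"messages": [
        {"role": "user", "content": "Summarize the water cycle in one sentence."},
        {"role": "assistant",
         "content": "Water evaporates, condenses into clouds, and falls back as precipitation."},
    ]},
]

# Fine-tuning data is conventionally stored as JSONL: one JSON object per line.
with open("feedback_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```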

Behind the scenes, however, human reviewers play a pivotal role in refining ChatGPT, and the tasks they undertake can take a heavy emotional toll. In recent years, low-paid workers in Kenya and other parts of East Africa have been hired to prevent chatbot technology from making highly offensive, obscene, or abusive statements. These workers must scour public data sets of toxic content, posts scraped from social media and internet forums such as Reddit, and content generated by prompting AI models to produce explicitly harmful, even illicit, results.

These generated outputs are necessary so that the system has enough examples of violence and abuse to learn what not to produce. By noting and flagging disturbing and illegal content, reviewers play an integral role in making ChatGPT safe for its users; however, many of them report that auditing such material is traumatizing.

For example, OpenAI reviewers were asked to categorize content by levels of sexual perversity and obscenity. The worst were descriptions of child sexual-abuse material, followed by those that included incest, bestiality, rape, sexual trafficking, and sexual slavery—sexual content that could be illegal if performed in real life. The company also asked for violent material to be categorized by several levels of severity, repeatedly exposing workers to scenes of extreme human suffering.
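
The categories those reviewers label eventually power automated safety checks. As a minimal sketch of how developers can reuse that work, the snippet below calls OpenAI’s moderation endpoint, which flags text against categories such as violence and sexual content; it assumes the official openai Python package and an OPENAI_API_KEY environment variable.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask the moderation endpoint to classify a piece of text against
# categories (violence, sexual content, hate, and so on) that echo
# the severity labels human reviewers were asked to apply.
response = client.moderations.create(input="Some user-submitted text to check.")

result = response.results[0]
print("Flagged:", result.flagged)   # True if any category triggered
print(result.categories)            # per-category boolean breakdown
```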

Several workers from Kenya said they experienced mental illness and that their work lives and relationships suffered because of this exposure. In July, some OpenAI contract workers began pushing for legislation to protect AI workers and content moderators.

Mercy Mutemi of Nzili & Sumbi Advocates, who is representing the AI workers, said OpenAI and the outsourcing companies that work with them exploited the poverty of these contract workers. Although OpenAI pays $12.50 an hour, the employees receive somewhere between $1.46 and $3.74 an hour from the outsourcing companies. Paid poorly, these employees must nevertheless read scores of violent or sexually perverse posts a day. Many are proud of the role they play in ensuring safety for others, while also stressing that reviewers should be better protected and compensated.

The Challenge of Perfect Mimicry

Tech experts have also raised worries that modern chatbots are being deployed at scale without real consideration of the possible dangers of AI. ChatGPT can mimic human writing almost perfectly, and related generative tools can create misleading images with the potential to incite violence and political turmoil, or to depict famous people engaged in criminal behavior. ChatGPT itself can be used to write emails and social media posts that persuade people to hand over their personal information or act illegally.

Safety First: Tips for Using ChatGPT

1.  Consider opting out of allowing OpenAI to use your conversations for training purposes by filling out OpenAI’s opt-out form.

2.  Do not discuss confidential information such as work-related secrets, credit card numbers, passwords, or classified algorithms while using ChatGPT; a simple redaction sketch appears after this list.

3.  Delete your history regularly. If someone succeeds at hacking your account, they will not see your conversations or searches.

4.  Download the official ChatGPT app or access via the web. Do not use knockoffs that skimp on security precautions.

5.  Create a strong password and change it from time to time.
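
To make tip 2 concrete, here is a minimal sketch of client-side redaction: strip anything that looks like a secret before the prompt ever leaves your machine. The patterns and the redact helper are illustrative only; real secret detection needs far broader coverage.

```python
import re

# Illustrative patterns only; real secret detection needs far more coverage.
PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known secret pattern before sending it on."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("My card is 4111 1111 1111 1111, reach me at jane@example.com"))
# -> My card is [REDACTED CREDIT CARD], reach me at [REDACTED EMAIL]
```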

Conclusion

The rise of ChatGPT and its peers exemplifies the boundless possibilities of AI. But, to use a well-worn aphorism, with great power comes great responsibility. OpenAI’s endeavors to ensure safety are commendable, but the journey is fraught with ethical and practical challenges. As we embrace this new frontier, it is crucial to champion both innovation and the well-being of all stakeholders in the AI landscape. The future of AI is promising, but it is our collective responsibility to steer it toward a path that upholds human values and integrity.