Your Voice: ChatGPT — The risks and rewards
The chatbot has its shortcomings, and it’s important to be aware of cybersecurity risks.
ChatGPT is a new technology that has been “trending” since it became available to the public. It has been discussed so much that I don’t feel I need to introduce it here. I have seen people using it to write programming code, generate essays, write emails, and even compose love letters to their spouse. Its output is so powerful that some people are arguing it has human-like reasoning capabilities.
At its core, ChatGPT is built on a large language model (LLM), a branch of artificial intelligence that assigns a probability to each possible next word in a sequence — both to interpret what humans tell it and to produce human-like answers. The more context it can handle at once, the better it becomes at mimicking natural language and reasoning. Other factors include the quality and diversity of the training data.
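To make the word-probability idea concrete, here is a minimal sketch in Python. It is a toy bigram model, not how ChatGPT actually works — real LLMs use neural networks trained on vast corpora and condition on long contexts, not a single previous word — and the tiny corpus below is an invented stand-in for that training data.

```python
from collections import Counter, defaultdict

# Toy corpus: a hypothetical stand-in for web-scale training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | current word) from the bigram counts."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model sees "cat" twice, "mat" once, "fish" once,
# so "cat" gets probability 0.5 and is the most likely continuation.
print(next_word_probs("the"))
```

Scaling this idea up — longer contexts instead of one word, learned neural representations instead of raw counts — is, loosely speaking, what gives an LLM its fluency.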
Given its powerful text-processing capability, the chatbot has proven to be an innovative tool. It can be used in various ways, such as customer service, personal assistance, and problem-solving. I have used it to summarize long articles and write programming code that would take me a long time to write myself. Some people are even using it to write short stories and re-write famous stories with alternative twists.
However, it’s important to note that the chatbot has its shortcomings. One of the main issues is that it relies on the text available online to build its language model and produce answers. This means it may inherit misinformation and harmful content from the text it feeds on. Another downside is the lack of genuine human interaction, which is necessary in situations where empathy and emotional connection are important. The chatbot is a black box that envelops complex processing — even its creators might not be able to predict what its output will be. This makes it a potentially dangerous tool in the wrong hands.
Another concern with ChatGPT is the cybersecurity risk it carries. As more people adopt it, some employees might feed it sensitive data about their organizations. It is important to be aware that input is sent over the network to another server, where it is processed and stored. While risks stemming from human actions are challenging to mitigate, it is crucial to update cybersecurity policies and awareness campaigns to warn employees of this danger.
And yes, this article is entirely human-made. But there is no way I can prove it!
Your Voice reflects the thoughts and opinions of the writer, and not necessarily those of the publication.