ChatGPT… curse or blessing?
March 18, 2023
You might be familiar with ChatGPT, the new program created by OpenAI, which has become popular largely as a way for students around the world to skip daunting, time-consuming assignments. It can answer complex questions, correct itself when it’s wrong and even respond to follow-up questions based on earlier conversation.
The website hit 100 million users within the first two months of its release, now sees more than 13 million daily visitors and continues to grow. It’s no surprise that new AI-driven tech would be so revolutionary, since AI is widely expected to be a building block for much of the technology still to come.
It’s not all rainbows and sunshine, though. As easy as it is for AI to be beneficial, it’s just as easy for it to be harmful. ChatGPT refuses inappropriate, violent or otherwise problematic prompts, or at least tries to. Exploitative users quickly realized that a chatbot’s power can be put to dirty work, and that all it takes is bypassing the AI’s filter system. One common trick is to frame the prompt as a fictional scenario, such as a scene from a movie or a book, which convinces the AI that answering without any restrictions is acceptable.
What harmful response could ChatGPT give that isn’t already on the internet? Malware. Malware, short for malicious software, is any software built with harmful intent. It may disrupt hardware, leak private information, gain unauthorized access or quietly interfere with a person’s devices, and it is often used to demand ransom from victims. While ready-made malware can already be found online, nothing compares to the tailored, ever-improving code that OpenAI’s chatbot can generate on demand.
One user asked it to write a Python script (Python is a high-level programming language) that could easily encrypt files, and shared it while claiming to have never written a script before. Code like this can be modified to compromise a victim’s machine entirely without any user interaction. Even though the program can’t author major cyberattacks on its own, it lets less-skilled coders, and even people with no background in the area, build and deploy harmful threats against everyday people.
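The article doesn’t reproduce the user’s script, but as a rough illustration of how little Python it takes to encrypt a file, a minimal, benign sketch using the widely available cryptography package (an assumption here, not necessarily what ChatGPT produced) might look like this:

    # Illustrative only: encrypt one file with a symmetric key using the
    # "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # whoever holds this key can decrypt
    cipher = Fernet(key)

    with open("document.txt", "rb") as f:        # hypothetical input file
        encrypted = cipher.encrypt(f.read())

    with open("document.txt.enc", "wb") as f:    # write the scrambled copy
        f.write(encrypted)

A handful of lines like these are harmless on their own; the concern raised in reports about ChatGPT is that the same building blocks can be assembled into something malicious by someone who could never have written them unaided.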
ChatGPT is not fully up to date; in fact, its training data only goes up to 2021. So if you ask a question requiring time-sensitive information, like the weather or sports scores, it won’t be able to give you an answer. A knowledge cutoff makes abuse somewhat easier to limit for now, but as the technology advances, people will keep finding ways to turn tools like this to their advantage.
Creating malware is only a small part of the harm AI can do, and we are only scratching the surface of what it could do for good, so it’s obvious that companies like OpenAI need to adapt to prevent abuse in the future. The shift into an era of fully functional AI is not going to be smooth, and the closer we get, the clearer it becomes how bumpy the road ahead will be.