What could the ethical boundaries be for chatbots?
By Shaghil Bilali
Not even two months old, ChatGPT is growing in popularity at a rapid rate. Though still an early-stage machine-learning system, the chatbot is completing some of the most challenging tasks in mere moments or minutes. The most recent development came when Pieter Snepvangers, a student at a UK university, announced that he had tested ChatGPT to see whether it could pass a difficult exam. Astoundingly, the AI composed a 2,000-word essay in only 20 minutes, for a module that would usually take a student around 12 weeks of study.
When a lecturer at the university reviewed the essay, he remarked that the writing seemed suspicious; nevertheless, he graded the paper a '2:2', or 53 marks out of 100.
It was just last month that, in a study, the same chatbot passed the final exam of an MBA course at the University of Pennsylvania’s Wharton School.
Professor Christian Terwiesch, who authored the study, raised concerns that students might use such AI chatbots to cheat on homework assignments and final exams.
Photo by Gertrūda Valasevičiūtė on Unsplash
Several universities in India and France, along with New York City’s public schools, have already prohibited the use of ChatGPT. The chatbot could be put to nefarious purposes, such as writing malicious code, crafting phishing emails, or helping hackers spread misinformation and fake news. The Snepvangers and Wharton School episodes have raised serious questions about the ethical limits of AI and who should oversee and regulate its use.
This is also a concern for Mira Murati, chief technology officer at OpenAI, the company behind ChatGPT. She was quoted as saying that she feared AI could be ‘used by bad actors.’
“Regulations of some sort may be imperative in the future, as hackers are already using the platform to create malicious applications. With this malware, hackers can access sensitive information and even steal users’ money,” she added.
Recently, a former Google employee raised similar concerns about LaMDA, Google’s competitor to ChatGPT.
We cannot overlook the worries of the creators of these technologies, and it is imperative that effective solutions are found. The creators, governments and institutions all have a part to play. Creators can implement stringent rules on user identification and the types of queries allowed. Governments and the judiciary must engage in serious conversations to draft regulations that serve the public. Institutions, meanwhile, can forbid the use of the technology for certain purposes to ensure a fair and balanced environment for their students and employees.
Although these initiatives may not completely prevent misuse of the technology, they will at least go some way toward safeguarding the public interest.