TECHNOLOGY
ChatGPT, based on the GPT-3.5 model, is a large language model (LLM)-driven artificial intelligence chatbot launched for public testing in November last year by its creator, OpenAI. It has gained significant popularity since launch for its ability to generate compelling, human-like answers to almost any question asked. The ChatGPT-human interaction is realistic and conversational in the sense that the bot can answer follow-up questions, admit its mistakes and reject inappropriate requests.

The language model (ie the brain behind ChatGPT) uses generative AI and has been trained on around 45 terabytes of textual data drawn from the web up to 2021. The bot can generate new content, surpassing traditional AI algorithms, which were largely limited to finding patterns within data and forecasting. A superior, more powerful and more sophisticated version, GPT-4, was released on 14 March and is likely to have more advanced reasoning skills than its predecessor. GPT-4 is currently being integrated into several applications with ‘good’ intentions: language-learning app Duolingo has put it to work enhancing personalised learning; assistive technology provider Be My Eyes has partnered with it to develop an app for visually impaired people; Microsoft Bing uses it to improve the search engine’s user experience; and financial platform Stripe is using the technology to ward off chatroom scammers.

AI tools such as ChatGPT have superior quantitative, computational and analytical capabilities compared to humans because they can process and analyse big data highly efficiently. This is an advantage for humans in the digital age, where machine-learning algorithms can equip human decision-makers with comprehensive data analytics.

Irrespective of its capabilities and fast evolution, the exact source of the data used to train ChatGPT is still unknown, raising serious questions about the authenticity of its responses. Moreover, ChatGPT does not explain why a specific response is generated for a given question, which raises significant concerns about the trustworthiness of each output and the transparency of its workings. The quality of ChatGPT’s responses depends on the data used to train it, just as we humans learn from our experiences in life. While AI evolution and development is focused on the common good, these limitations can pose a serious threat to human values, society and business practices if the AI bot is misused or abused.

A wingman to help humanity

Since its launch, many academic practitioners and institutions have experimented with ChatGPT. The bot has passed a Stanford Medical School final exam in clinical reasoning and exams in four law school courses at the University of Minnesota, and has answered basic questions relating to business operations based on case studies often used to teach and examine business school students. It has also co-authored research manuscripts published in academic journals and assisted academics in developing research papers.

Considering these developments, academic institutions are concerned that ChatGPT could help students complete written assignments, including essays, term papers and theses, as well as answer exam and multiple-choice questions, all of which would increase plagiarism and cheating. Currently, the demo version of the GPT-2 Output Detector, software developed by OpenAI to detect AI-generated text, offers very low accuracy. Moreover, the detector can easily be fooled by including special characters, extra words and punctuation in text generated by ChatGPT. We are therefore largely defenceless against this threat to academic integrity. While cheating has always existed in education and has been difficult to monitor, there have been several mechanisms to successfully prevent large-scale dishonesty.
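The detector-evasion weakness mentioned earlier in this article is easy to grasp with a small sketch. The `perturb` function below is purely illustrative, not part of the GPT-2 Output Detector or any OpenAI tool: it inserts an invisible Unicode character after each word, so the text looks unchanged to a human reader while a detector that relies on word-level statistics sees a different token stream.

```python
# Illustrative sketch: how a trivial perturbation changes the text a
# statistical AI-text detector sees. `perturb` is a hypothetical helper,
# not part of any real detection tool.

ZERO_WIDTH_SPACE = "\u200b"  # invisible when rendered, but present in the string

def perturb(text: str) -> str:
    """Append an invisible character to each word of `text`.

    The result displays identically on screen, yet compares as a
    different string - which is enough to shift detector statistics.
    """
    return " ".join(word + ZERO_WIDTH_SPACE for word in text.split())

original = "This essay was generated by a language model."
disguised = perturb(original)

print(original == disguised)           # False
print(len(disguised) > len(original))  # True
```

The same idea applies to the other tricks the article mentions (extra words, stray punctuation): any edit that preserves readability for humans while shifting the statistical fingerprint can degrade a detector trained on unmodified output.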
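The follow-up-question behaviour described earlier comes from a simple mechanism: the chat interface resends the whole conversation to the model on every turn, so each answer is generated with full context. A minimal sketch of that bookkeeping follows; `fake_model` is a stand-in stub, and none of the names here are OpenAI's actual API.

```python
# Minimal sketch of chat-style conversation state: every turn is appended
# to a history list, and the entire history is what the model sees next.
# `fake_model` is a placeholder for a real LLM call.

def fake_model(history):
    """Pretend model: acknowledges the most recent user message."""
    last_user = history[-1]["content"]
    return f"You asked: {last_user!r}"

history = []  # list of {"role": ..., "content": ...} messages

def ask(question):
    history.append({"role": "user", "content": question})
    answer = fake_model(history)
    history.append({"role": "assistant", "content": answer})
    return answer

ask("What is a large language model?")
ask("Can you give an example?")  # the model sees both turns, enabling follow-ups

print(len(history))  # 4 messages: two user turns, two assistant replies
```

Because the full history travels with each request, the model can resolve references like "give an example" against earlier turns, which is what makes the interaction feel conversational.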
Ambition | JUNE 2023 | 15