Limitations and concerns about using ChatGPT for academic purposes
Citing Fake References:
“It sometimes cites fake references (made-up DOIs/links) for the answers to your questions.”
“Yes, it does not give accurate citations, most of the citations are fake.”
Theme: Reliability of Information |
Participants express concerns about ChatGPT occasionally providing inaccurate references, impacting the reliability of information it generates. Some citations are even reported as fake. |
Recognition of Visual and Audio: |
“Can’t really search for images” |
“Can’t recognise visual and audio” |
“Text and Image analysis of PDF can be more eloquent.” |
Theme: Limitations in Sensory Perception, Information Retrieval, and Accuracy
Participants note that ChatGPT cannot recognize visual or audio content and cannot search for images, indicating limits in its sensory perception and information retrieval. They also point to its potential for providing inaccurate information.
Opinion-Based Answers and Explicit/Illegal Information: |
“ChatGPT cannot provide opinion-based answers and explicit or illegal information.” |
Theme: Ethical and Legal Boundaries |
ChatGPT's inability to provide opinion-based or explicit/illegal information is highlighted, reflecting the ethical and legal guidelines governing its responses. |
Accuracy and Possible Inaccuracies: |
“It is limited to information until September 2021. It also provides inaccurate answers as it uses websites on the internet to provide answers without verifying their legitimacy.” |
“May not provide the most accurate answer sometimes and needed to be corrected.” |
Theme: Accuracy and Reliability |
Concerns are raised about the accuracy of ChatGPT's responses: participants report that it sometimes provides incorrect or unverified information and needs to be corrected, underscoring the importance of accuracy and reliability.
Different Generations of ChatGPT: |
“Different generation of ChatGPT have different limitations.” |
Theme: Evolution of Capabilities |
Acknowledgment that different generations of ChatGPT may have varying limitations, highlighting the evolving capabilities of the model. |
Encouraging Manipulation or Self-Change: |
“It does not provide advice that encourages manipulation or implies that someone should change themselves.” |
Theme: Ethical Guidance |
ChatGPT is portrayed as not providing advice that encourages manipulation or self-change, reflecting its ethical guidelines. |
Plagiarism and Manipulation: |
“Easy to paraphrase and not detectable” |
Theme: Misuse of Technology |
Concerns about plagiarism and manipulation of ChatGPT's responses by users are mentioned, pointing to the potential for misuse. |
Design Limitations: |
“Can’t design a website in detail.” |
“False information, simple math errors, bugs in generated code” |
Theme: Design and Technological Capability
ChatGPT's inability to design websites in detail is noted, indicating its limitations in design-related tasks. |
Concerns are raised about false information, errors, and bugs in generated code, affecting the reliability of ChatGPT's responses. |
Dependency of Students: |
“Students will depend too much on it”
Theme: Impact on Education |
The dialogue suggests that students may become overly dependent on ChatGPT, pointing to its potential impact on education.
Limited Information and Verification: |
“No real time information.” |
Theme: Timeliness and Credibility
Concerns are expressed about ChatGPT's lack of real-time information and the cut-off year of the information it can access, with users noting that it may not provide the most up-to-date data. Participants also question the credibility of its answers, including its citations, since it draws on websites without verifying their legitimacy.
General Responses without Proper Command: |
“Your instructions have to be specific enough for it to generate the response; if not, the reply would just be an error.”
“They give very generic answers which sometimes is not the best if you want information.” |
“Without proper command, the response of ChatGPT will be too general.” |
Theme: Instruction Precision |
ChatGPT is described as giving overly general responses when instructions are vague; the dialogue emphasizes that specific, clear commands are needed to obtain accurate and useful answers.
Lack of Personal Touch: |
“It’s a robot with no emotions.” |
“ChatGPT couldn’t understand humour” |
Theme: Human Interaction |
Participants note that ChatGPT lacks a personal touch, emotion, and an understanding of humour, indicating the absence of human-like interaction.
Persuasion and Wrong Answers: |
“Information is not 100% correct, can be persuaded to give a wrong answer.” |
Theme: Potential Manipulation |
Concerns are raised about ChatGPT being persuaded to give incorrect answers, highlighting the potential for manipulation. |
Rationalization of Answers: |
“It can’t possibly rationalise any answer it gives” |
Theme: Rationalization Capability |
ChatGPT's inability to rationalize its answers is noted, indicating limitations in providing explanations. |
ChatGPT Manipulation by YouTubers: |
“A normal user can’t manipulate ChatGPT easily, but at some point many YouTubers found out that you can manipulate ChatGPT to fulfil your orders, and it’s so creepy. I think you can search for it.”
Theme: User Exploitation |
The dialogue mentions instances where YouTubers have manipulated ChatGPT into fulfilling their orders, raising concerns about misuse.
Truth vs. Lies: |
“As a large language model, it cannot tell truth from lies. This goes for both writing and checking written work. It has been known to provide citations with the wrong authors, authors who do not exist, papers which do not exist but are attributed to real authors, etc.” |
Theme: Credibility and Reliability |
ChatGPT's inability to discern truth from lies, both in generating and checking content, is emphasized, impacting its credibility. |