
Hallucinations and Overdependence on AI

Picture this: you’re using ChatGPT to do the legwork for a science research essay, and at first it seems easy. Within seconds it generates paragraphs of grammatically correct content, and it even provides a nifty list of sources for you to cite. You hand in the essay after a few minor tweaks to make it sound more human, and you get back the grade: a zero. Not because you didn’t write the essay, although that certainly didn’t help, but because of something even stranger: none of the sources exist! The books listed are completely made up, along with the page numbers, author citations, and everything else.

The scenario isn’t as improbable as you might think. Often referred to in the industry as “hallucinations,” this phenomenon occurs when AI software generates errors with ostensible certainty, producing results, figures, information, or citations that aren’t backed up by anything substantial. A recent example emerged when Bard (Google’s AI language software) was tested on an episode of 60 Minutes. In a moment that mirrored the scenario above, “James Manyika [Google’s Senior Vice President of Technology] asked Bard about inflation. It wrote an instant essay on economics and recommended five books. But days later, we checked. None of the books [are] real. Bard fabricated the titles.” (CBS News). Beyond the obvious question of how often this happens (more often than we’d like), the first question that arises from this example is simple: how does it happen?

To answer that, it’s important to note that many AI models, including Bard, operate by predicting the most likely next word or phrase. If you’ve ever used the feature on your keyboard that suggests words as you type, you’re familiar with a scaled-down version of this technology. Bard, and AI language models like it, synthesize all the language they can find on the internet related to a subject and use that to predict what comes next. Occasionally, this can lead to very strange results. The model may appear to express emotion, doubt, or self-awareness, but this is just a result of patterned predictions: our language is filled with emotion, so the predictions will be too. It’s essentially a complicated form of mirroring. While it is unclear exactly what causes these hallucinations (even from Google’s side), the answer may lie in this predictive aspect of how the software works. The model may invent plausible titles for books on, say, inflation based on the vast library of text it has access to, and it may generate plausible author names or page numbers from that same information.
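To make that idea of “predicting what comes next” concrete, here is a minimal sketch in Python of a toy next-word predictor built from simple frequency counts. It is only an illustration of the general principle: the sample text and the predict_next function are invented for this example, and real systems like Bard use far more sophisticated neural networks trained on enormous amounts of text.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word tends to follow
# which in some sample text, then "predict" the most common follower.
# Real models are vastly more complex, but the core idea -- choosing a
# statistically likely continuation -- is similar.

sample_text = (
    "inflation rose last year and inflation is expected to rise again "
    "because prices rose and wages rose more slowly"
)

words = sample_text.split()
followers = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if never seen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("inflation"))  # e.g. "rose" -- a word that followed "inflation" in the sample
print(predict_next("unicorn"))    # None -- the toy model has never seen this word
```

Notice that the toy model never checks whether its output is true; it only reports what is statistically plausible given what it has seen. That, in miniature, is why a predictive system can confidently produce a book title that sounds right but does not exist.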

This is just one theory, though; there is no clear consensus yet on where this behavior comes from. This uncertainty is sometimes called a “Black Box” at Google, denoting its unknown quality, and it can also lead the AI to develop abilities it was never explicitly prompted to have. In the same episode of 60 Minutes, after just a few prompts in Bengali, Bard “taught” itself the language and was able to demonstrate fluency. This, combined with hallucinations, has the potential to spread large amounts of misinformation to users. For now, one of the solutions in place is a “Google it” function that lets people verify what Bard tells them about a topic, alongside filters for hate speech and explicit political bias. Even with that in mind, the potential for error is startling, to say the least.

AI has many useful applications. The same episode outlines some potential benefits in the modern workplace: helping doctors consider medical conditions they may have overlooked as causes of a patient’s symptoms, developing new strategies for chess players, finding new ways to maneuver around a soccer pitch, and so on. But as students and lifelong learners, we have different obligations and responsibilities. When presenting work, there is an obligation for the work to be original and for the content to be accurate and based on verifiable facts. AI has the potential to rob us of that. So it’s important to remember that even though AI has massive potential to streamline many areas of our work and lives, it is not a substitute for human resourcefulness and originality. For all their speed, AI models are ultimately modeling language and text generated by humans, not by other AI. Depending too heavily on them for schoolwork could lead not only to plagiarism but to misinformed plagiarism. For the moment, there’s still no replacement for the human mind.

