This reminds me of when my school teachers would rage against Wikipedia and tell us not to use it at all.
The issue, however, was never Wikipedia. It was how my classmates used it: we just copied and pasted whatever we saw there and never checked whether the information was correct.
Too many users misuse AI the same way. They write vague prompts, ask no follow-ups, and copy and paste whatever ChatGPT spits out without checking the sources. Lawyers have been caught being this lazy when they submitted court papers that cited non-existent cases.
And we must be honest with ourselves. Humans take shortcuts, have biases, and make mistakes that lead us astray in our work too; this isn't unique to AI. But humans have systems to check each other's work. We cite people in the field, we do peer review, and we have other humans contribute to our papers and check them as well.
Universities knew students could be lazy long before AI; that's why they spend money on software that checks for plagiarism.
If we can recognise that humans are fallible and that we need systems in place to make sure our output is accurate, then why expect AI to be error-free 100% of the time?
It's like citing an article in some journal and never being critical of the source you're using. Are you checking whether you're presenting the paper's arguments correctly? What's its methodology? Have others responded to it with their own perspectives or criticism?
I think people expect too much from AI. The companies that make these models hype them to the point that people forget they're still tools. And tools have their purposes but also weaknesses one must be aware of; you don't use a hammer to cut wood.
So if human output needs to be verified, we should expect AI output to be verified too.
My philosophy professor once complained that while writing his book, he was shocked to find that many of the academic papers he looked up cited a source incorrectly or misrepresented and misquoted material in the cited source.
He found it frustrating. Even in peer-reviewed journals, mistakes happen.
Just because something is written by a human doesn't mean we should trust it without checking.
For me, when I ask AI for books to read on a certain topic, my follow-up is always: "check online and link the books and articles you've recommended."
This always gives me real books and articles I can use as a starting point on a topic.
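If you wanted to automate that habit, a minimal sketch might look like the following, assuming the OpenAI Python SDK and the requests library; the model name, prompts, and link check are illustrative choices of mine, not a fixed recipe. The point is the last step: confirming the links resolve instead of taking them on faith.

```python
# Sketch of the "recommend, then verify" workflow described above.
# Assumes `pip install openai requests` and an OPENAI_API_KEY in the
# environment; model, prompts, and topic are illustrative only.
import requests
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "Recommend three introductory books on constitutional law."}]

def ask() -> str:
    """Send the running conversation to the model and record its reply."""
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

ask()  # first pass: get the recommendations

# The follow-up from the post. This only helps if the model can actually
# search the web; if it can't, it may invent plausible-looking URLs,
# which is exactly why the verification loop below exists.
history.append({"role": "user",
                "content": "Check online and link the books and articles "
                           "you've recommended. Output one URL per line."})
answer = ask()

# Don't trust the links either: confirm each URL actually resolves.
urls = [word for line in answer.splitlines()
        for word in line.split() if word.startswith("http")]
for url in urls:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        print("OK " if status < 400 else "BAD", status, url)
    except requests.RequestException as exc:
        print("BAD", url, exc)
```

A resolving URL still isn't proof the book matches the recommendation, so I skim the page myself too; the script just filters out links that are outright fabricated.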
And speaking of hallucinations and AI accuracy, I remember how an AI model (Google's NotebookLM) made a mistake when it summarized Marbury v. Madison.
It stated that the Supreme Court found Madison had acted illegally by refusing to deliver Marbury his commission, and that this was because the Judiciary Act of 1789 did not empower the Supreme Court to force Madison to hand it over.
To me, this summary looked accurate. Then I checked it online and found the AI had made a subtle mistake, small enough to miss. The Supreme Court did find that Madison acted illegally. But the Judiciary Act did empower the Supreme Court to force Madison to deliver the commission; what the Court held was that the part of the Act granting it that power was unconstitutional, because it gave the Court powers beyond what the Constitution allows.
It's such a small mistake, and I think that's why most hallucinations are hard to catch. AI is really good at finding material and producing a polished piece of writing.
But the problem is that AI can make a small mistake that sounds reasonable, and most people don't double-check because AI writing, for the most part, passes the sniff test.
It can be 98% correct, but the 2% is critically wrong. Terence Tao made this observation about AI-written proofs: AI can write a convincing proof but then make a small mistake that a human would likely never make.
This is why I think it's important for people to check AI output even if it sounds reasonable at first glance. The same goes for human writing too.