Honestly this is the real problem. As academia turns into more and more of a mandatory credential mill, graders become more and more overworked. The time needed to sift through a flood of AI generated nonsense and pick out a mediocre handwritten paper from a mediocre generated paper just isn’t there.
Not to mention that there’s no percentage in giving a shit about cheaters (and of course, if the cheater’s on the football team, the Honor Board will arrange for the paperwork to be lost anyway). Because college is ultimately a place where you get the piece of paper that says you’ve got a degree, you’ll get graders and professors who shrug and let their students coast through college and then crash in the real world. They get paid the same either way.
Counterpoint: one of my professors (or whatever his actual title is) is currently considering changing how his courses are graded and asking for feedback on it, because an increasing number of students are doing the weekly programming exercises with ChatGPT.
Should be an automatic F if they’re using AI to help write out something for a class.
Some high schools are already doing that, as far as I'm aware.
We use different grading systems, though. There's no “F” in any of them. Not that I don't get what you mean, but it would be hilarious (and probably cause a system error).
They should really be doing their programming assignments the traditional way, by copying code segments off of stackoverflow.
That's still pretty much what they do, technically; it's just that the copying and pasting from Stack Overflow is done by an artificial middleman (who sometimes hallucinates non-existent APIs while at it).
With all the discussion here, I wonder if people can tell “duplicate work” from the original?
If people use “automated intelligence” and claim it as original, will anyone know?
It’s not always easy to tell, no. That’s part of the problem.
It's a fantasy world over which we have no control, and trying to define real life and identify reality is going to get harder and harder, even though computer technology and AI are finite. There is a limit to what they can do simply because it's computer code, and it's only a matter of time before hackers have trained themselves well enough to start attacking it at a serious enough rate to cause total chaos. The fallout will depend on how much AI is embedded in our educational, economic, and political structures. Considering Australian banks are seriously trying to wipe out cash at the moment, it's not a pleasant prospect, given how often their POS is offline!
Since I’ve posted my fair share of skeptical pieces on this thread:
I should note that DeepSeek’s breakthrough has nudged my priors significantly closer to the argument that AI is, for better and worse, going to be transformative. As long as the brightest minds in Silicon Valley seemed to be insisting that the transformative use case for the tech was just a trillion+ dollars and a mid-sized country’s consumption of electricity away, it felt to me like just another wasteful blockchain/metaverse hype cycle. But being able to do it with vastly less cost and compute is a game-changer.
I’ve got a lot of respect for Yascha Mounk, and his recent newsletter makes the case pretty well:
I've been at a work conference for the last few days, and the upper echelons of our academic leadership have been using ChatGPT extensively to challenge their assumptions, write test questions, and brainstorm. I think there is more there there than I originally assumed as well.
The thing that concerns me is that LLMs are just cleverly repackaging what is already out there. I think we'll be doing less original thinking in the long term as a consequence. There is also some value in just muscling through the research on your own.
I'd argue it already was transformative in some areas of life. Many people (the author of that article included) use ChatGPT for every minor task and more. AI images have long drowned out actual art and actual photos in terms of volume, making search engines an absolute pain to use. Companies of all kinds, from law firms to tech ventures, are increasingly less likely to offer entry-level jobs since LLMs can perform many tasks previously reserved for interns.
In case that wasn’t clear from my tone, I definitely consider this transformation “for worse” in nearly every way. I’m starting to think that maybe anarcho-primitivists were right all along and agriculture was a mistake, since it led us in a pretty depressing direction. We should have stuck to throwing spears at one another and told Prometheus to keep that “fire” thing to himself, at least then I wouldn’t have to doubt the authenticity of cave paintings.
The problem, imo, is the human factor. AI could become a helpful tool, but when combined with selfishness and greed, it could also enable further exploitation.
And that means I’m worried from another perspective, too. I hear about more and more people using AI to search up info, sum things up for them, stuff like that. We might be losing our ability to research and think things through on our own in favor of quick, easy answers. I am worried about literacy/reading comprehension levels falling, and critical thinking skills never getting developed - making people easier to control.
It’s not just about losing jobs, I think, not really. In the right hands, it could be so much better… but in the wrong hands, it could be so much worse.
Add to that that in so many cases generative AI is made by exploiting far more people than one might think. It's not just artists/researchers/etc getting their work stolen, butchered, and sold back with all the wrong parts; it's people in the global south working for a wet fart to do all the tagging and sorting, just so someone can have shoddy ‘fanart’ of their fav snogging on top of the Eiffel Tower in about 2 hours instead of commissioning an actual artist who wouldn't give them 27 fingers on their legs.
I mean, it costs 20 dollars per month for now, and although DeepSeek is doing it with less cost and compute, the figure people throw around that it only cost 6 million dollars is very much a misunderstanding.
I would say the Deep Research feature published by OpenAI is very impressive on the upper end of things; it researches well, and I'm very interested in seeing its implications for writing. People might actually be tempted to spend $200 per month for this.
I’d have thought you’d hit a ceiling pretty quickly on the number of people willing to pay $200 per month for software, even if it’s pretty useful. Maybe I’m misleading myself by mentally comparing that to the cost of an Adobe or MS Office license, but if you need to justify billions upon billions in investment, I’d have thought a consumer audience of that scale is what you need. A JSTOR-type model where you charge a few thousand institutions thousands of dollars a year doesn’t seem like it would scale.
But I don’t work in the field, as I think I remember you saying you do. What comparable software are people shelling out $2400 a year for? Who’s the audience you think would do it for this OpenAI product?
Dunno about comparable, but 3ds Max is in that ballpark price-wise.
AI Dungeon (a roleplaying AI chatbot) had to make a specific subscriber tier for people who wanted big models with high context - the highest tier is roughly $1,000 a month.
You’re right that there’s a ceiling on the number of people who’d go for that, but whale hunting has always been a profitable business model.