There are no limits to the stupid things companies will try in order to save money and make sure the people at the top get the maximum profit. In fact, the willingness to cut corners and make a thing worse for more profit sometimes seems inversely proportional to the budget.
Also, knowing some freelance artists/designers, what they get paid is a fraction of the sum charged to the customer (sometimes people get cc’d in by mistake while prices are being discussed). So contracted artists cutting corners because the customer didn’t care isn’t out of the question either.
One of the things I always look at when assessing for AI is the light source. AI is generally dreadful at it. Although these examples are far from perfect, they aren’t altogether terrible either. In some, the light sources almost look like different pieces of existing artwork smooshed together to save time/effort; other stuff just looks rapidly sketched out without revisions. That extra shadow isn’t a shadow, it’s a ridge on the roof; you can see it in the shadow cast by the roof behind (which, admittedly, does have a light source issue). The modern jacket and fishing bucket are debatable; I’m not convinced. The girl’s legs are just drawn weirdly close together for someone pushing a cart. The cat is weird, though. The wall with the crucifix and the background with the huge moon feel like reused artwork or just carelessness (probably the former). As does the copying of the stumps.
Yes, so do I. Someone really should have looked at these and said nope, for such a high-profile project no less.
Yeah, I’d say that’s likely. It’s a shame if they were underpaying their artists. That said, I wouldn’t want my name attached to artwork like this if I were a professional artist either; it would be pretty bad for your reputation and future work opportunities.
Tragic news today for a lot of AI music creators and lovers. This is why we need open source models and the freedom to train on publicly available data; hopefully the Chinese develop something.
It’s insane that they think any more than, like, 10% of artists are going to opt in for that. It is extremely obvious that the vast majority of professional creative voices hate AI (which, valid, because honestly, same), and none of the artists who do opt in are going to be popular anywhere. Even if companies start making contracts that require opting into the service to get production deals, artists will just go to a company that doesn’t have that stipulation. How do they envision this working out as anything other than losing absolutely insane amounts of money running an AI program that can no longer scrape from the most popular music available to it?
Though, I guess to be fair, unless they restart its knowledge from scratch, it already has all that information from all those artists, so there’s that…
Of course no country is a hivemind, but it’s funny: I’m a huge AI supporter to an extreme degree, I came from China and moved to the US when I was like 13, and I still hate communists, Chinese brain-dead nationalists, and the Chinese government. So maybe this pro-AI sentiment in the East is not really about socio-economic beliefs, conformist thinking, or people liking their government more, like some Western people are saying.
I think if you looked at Anglosphere tech workers, you’d see a big shift to the right of the graph. My guess is that’s probably more of a factor than ancestry in your case.
Well, it’s less about bloodline and more about the societal environment in which you grew up, which has a huge influence, and 13 is not that early.
And I’m not really a tech worker, more like a tech academic at this stage. Even then, pro-AI sentiment was more the cause of my career choice (way before the whole LLM thing) than the consequence, so I don’t think heritage can be dismissed that easily.
Maybe if I had been born in the US, I would just be an average AI bro instead of an AI cultist, but who knows, really.
Maybe AI is just a generally exciting technology, and most Chinese just don’t give a damn about the downsides. Like, what’s even copyright? Never heard of it. People losing jobs? Life sucks and you’re on your own; we weren’t doing anything about it in the first place anyway. Got mine. Human creativity? What is this “creativity” you speak of? Government oppression? The Chinese government is already the US times 10; anything the Americans fear will happen already did, and worse. Tell me something I don’t know. Human rights? Not really a thing.
I think your last paragraph is pretty much spot on. A lot of the non-science-fictional downside risks Anglophone critics dread – being reduced to drudgery, disregard for IP, computer-assisted authoritarianism – are everyday realities in much of the world. And one of the biggest upsides of better AI, so far, is its ability to facilitate profitable interactions with the English-speaking world by hugely reducing the barriers for non-Anglophones. My neighbors here in Nepal aren’t in your survey, but I’m sure the majority of them would be more excited than nervous too. ChatGPT has already helped one of them finish a MA degree and get a promotion in his job.
I guess I’m more European – not super nervous, not super excited – in large part because I still think we’re headed for a bubble-burst and retrenchment, rather than either the worst-case or best-case scenarios. A lost decade or three while AI researchers try to figure out a pathway to a reliable general-purpose agent (and probably end up just enhancing a bunch of specific-purpose AI tools instead) strikes me as more likely than exponential change in either direction. The march of automation continues, and humans continue to adapt to it, with benefits and strains on a broadly familiar scale.
Today I stumbled across a pro-AI piece by Noah Smith (who’s much, much keener on the tech than I expect ever to be) where he mentioned that in his view one rational concern about AI is its extraordinary electricity consumption. His source was this MIT Tech Review article from May, which I hurried to check to see if it turned up radically different info than what I’d previously seen.
It’s certainly written in a tone of grave concern. But after various usage comparisons involving hours of microwave oven usage, the best estimate the authors offer is that OpenAI – the AI company with by far the biggest global user base – currently consumes at least as much energy as 13,700 American homes. That’s a picture consistent with the “couple of Brainerds” scale I suggested above for LLM-world overall.
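For scale, here’s my own back-of-the-envelope conversion of that figure. The ~10,500 kWh/year average for a US household is my assumption (roughly the EIA number), not something from the article:

```python
# Rough scale check on "at least as much energy as 13,700 American homes".
# The per-home average is an assumed figure (~EIA's), not from the article.

HOMES = 13_700
KWH_PER_HOME_PER_YEAR = 10_500  # assumed average US household usage

total_kwh = HOMES * KWH_PER_HOME_PER_YEAR
print(f"{total_kwh / 1e6:.0f} GWh/year")              # ~144 GWh/year
print(f"{total_kwh / 8760 / 1e3:.1f} MW continuous")  # ~16.4 MW average draw
```

A continuous draw in the mid-teens of megawatts is real money, but it’s modest on grid scale, which squares with the small-town framing.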
The authors immediately follow that estimate with, “But here’s the problem: These estimates don’t capture the near future of how we’ll use AI.” And that’s where I can see why Noah Smith thinks this is a rational fear. Even more than the MIT researchers, he believes in the coming AI boom – so he to some extent shares their concern that the amount of inference that’ll go into “AI agents” that replace human knowledge workers will be immensely electricity-hungry, and that the scale of data-center construction is going to increase steeply to feed the demand for those agents.
But if “agents” are both energy-hungry and unreliable in ways we’re still decades from fixing (my bet!), then we’re on track for a bust, not a boom. The near future of AI is a retrenchment from current usage levels, as companies can no longer afford to lose money on queries and a bunch of existing data centers go cold for lack of demand for their highly specialized hardware.
Anyway. The Noah Smith post also links to a recent piece by Andy Masley, the guy whose math originally convinced me that AI’s environmental impact wasn’t far different from other industries (and on a different order of magnitude from crypto). His more recent piece is on water use, and worth reading for its specific rebuttals of often-linked news articles on the topic.
So for anyone keeping score, I dislike LLMs because
- they’re trained on what is (morally if not legally) massive-scale IP theft, transferring value from a vast number of human creators to a small number of tech company owners
- their results can be helpful and/or entertaining, but aren’t reliable enough to be worth paying anything close to their costs
- they pollute the knowledge ecosystem by confidently introducing slop as fact
- they interact badly with people who are vulnerable to sycophantic reinforcement of their self-harming ideas or obsessions
- they’re at the center of a stock market bubble that could cause a global crash even worse than 2008
But not because of their energy and water usage, which feel to me like a bit of a sideshow. Masley says it best:
It is my understanding that it’s not the prompting that causes most of the electricity usage, it’s the training. (After all, you can run an LLM locally, but training one would take forever.) But I don’t have the time or energy to dig for research, so take that as you will.
(I would add “DDoSing innocent websites” and “taking away entry-level IT jobs” to the “why I dislike LLMs” list, though.)
I’d totally thought that myself – but that MIT paper suggests that no, inference burns more power than training.[1] Which is one more reason this isn’t likely to have the same pathway to success as earlier IT booms: Google and Facebook weren’t adding significant costs with each new marginal user.
“As conversations with experts and AI companies made clear, inference, not training, represents an increasing majority of AI’s energy demands and will continue to do so in the near future. It’s now estimated that 80–90% of computing power for AI is used for inference.” ↩︎
Huh, that’s hard for me to understand unless there really is nothing there in terms of power consumption. We can install agents on existing closed-network infrastructure in the DoD, and we certainly aren’t upgrading it when we do the installation. That suggests the power consumption is essentially “always on”, and the 80–90% inference figure comes from inference being what LLMs are doing 80–90% of the time. That doesn’t quite track either, though, because we know they’re spending hundreds of billions on compute.
It’s mostly focused on the big unknowns underlying calculations of AI power use, and the demonstrated pattern of companies busting their carbon targets in the effort to power their new investments.
I still suspect that catastrophism here would have to rest on the assumption that this is the start of a boom rather than a bubble (and the parts of the post I find most compelling, which describe companies forcing AI into products even when that makes them less efficient and reliable, do nothing to change my mind on that front). If you think the former, then yes, it’s reasonable to be concerned that AI is going to demand a vast amount of additional electricity in the coming decade.
I wish journalists trying to hold corporations accountable would be a little less dogmatic and polemical in their writing. Punishing Google for being the most transparent of the tech companies that voluntarily release their energy consumption data seems entirely counterproductive. The tweet he used as evidence, which cited a percentage increase, is hard to take at face value: an 83% increase on a tiny number is a slightly bigger tiny number.
“I want companies to stop lying with numbers and here’s why” (supported by a lie with numbers.)
The circular firing squad among climate activists is especially disheartening.
Sorry for the late reply, meant to reply earlier but the back half of November got super busy.
That’s fair, I don’t necessarily agree with all the circled points myself. With the legs, if it’s not AI, it sure seems like the artist is deliberately choosing to draw them weirdly. Once could be a fluke. Twice, less so. Five times seems deliberate.
Strange artistic style? AI? As I said upthread, I’d lean towards AI due to the nature of the errors (hoofbeats and horses) but only the artist knows the truth.
Revisiting this, I’d never actually tried AI image detectors, so out of curiosity I gave them a spin.
These are the 18 images (I think this is all of them, but there could be more) from the book that have been posted on social media:
The first thing you’ll notice is that results vary between the detectors, sometimes widely, but they aren’t completely uncorrelated with each other or with themselves. I admit I went into this pretty skeptical of AI image detectors; I cynically thought there was a chance some of them were nothing more than a Math.random() wrapper for rubes.
But it looks like the big ones are actually doing something. That something may not be any good, but credit where credit’s due; I was overly cynical.
So: are they good?
rated on a scale of 1-5, 5 being confidently yes, 1 being confidently no ↩︎
(part II - Discourse didn’t like how many images I had)
Are you smarter than an AI image detector?
I’ve collected a set of twelve images, some AI, some human. To minimize any potential harms, the non-AI images are (1) widely posted commercial images from at least a decade ago or (2) public domain images that are part of all the big open-source data sets. The generator and detector models I’m testing were almost certainly already trained on these latter images. I ~~stole~~ borrowed the AI images off AI art subreddits[1].
If you’d like to play along at home, try guessing which ones are AI-generated and which ones are not. This isn’t necessarily meant to be hard – or easy.
If you’re feeling really enterprising, rate the confidence of your predictions on a scale of 5 to 10. 5 means you think you’d be right 50% of the time, essentially saying you have no idea and are basically guessing – no better than a coin flip. 10 means you’re absolutely certain and think you’d be right 100% of the time. 7 for 70% and so on.
Well, I think we can confidently say Illuminarty, Decopy, and WasItAI are terrible. I don’t know what a good error rate would be, but above 30% is too high to be useful.
Hive and Sightengine classified all the images correctly, but this is a small and unrepresentative sample. Due to the way I chose the human art, most if not all of it was in the training data. The best we can say for them is we don’t have evidence they aren’t good.
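One way to put a number on that (my own framing, not anything from the detectors): with zero mistakes in twelve trials, you can still only bound the error rate loosely.

```python
# With 0 mistakes in n = 12 trials, the exact one-sided 95% upper bound
# on the true error rate p solves (1 - p)**n = 0.05. My own framing of
# "no evidence they aren't good", not anything from the detectors.

n = 12
upper = 1 - 0.05 ** (1 / n)
print(f"95% upper bound on error rate: {upper:.1%}")  # ~22.1%
```

So even a perfect score here is consistent with a detector that’s wrong on more than a fifth of images; it would take a much larger sample to distinguish good from lucky.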
Returning briefly to the art book, if we exclude the poorly-performing models, we get:
| Image | Hive | SightEngine | AVG |
|---:|---:|---:|---:|
| 10 | 99.9 | 99 | 99.45 |
| 11 | 99.9 | 99 | 99.45 |
| 14 | 99.9 | 99 | 99.45 |
| 18 | 99.9 | 99 | 99.45 |
| 15 | 99.8 | 99 | 99.4 |
| 5 | 99.6 | 99 | 99.3 |
| 9 | 98.3 | 99 | 98.65 |
| 12 | 95.1 | 67 | 81.05 |
| 1 | 99.4 | 52 | 75.7 |
| 6 | 99.4 | 45 | 72.2 |
| 13 | 98.1 | 6 | 52.05 |
| 7 | 0.1 | 98 | 49.05 |
| 8 | 40.6 | 18 | 29.3 |
| 3 | 13.6 | 37 | 25.3 |
| 17 | 44.6 | 2 | 23.3 |
| 2 | 1.3 | 6 | 3.65 |
| 4 | 4.8 | 2 | 3.4 |
| 16 | 0.1 | 1 | 0.55 |
You can see they mostly agree, with serious disagreement on around 6 images. There were 7 images that both classified as highly likely to be AI-generated. I honestly don’t know what to make of it, but I find it interesting nonetheless.
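If you want to put rough numbers on “mostly agree”, here’s a quick sketch over the table above; the 30-point gap defining “serious disagreement” is my own arbitrary cutoff:

```python
# Hive and SightEngine scores (AI-likelihood, 0-100) from the table above,
# keyed by image number. The 30-point gap is an arbitrary cutoff of mine.

scores = {
    10: (99.9, 99), 11: (99.9, 99), 14: (99.9, 99), 18: (99.9, 99),
    15: (99.8, 99),  5: (99.6, 99),  9: (98.3, 99), 12: (95.1, 67),
     1: (99.4, 52),  6: (99.4, 45), 13: (98.1, 6),   7: (0.1, 98),
     8: (40.6, 18),  3: (13.6, 37), 17: (44.6, 2),   2: (1.3, 6),
     4: (4.8, 2),   16: (0.1, 1),
}

disagree = sorted(i for i, (h, s) in scores.items() if abs(h - s) > 30)
both_high = sorted(i for i, (h, s) in scores.items() if h > 90 and s > 90)

print(disagree)   # [1, 6, 7, 13, 17] -- big gaps between the two detectors
print(both_high)  # [5, 9, 10, 11, 14, 15, 18] -- the 7 both call likely AI
```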
And finally, the confidence rating. That’s from David Spiegelhalter’s The Art of Uncertainty[3] and is a simple metric to rate predictions. Here’s how you score it:
| If you rated it… | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---:|---:|---:|---:|---:|---:|
| …and were Right | 0 | 9 | 16 | 21 | 24 | 25 |
| …and were Wrong | 0 | -11 | -24 | -39 | -56 | -75 |
Once you add up your points for all twelve images:
- A large positive score means whatever analysis you did was pretty good and you correctly assessed that it was pretty good.
- A small positive score means you were cautious – you were cognizant of your own uncertainty.
- Zero means you had no idea about any of the images and didn’t think you’d do any better than someone picking at random.
- A negative score means whatever analysis you did was less effective than picking at random and rating everything a 5.
- A large negative score means you were inaccurate and overconfident in your analysis.
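For what it’s worth, the Right/Wrong point values in the table are consistent with a quadratic (Brier-style) scoring rule, 25 - 100*(p - outcome)^2 with p = rating/10, which I believe is what Spiegelhalter is using. A minimal sketch that reproduces the table:

```python
# Reproduces the scoring table above, assuming it's the quadratic
# (Brier-style) rule: a rating r in 5..10 is a probability p = r/10,
# and the payoff is 25 - 100 * (p - outcome)**2, where outcome is 1
# if you were right and 0 if you were wrong.

def score(rating: int, right: bool) -> int:
    p = rating / 10
    outcome = 1 if right else 0
    return round(25 - 100 * (p - outcome) ** 2)  # round() kills float noise

for r in range(5, 11):
    print(r, score(r, True), score(r, False))
# 5 0 0 | 6 9 -11 | 7 16 -24 | 8 21 -39 | 9 24 -56 | 10 25 -75
```

A nice property of this rule is that it’s proper: your expected score is maximized by reporting your honest probability, so rating everything a 10 only pays if you really are never wrong.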
How many did you get right?
0 (0%)
1 (8.33%)
2 (16.67%)
3 (25.00%)
4 (33.33%)
5 (41.67%)
6 (50.00%)
7 (58.33%)
8 (66.67%)
9 (75.00%)
10 (83.33%)
11 (91.67%)
12 (100.00%)
While it’s possible someone might lie and post human-made art to AI art subreddits, this seems like such a remote possibility I’m discounting it. ↩︎
I highly recommend it if you’re interested in probability, forecasting, or epistemology. Written to be accessible to all audiences, with math in the footnotes if you’re into that kind of thing. ↩︎