The AI’s problem: that’s exactly how most of the world describes Finnish.
That is hardly the result you’d want from translation software though! Even the old Google Translate was better. Even if the sentences were nonsense, it wasn’t making up new words with letters that don’t even exist in Finnish!
(Or worse, turning the text it’s given into nonsense!)
Interesting things are happening in Albania.
I have a question: are format generators considered AI? I have a funny concept that I want to do, but formatting it will be hell.
“Genomic language models” are (like protein folding modellers) a good example of why I think it would be a mistake for GenAI opponents to talk as if AI in general is bad.
Because they’re not trying to talk to humans in natural language, these AIs don’t have to be trained with the same indiscriminate plunder of copyrighted work as LLMs or art generators – just on massive quantities of genetic information. And the medical potential that’s unlocked by coming up with evolutionarily novel bacteriophages is worth spending a bunch of electricity on.
(Also, terrifying bioweapon potential. But the power to ‘defuse’ bacteriological bioweapons and novel pathogens doesn’t come without a flip side. I’ll still take it.)
Yeah, and the pattern-recognition-based analysis AIs are something that could be really useful, because they recognize those patterns earlier than humans do. I recall something about one analyzing… MRI scans or something? That was effective at noticing cancer? I don’t remember; it was a while ago and I can’t for the life of me remember where I saw it. (But it does make me wonder why you’d want to use an LLM for it.)
Minor convenience (the ability to give requests and receive output in fully fluent human language)
and/or
The hope of entirely replacing human jobs in this area with AI “agents” (which is in theory easier to do if the AI’s outputs don’t require specialist interpretation)
and/or
The dream that AGI will emerge from LLMs if we get them doing enough stuff.
One or more of the above, I think.
I wonder what proportion of this phenomenon is related to AI or perhaps more generally automation?
Alright, just a heads up. I have used Grammarly for years and years, and up until now I have been alright with being able to customize the Pro version and turn off anything that smells of AI. But no more. They are rebranding, going all in on AI, and I am out for good.
As someone who has been paying for Grammarly Pro for editing, should I be looking for alternatives? I don’t want my work associated with AI, but I am not sure if there are any big tools that don’t use it in some capacity. Even ProWritingAid states that they use AI.
I wish I knew. At this point I am depressed and hoping that this will all have crashed and burned by the time I need things for final editing again…
That is, at minimum, manslaughter, and the people who created the service should be going to prison for it because we cannot prosecute the service itself.
People sit and watch YouTube videos of other people on dates, other people enjoying basic time with their family, people playing video games, and you’re shocked about this?
I feel like if boomers made fun of millennials and younger for not being able to form real-life connections, they’d be correct. My mom was never at home in high school, or in her twenties on the weekend. Meanwhile, I know plenty of twenty-year-olds, including college students, who have no friends or social life. They use Twitch streamers, YouTubers, etc. as replacements for these things.
The internet has helped a lot of weird people connect with each other, and in some ways that’s good. The bad thing about the internet is that a lot of these people are never forced to connect with a person in real life and are incapable of real-life interaction. In the ChatGPT subreddit it’s scary: one guy asked ChatGPT to tell him his flaws, and ChatGPT basically said he is too cool for society.
I really don’t think it’s a good thing that so many young people, especially men, are disillusioned and not connected to society at large. If this keeps up, it leads to violence and civil unrest.
I don’t think they will have a case. Almost every time this has happened, it gets dismissed, especially once the messages the person was sending the AI get released.
Like, one guy used role play and essentially tricked the AI into giving him a suicide method, but it was supposed to be for a story, not for the man in real life. His family then tried to sue but was unsuccessful.
What’s the current stance of HoG regarding using GenAI as rubber ducks? Meaning bouncing ideas off LLMs but not copy-pasting the output directly into the story. Using it as inspiration for scenes regarding visual, auditory, and olfactory cues, for example.
HG has no official stance on that. There’s also (I’d guess) about a 0% chance that they’ll articulate a stance on that, or on any other use of AI that’s not clearly generative.
There’s no upside to the company saying anything nice or even neutral about AI, given how many people in their customer and writer base would be furious if CoG/HG explicitly or implicitly endorsed the use of LLMs for anything.
At the same time, I’m sure the absolute last thing they’d want would be to increase the “AI policing” workload of HG staff. It’s hard enough to gauge whether text was generated by AI, which they do anyway because of the copyright implications. Trying to enforce a ban on using it for coming up with ideas would be ridiculous.
Proceed based on your own moral judgments – and the knowledge that a not-insubstantial portion of the fan base will boycott you if they find out you used AI for anything – not based on HG rules.
This is also what I understood from the tone of their official statement.
This is a separate topic that I’ve been trying to get a read on. On the one hand, I’ve seen a thread where an author was lynched for using AI, when it was obvious that no LLM could possibly have the training data to generate the author’s style and comparison choices. But on the other hand, I saw a WIP thread where the author disclosed that they used LLMs for gathering research materials, and no one seemed bothered by it.
All in all, it seems like it’s best to stay away from such tools. Thank you for your reply!
I must admit, some of these images look pretty suspect. I can see why people are flagging them as AI, but if you run them through AI detectors, that’s not what they say. Could it just be that the artist phoned it in and didn’t pay attention to detail, or reused pieces of existing artwork they had in the images? (I mean, it looks like a real woman (not a carving) on the front of the boat, no sigils, inaccuracies in how things are described vs. drawn, weird glitchy-looking sections of images, etc.) Strange either way, considering the budget that should have been applied to a project like this.