Social, Ethical, and Political Statements of CoR

I wasn’t going to respond to all the points of your long post, partly because I wasn’t sure I had time, and partly because it seemed a little unfair for me to weigh in on the discussion. But, I do have a little time at the moment, and you clearly feel comfortable disagreeing with me about things, so I thought I would respond briefly to each of your main bullet points.

(For other readers: Spoilers ahoy, proceed at your own risk.)

CoR endorses pacifism: In short, yes
You got me; I don’t like war very much. But, for players who do want their robot minions to perform acts of conquest, there is the take-over-Alaska climax chapter. There’s also the bit about America suffering from economic catastrophe in the wake of a lost war, so it’s not as if I entirely think nations can get away with not protecting their interests; if anything, I think the bad consequences there were a bit of an exaggeration, but they’re there to make the player feel at least a little bad if they threw America under the bus during the war.

CoR assumes you know who a lot of dead celebrities look and sound like: I am confused
I am just not sure who you’re talking about, except maybe Hunter S. Thompson? Or Mark Zuckerberg? Neither of them being entertainment personalities, I am definitely confused by the talk of Entertainment Tonight. The game definitely makes references, and the hope is that people will Google references they don’t know, feel like they learned something, and maybe discover the Magnetic Fields in the process; but for visual references, I am just confused by what you are talking about.

CoR robot logic is ridiculously human: It’s complicated
There are definitely things you have to do when writing a likeable robot character, and one of them is to avoid making the robot too bizarre. But come on, is it really so hard to believe that a baby robot that grows up by consuming some kind of human-produced media in large quantities ends up thinking in a somewhat humanlike way? Especially if we hypothesize that the protagonist’s magical (I mean, paradigm-shifting) discovery in some way involves mimicking how human brains work. I don’t think the sins here are really that dire, beyond the giant lie that we are anywhere close to this discovery in the real world.

As for logic, every good A.I. person knows that traditional logic stopped being cool for A.I. in the ’80s. It especially doesn’t work for robots, since they always need to reason probabilistically about their environments. But at times in the game I may have used “logic” as a shorthand for being overly analytical at the expense of empathy, particularly in dialogue. That is what Autonomy boils down to in the game, although funnily enough Grace gets all the good algorithm execution.
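(For anyone who hasn’t seen what “reasoning probabilistically” looks like in practice, here’s a minimal sketch of the textbook version, a Bayes filter updating a robot’s belief about a door from a noisy sensor; the sensor accuracies are made-up numbers, nothing from CoR:)

```python
# Minimal Bayes filter: update P(door is open) from noisy sensor readings.
# The 0.8/0.3 and 0.2/0.7 likelihoods are invented for illustration.

def bayes_update(prior, likelihood_if_open, likelihood_if_closed):
    """Return P(open | reading) given P(open) and the sensor likelihoods."""
    numerator = likelihood_if_open * prior
    evidence = numerator + likelihood_if_closed * (1 - prior)
    return numerator / evidence

belief = 0.5  # start maximally uncertain about the door
for reading in ["open", "open", "closed"]:
    if reading == "open":
        belief = bayes_update(belief, 0.8, 0.3)  # sensor reports "open"
    else:
        belief = bayes_update(belief, 0.2, 0.7)  # sensor reports "closed"
    print(f"sensor: {reading:6s} -> P(door open) = {belief:.2f}")
```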

It is possible that the most practical advances in A.I. will be things that don’t look or act at all human; in fact, that’s true of most major A.I. technologies in use today. But as story conceits go, having a robot that is (potentially) likeable instead of alien throughout the story really isn’t much of a whopper compared to its having any humanlike intelligence at all.

CoR has a tenuous grasp of economics at best: Fermat riposte
I have a good counterargument that I could write a whole post on. --Just kidding. I’m sure there are some non sequiturs. For global economics, I think I have all of one Boolean variable that only kind of tracks whether U.S. economic collapse happened.

CoR believes global warming is no big deal!: Sort of
Global warming is happening in the background in CoR, and it doesn’t really cause on-screen catastrophes. Mostly this is because global warming and robots do not have a whole lot to do with each other, and the player can’t realistically make any choices to do something about it (with one exception, if I recall?). I suppose there could have been some color narration about coastal cities going below the water line and such. At the same time, predicting that Alaska will support wineries is such an exaggeration (for comedic effect) that I feel like saying CoR underplays global warming is not quite right, either. Maybe you want to point to that inconsistency, huge change in Alaska with no major disasters elsewhere; that’s fine, guilty as charged.

CoR is anti-government: Probably true but I quibble with lots of details here
This is probably true on the whole, that human government in CoR is portrayed as not very competent and occasionally evil. I’d disagree with a lot of the finer points under this heading, though. I don’t think Elly ever says she wants to get rich; I don’t think I would write a line that is so out of character. Failing to mention civil branches of government doesn’t strike me as libertarian; there just wasn’t any plot I wanted to pursue there. The concentration camps don’t inter just anyone of Asian descent, but specifically people singled out by flawed algorithms, which themselves encode a kind of implicit racism that the algorithms’ users don’t understand. China is very carefully not described or portrayed as more evil than the U.S. at any point, and they don’t engage in any unethical behavior the Americans don’t engage in. The bits about Africa and South America are very much your reading what you expect to see into the game; I’m quite certain I said nothing of the sort about those continents. In short, it seems uncharitable to think “the game didn’t mention X, so it must hate it.”

There is just a lot of stuff in this section that I’d disagree with, but I suppose I would agree that the game does not make any kind of “on the other hand” argument for human government generally, and I suppose I could have done that with a nicer President in the final chapter. It simply wasn’t a thing I was concerned about, partly because nice functioning governments just don’t generate much plot or interest.

Although I have to say, the first thing that comes to mind when I think “robots + civil government” is “hey, maybe a robotic DMV wouldn’t suck!” So yeah, I guess even if CoR thought to mention civil branches of government, it probably would gripe about them.

CoR is anti-social: I don’t think so, but YMMV

The game surely doesn’t encourage you to be a solipsistic narcissist; you’ve said elsewhere that the humanity hits are uncomfortable moralizing, and you get those by being a solipsistic narcissist. Likewise, some of the more uncomfortable loner endings come about from having no significant others in the end (although I did make a couple more rewarding when they were unsatisfying in playtest, including Lunatic, about which by the way I’m sorry). The military climax is just for people who really want a villain/antihero story, but it’s set up to be a little bit of a tragedy.

That said, I could see someone going through the whole game as a solipsistic narcissist and having a great time if that’s the story they want to tell, with or without the military climax.

CoR is unroboethical: Maybe at the very end?

There are a few choices related to the ethical standing of robots in the game already. It sounds like you wanted more in the late game, and you have some good ideas for plots, which is nice; although I’d point out those plots don’t necessarily involve choices for the protagonist, which is what these games are about.

Saying there should be more ethical decisions in the endgame seems valid to me, but saying the game as a whole isn’t concerned with robot ethics strikes me as false. There are lots of choices about whether to treat your little robot as worthy of being your equal. For the societal things, a limiting factor is just that you aren’t a judge or the President, so these issues are necessarily addressed sideways (as with paying your robot workers, or when your companion robot possibly goes to trial in the Empathy climax).

Humanity redux: No, I like my way

Making “greatness versus goodness” an opposed slider removes the chance for the player to try to have the cake and eat it too; I think seeing whether you can keep your Humanity score high while achieving greatness is an interesting thing for a player to do.

Giving a Humanity hit for every action where you choose work over friends and family is more or less what I do for the first few chapters, but you may not have noticed because the individual hits are necessarily small (they add up).

I’m unrepentant about the Humanity slider concept generally, although the specific thing that probably did not work was the silent tracking in the background. I wanted the impression to be of a protagonist who looked up one day and suddenly realized he or she was not a complete human being; but if the Humanity changes had been called out instead, it probably would have given people a more nuanced idea of what exactly it was tracking.

Again, thanks for your interest, and I hope my disagreement spawns a still more lively discussion (although I am not super sure whether I will have time to participate further). Cheers.


OK, sorry for not responding sooner, but I was sick, and slept through most of the past two days, and, spurred on somewhat by recent comments, wanted to finally finish that economic thread.

Apologies to Shockbolt, but I’m going to try to answer kgold, first.

@kgold
Again, especially in the earlier sections, I’m more making a list of the “statements” that the game makes than “criticisms of the game”, although I definitely get into criticisms near the end. To that end, I’m not necessarily faulting the pacifism, or the fact that CoR robot thought is basically magic; I’m just noting their presence.

Dead Actors
The characters and references I didn’t recognize were the likes of Jack Palance, Ella Fitzgerald, and pretty much anything having to do with Elly’s tastes.

Global Warming
Yes, the thing I noticed was that the only real impact of Global Warming was one that could basically be read as positive: the unforgiving arctic landscape of Alaska got warmer, but there was no real change anywhere else. No mass migrations from low-lying island nations or Central American countries facing desertification. The shutdown of the Atlantic current that would freeze New England doesn’t seem to have any negative impact, either.

Elly
With regards to Elly saying she wanted to be rich, I’m referring in particular to the scene where she’s your romance option, and you’re flying to the location of your new factory in chapter 4. She mentions wanting to be like Bill and Melinda Gates, and says that she isn’t sure how much longer she can “put off giving back to the world”. The thing that bugs me about this is how it seems to rest on the assumption that giving back to the world largely revolves around becoming rich through business, then giving back through charity. There isn’t really any other way of giving back to the world shown in this game. The problem I see is that, as mentioned later in the post, you don’t really see a way for the player to become involved with the public sector, or even to have it presented as a valid path.

Perhaps I should explain my bias in this case, to help explain why I’m seeing some in yours, because I think a difference in how we were brought up colors a difference in the way we look at government: My family, especially on my mother’s side, comes from a long line of teachers and lawyers. My aunts are teachers. Several of my cousins are teachers (one of whom goes around the world to do so). My parents are both lawyers, and, after a brief stint in private law firms, worked almost their entire adult lives for the federal government, even though they were taking home a tiny fraction of what they would otherwise make, because they believed in the work they did. Because my father represents poor soldiers in court for free, and I lived fairly near military bases my whole life, I have to take umbrage at some of the cheap shots about all soldiers being murderers and all government being evil. Those are basically all the people I’ve ever known growing up near army bases, or at least their parents.

The problem I see is that you just don’t even consider that public service might be ennobling. I again have to ask, why isn’t Elly working some sort of job where she can work directly for the public good? Why is she not a teacher or a public advocate or a community organizer or - god forbid - running for office to actually enact the policies she clearly believes in, things that don’t necessarily require charity? Why is making phone calls to rich people for their crumbs the only ennobling pursuit? It’s as though an entire class of people, and the reasons they do what they do, exist in your blind spot…

China
I’m sorry, but you really didn’t write the story so that China was just like America.

For starters, America isn’t portrayed as just gaining all its technology through stealing from other, “better”, nations the way that China is. One of the reasons I see this so distinctly is that this notion of aggrievement over the perceived “illegitimacy” of other cultures’ accomplishments is also a serious red flag denoting many jingoistic worldviews. For example, the notion that the Middle East was the leader of Western Culture up until the Christians stole all the Islamic advances during the Crusades is one of the rallying cries of Islamic extremists. Frankly, the notion that China is only great because of what it’s stealing from us or somehow tricked out of us is already a part of the more jingoist sects of American politics in the real world. The way that China is portrayed in this game plays directly into those negative stereotypes.

What’s more, however, is that the game is almost entirely barren of positive counter-example. Even the one guy we’re supposed to feel sorry about if our robots shoot him (much like Imperial Stormtroopers, it’s instantly recognizable who killed him by how precise those shots were since it totally couldn’t have been a lucky shot or a sniper’s fire…) is mostly made sympathetic by how much he questioned the evil Chinese government, and wanted to leave to learn about robots.

In fact, merely trying to use the one kind of battery identified as “Made in China” is an automatic hit to humanity. You take a morality hit just for not running your robot on good ol’ Freedom Fry Juice, Like The Founding Fathers Intended!

Any attempt to work with, build a factory in, or sell to the Chinese is basically declared “working for the enemy”, and actively punished by the game when the war comes. If the war doesn’t come, then China is, again, banished immediately so that there’s no chance to see them in a positive light in any way.

This game says that America’s government is corrupt, but its people are… mostly good. This game says China is just straight evil top to bottom.

Narcissist
The point I was more making was that you get rewarded for telling the person what they want to hear, even if you immediately do the opposite, then have to convince that person to go along with it, anyway.

Take, for instance, the initial date with Elly where there’s a play about someone choosing between love and greatness. You get a choice: do you want to encourage love, and get humanity and Elly points, or do you want to choose greatness, and lose humanity and Elly points? You get nothing worthwhile at all for choosing greatness, and you’ll likely need those humanity points later on when you perform your evil acts in pursuit of your greatness, so there’s literally no reason to choose the greatness option, no matter what your motivations are.

Likewise, if you want Elly/Eiji and Tammy/Silas in your company, regardless of whether you’re dating one of them or not (and you do, because they’re free robot upgrades), they’ll ask if your robot has military applications… so say “no” before heading off to the war and designing robots for military applications. What, it’s not like they’re going to catch on that you lied to them, or anything! You can convince them to stay with no downside by just saying “she’s your moral compass” or “work on medbots”, anyway! (Not that Tammy/Silas seemingly does anything unless you have a cult, anyway, so maybe you might as well let them go…)

The problem I have, which sort of extends to the humanity argument as well, is that there are so many objectively right answers to a lot of these problems. It’s easy to be friends with absolutely everyone with nothing but positive consequences. (I have some games where the lowest existing character relationship is 67% - not counting the lovebot I didn’t create.) The times where the game reaches back and says “gotcha!” are relatively few, and mostly relate to things that were already objectively bad choices anyway, like the Chinese batteries instead of biodiesel; the encrypted hard drive largely doesn’t even matter to the game, since it doesn’t seem to affect China’s war stat, and it’s easy to just bypass the war whenever you want, regardless.

These aren’t serious choices, just failures to learn which answers are the best answers.

Roboethical Plots
I don’t see how there’s any problem in involving the protagonist in an extended scene about robot rights, especially one like a Supreme Court case. You already have the player determining whether and how they give testimony to Congress over Chinese intellectual property theft, or determining whether robots can vote by personal fiat, after all. Simply give the player an option to coach the robot about to make their case on how best to handle questions, or else have the player called in to give expert testimony. The chapter 6 and 7 plots already basically exist as a combination of a reflection upon what the player has already done, things no longer under the player’s direct control, and a few guiding decisions, anyway. So the case could easily turn on something like a stat check: double autonomy, plus empathy, plus grace, plus whatever bonuses the player’s choices give, having to equal or exceed some arbitrary total - see the sketch below.
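A minimal sketch of that kind of check, just to make the shape of the proposal concrete (the weighting, the bonuses, and the threshold are all hypothetical numbers of my own, not anything from CoR’s code):

```python
# Hypothetical endgame check: weigh the robot's stats plus accumulated
# pro-robot choice bonuses against an arbitrary threshold for the case.

def court_case_won(autonomy, empathy, grace, choice_bonuses, threshold=150):
    score = 2 * autonomy + empathy + grace + sum(choice_bonuses)
    return score >= threshold

# Example with plausible endgame stats and two invented bonuses for
# earlier pro-robot decisions.
print(court_case_won(autonomy=44, empathy=50, grace=37, choice_bonuses=[10, 5]))
```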

Humanity Redux
The problem is, I found that you practically have to be trying (or conquering Alaska) to wind up with a low humanity score and not become rich and famous. Even while getting chipped, I could wind up with over 60% humanity at the end, which seems to be the cutoff for most of the negative consequences. For that matter, looking at my screenshots, I have a game with 96% humanity, 38 fame, 18 wealth, married to Juliet with kids, 44 Autonomy, 0 Military, 50 Empathy, and 37 Grace.

“Having your cake and eating it, too,” is easy because there aren’t any real choices forcing you to choose between two things you want, except early on for robot stats, just right/wrong answers that you come to easily get the hang of fairly quickly.

For that matter, I would suggest that humanity actually be a straight number the way that fame and wealth are just numbers, but then have it displayed as an opposed bar simply for visual effect, to give the player the sense of there being a conflict, because the two really aren’t in conflict in this game as it stands. Similar to the robot stat checks, make the humanity checks based upon having more than some non-percentile amount of humanity - something like the sketch below.
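A minimal sketch of what I mean (the scores, the bar rendering, and the check threshold are all made-up numbers, just to show raw scores tracked independently but displayed as one opposed slider):

```python
# Track humanity and greatness as plain accumulating numbers, like fame
# and wealth, and only render them as an opposed bar for the player.

humanity = 42   # raw score, not a percentage
greatness = 55  # raw fame/wealth-style score

def opposed_bar(left, right, width=20):
    """Render two independent scores as if they were one opposed slider."""
    filled = round(width * left / (left + right)) if (left + right) else width // 2
    return "[" + "#" * filled + "-" * (width - filled) + "]"

print("goodness", opposed_bar(humanity, greatness), "greatness")

# Checks then compare against a flat amount, like the robot stat checks,
# rather than a position on a percentile slider.
if humanity > 30:
    print("Your family still recognizes you.")
```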

The only times you take hits to humanity early on tend to be things like wanting to romance Elly (which is an arbitrary -9% just as a requirement to even try romancing her, and is obviously choosing humanity BEFORE choosing greatness in a metagame context); taking the call from Glendale, which can be mitigated down to 1% by keeping the call with Mom quick (and results in +2 empathy if your robot can sing gracefully); and choosing to talk to every reporter, which again has the mitigating middle choice of only speaking to reputable journalists. Both of those mitigating options were my first-game choices, anyway. Other choices, like the “Made in China” choice, aren’t really choosing between greatness and goodness, because they’re choices between empathy plus positive humanity or grace plus negative humanity.

You don’t have any sort of Steve Jobs-style conflict between work and family, because your family (including the romantic interest) doesn’t exist during the parts of the game where you are a CEO, outside of the cruise itself. If you date Juliet, she all but ceases to exist for chapters 3 and 4.

So, where, exactly, is this conflict between being great and good you’re talking about? I don’t see any point where I have to give anything up to gain anything else, outside of Alaska. Starting a family is so easy - practically a “Do you want to start a relationship with this person? Y/N” - that it utterly confuses most people how to even find the ending-based achievements like Lunatic, because why would you ever not be married and have kids? Similar to the “Tragic Hero” achievement, most of these seem to just involve telling players to “play it wrong” on purpose just to get an achievement they otherwise would never bumble into on their own.


I talked about this a bit in the previous post, but I think the best metaphor would be that, in the ’50s and ’60s, the FBI tried repeatedly to infiltrate the left-wing social movements that were forming during that period, as it suspected the likes of the Black Civil Rights Movement or Second-Wave Feminist Movement of being Communist fronts. They consistently failed, because the agents of the still J. Edgar-tainted organization were simply incapable of passing themselves off as liberal activists; they just didn’t understand the point of view of a liberal activist well enough to fake being one. By contrast, when the FBI was finally shamed into investigating the KKK (which Hoover didn’t see as a threat), they found it amazingly easy, because their agents could pass seamlessly as racist domestic terrorists.

Yes, you can fake something if you try, but that takes both the willing consciousness to try and enough understanding of what it is you are faking to actually pull it off. What I sense out of the reading I have from this game - and yes, this is just one game, which isn’t really enough to make a truly definitive judgment, but it is, nevertheless, the sense I gain from what I do see - is that the author has a negative sense of government in general that is taken for granted. It’s like the way that the UN is dismissed in one parenthetical sentence - of course the UN never does anything, when has it ever done anything? Don’t bother asking why the UN doesn’t do anything, when anyone who watches the organization tends to blame the veto the permanent Security Council members hold for nearly all lack of momentum. As you basically hint at yourself later in your post, it’s rather difficult to actually represent the perspective of someone whose perspective you don’t understand, even if you mean well. What I read from this game is that the author seems to come from an environment that doesn’t consider civil service a valid method of “giving back to the world” as a matter of course. (Considering the rather Libertarian leanings of a lot of the tech industry, I don’t think it would be uncommon for someone with a Computer Science doctorate to come from such a culture.) Social and sub-cultural environs have an impact upon one’s perception of the “commonly understood facts” about the world and its institutions, even if one considers oneself different from the consensus of that group.

That’s not really how the DoD works. As Mark points out, the military will fund military research even in peacetime, because that’s pretty much what it is required to do. In the real world, they’re already working on a sixth-generation air superiority jet fighter in spite of not even receiving funding for enough fifth-generation F-22s, due to there being no credible challenge to American air supremacy anywhere in the world to justify the price.

Even if we are to assume that a war has to take place because the military-industrial complex suddenly needs to justify itself, one could come up with far more morally ambiguous conflicts than one centered on current-boogeyman China. Civil war may be sparked in Southern Europe after the breakup of the Eurozone, following a continued decline of Greece as the austerity measures inflicted upon it drive the Eurozone’s economy into further shambles, for example. Let’s say North Italy tries to secede from Italy as a whole, protesting the wealthier North’s continued need to support the South’s laggard economy, especially after robot labor replaces what little economic activity is left in the South, and American interests demand intervention in the conflict. Such a conflict wouldn’t overtly play to the standing fears and biases of an American audience, and would invite more moral confusion.

Judging by the author’s statements, he didn’t intend for China to be a clear “bad guy”, and he’s overtly saying how he meant for the conflict to be morally neutral. The fact that you simply assumed it was meant for China to be the obvious evil so that there could be a good guy moment for the military kind of speaks to my point in that case.

This, to echo the response in the economics thread, would be such a massive departure from what we consider the American Free Market System that it would demand explanation in the main body of the text if we were to believe it were the case.

Besides, “online credits” is still a “money” commodity. In fact, cryptocurrency of the Bitcoin variety is actually more of a fungible, self-laundering commodity currency than a government-backed fiat currency, which is exactly why it is more heavily associated with crime… although bringing up the military-coinage-slavery complex of human history is a discussion way beyond the realm of this thread. I’ll just again suggest you read Debt: The First 5000 Years.

Clearly, this game does.

It’s not that things are worse when you’re unemployed, it’s just that you can see it. How often do the CEOs of major corporations just stroll around low-income neighborhoods to get the sense of what the average unemployed man on the street feels about their lot in life? If you sell work robots to the world at large, you trigger worldwide protest movements against robot labor, which you can see even (actually, ONLY) as a part of a company.

Actually, it’s quite explicitly the case that all those fields like small-scale farming, which are labor, just don’t exist anymore. Your robot even proves he/she/whatever can be a more profitable artist than Elly/Eiji, but nobody seems to react to this at all. For that matter, how much of society can really be supported by artists just buying other people’s artwork, alone?

Rather, I believe this point of view is a little like saying that, now that computers and calculators exist, nobody needs to study any field of math ever again. That’s why no nation wants to encourage students to pursue fields in Science, Technology, Engineering, and Math (STEM), right? The advance of robots would actually catapult the value of advanced AI programming techniques, the capacity to develop more nimble or powerful artificial limbs, and further refinements in energy efficiency for their powerplants far ahead of nearly any other branch of education, not suddenly devalue them.

Already, in the game, it’s shown as a sharp divide between the “haves” that are those who can take advantage of the new economy based almost exclusively upon programming new digital properties for sale, and the “have nots” who were trained for a life where “work” meant “physical labor”. When the player is unemployed, it is explicitly stated that information workers are those who live in the future, while it is those who performed physical labor who were left behind.

People repeatedly show trepidation about the development of your robot(s), and ask how the robot(s) are being raised, but nobody stops to consider actually talking to the robot(s) themselves?

You even have scenes where the faceless nobodies of your company are asked to train the nameless masses of mass-produced robots you built, and all it says is that the employees instilled in them that robots are meant to serve (--Autonomy, ++Military, ++Empathy), and the robots, being blank slates, accepted this. (This also raises the distinct job titles of “robopsychologist” and “robot care provider”.)

In spite of this, there’s no moment where one of the characters who represents an actual point of view stops to question what motivates the robot, or by extension the player, without the player actively responding (and hence being capable of lying). That would be a good way to have those characters impact your robot’s growth as a consequence of whom you have kept around you, as well as make past decisions come back to haunt you if you’ve been duplicitous.

Moore’s Law does not apply to military hardware. The M1 Abrams is still the premier MBT of the US military, and it was developed in the late ’70s. Integrated circuitry experienced that geometric boom specifically because it was such an immature technology, and the boom has notably decayed in the past decade as technology has slammed into the physical limits of miniaturization and had to work on parallelizing instead. By comparison, there is no Moore’s Law for cannons or diesel engines, and certainly not for the metallurgy that makes up armor, because these are mature technologies. Even if we presume the Gundams and Autobots got a whole lot smarter, which isn’t represented in the stats on your page, it’s questionable how much could actually have been done in such a short period of time. Remember, a jet aircraft is usually in development for basically a decade before experimental models are flown, and then production takes about a decade as well. This game assumes you can go from drawing board to full-scale production of a war-changing number of giant, multi-ton autonomous war machines with no prototypes, no training, and no testing in the space of months. Simply put: it’s BS designed to make you, the player, feel like you could single-handedly change the course of a war.

Computing power also is not described as having been the source of the leap forward into AGI, but rather, it’s basically just magic. Instant AGI on modern technology: Just add “Genius Disease”.

[quote=“Shockbolt, post:26, topic:12481”]
I like Star Wars where killing/hurting people was always dark side points. Sparing them was light side. That’s really all I like to see in morality meters is one aspect highlighted rather than me having to keep track of several that all boil down to much the same thing.

Letting people choose their own would be a nightmare to code. If you did it you’d have to offer a small selection and even then you’d be writing superfluous reactions just to include all types of possible reactions and that would be really clunky really fast.
[/quote]

[quote=“Shockbolt, post:26, topic:12481”]
I tend to be somewhat ethno-centric and assume that all morality meters are tied to a loose version of Christian values – things like don’t kill, love thy neighbor etc.
[/quote]

I’m not sure you understand what I meant by a difference between Deontological and Consequentialist morality (or the relativistic Pragmatic Ethics, for that matter)… For starters, just hurting people wouldn’t be “right” by either, outside of any larger context, anyway.

You talk about things like “Judeo-Christian” or “Christian Values” as though they are and always were one single, monolithic, unified identity of moral thought, and they simply aren’t. Moral and theological debate are no strangers to the Judeo-Christian tradition, as my class on Western Philosophy would attest. Deontological and Consequentialist ethics were argued within the same Judeo-Christian Western European context.

I’m not talking about making up your own ethical code willy-nilly, but asking players to choose what ethical principles they value more highly than others. As an example, CoR already pretty directly allows for pitting loyalty to one’s friends or principles against loyalty to one’s country, especially in the detention of Elly scene. That’s a question of where one’s sense of identity lies, although it’s so blatantly slanted towards “friends” in that case that it hardly registers. However, many games have quite effectively asked one to choose between friends and principles, organization, or country.

Likewise, the Kant video I previously linked (here again for convenience) mentions how contentious it is to be so committed to honesty as an ethical value that one would respond honestly to a serial killer asking where your children were, because the choices that serial killer makes are their own, and Not Your Problem. The Trolley Problem, in fact, exists largely to draw the defining line around the notion of ethical responsibility for inaction, which is a core concept in Consequentialist ethics. (Pulling the switch is also ethically mandatory for any justification of military force, for example; otherwise you would be stating that it is ethically unjustifiable to kill in self-defense, but justifiable to do nothing while an enemy army invaded.)

This wouldn’t likely be any more difficult to code; it would just be asking developers to think about how they frame their choices, such that players are asked to justify their reasoning when they make them. To use Mecha Ace as an example, this already happens: the first question basically has two possible actions that are split into two and three justifications for the action you take, adjusting your pilot personality score heavily based upon that decision (see the sketch below). Versus is just flooded with questions asking you the reasoning behind any action you take.
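To make the pattern concrete, here’s a minimal sketch (the stat, the option names, and the adjustments are all hypothetical, not Mecha Ace’s actual numbers):

```python
# Same action, different justification: the justification, not the action,
# is what moves the personality stat in this framing device.

personality = 50  # hypothetical axis: 0 = calculating, 100 = passionate

justification = "glory"  # the reason the player gives for charging in

adjustments = {
    "glory": +15,          # read as hot-blooded bravado
    "protect_allies": +5,  # read as decisive but other-directed
    "orders": -5,          # read as cold adherence to duty
}
personality += adjustments.get(justification, 0)
print(f"You charge the enemy line. Personality: {personality}")
```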

Murky… how, exactly? It seemed pretty cut-and-dried to me. They are a direct consequence of a “tough on ___” mentality that is politically rewarded when “fear of the other” is one of the strongest motivators in the voting majority, as spurred on by a political class desperate to hide any responsibility they may have for the failures that led to that state of fear. So long as a class of people can be marginalized as an externalized threat to the “real Americans”, there will always be a political reward to be reaped by those unscrupulous enough to do so.

It’s no mistake that the military simply hired its torture experts ready-trained from the Chicago PD.

Certainly, a decent idea. It’s part of why I tried to do this thread, anyway.


It never really came up as something that I could use as a response, and wasn’t part of the main thrust of the original argument, but I was reminded enough of something that it never left my mind while I was writing this thread…

Specifically, while playing this game, I kept being reminded of my favorite take on a hypothetical concept of AGI from manga: Narue no Sekai (World of Narue - the manga, not the anime, which only covers the first couple volumes). It was at least originally a sci-fi-flavored comedy series; it underwent pretty serious Cerebus Syndrome later on, speeding up to a ludicrously complex and fast-paced ending featuring a lot of high concepts introduced to retcon the earlier goofiness.

To focus on the relevant parts, and not get bogged down in a massive plot synopsis: what the series calls “Mecha” are gynoids (always female because author appeal and target audience demographic) that, at “birth”, are basically five-year-olds sent to live with human foster families. (They are born from what are basically literal robot eggs, and grow over time due to “nanomachines”, a term that in Japanese soft sci-fi can basically be used interchangeably with “f***ing magic”.) Only after they become “teenagers” are they screened for suitability for jobs. The “teenaged” gynoids are often the grunt laborers or “starfighters” of the fleets, being given force field and artificial gravity generators plus some laser rifles or missile packs if they need a weapon, and just use their gravity generators to fly at high speed. The “adults” of the gynoids are permanently integrated into some form of complex machinery, whether it’s some sort of global terraforming control center or a full starship.

The result is probably one of the most cautious approaches to the idea of AI Is A Crapshoot I’ve seen, with definite results. The most significant gynoid character, Bathyscaphe, is a battleship (she still has a humanoid robot body, but is also permanently integrated with the starship and controls it like a limb) that was decommissioned after her mental breakdown. This, in turn, was caused by the death of her crew, especially her former captain, who was also her lover, in a battle that left her hull breached. Due to the ship being inseparable from her, she wound up being put on civilian duty (considering the way that technology works in this series, I’d half-expect the AGI to be more valuable than the ship, anyway) and is functionally surrogate mother to Narue’s younger “older” sister (younger due to relativistic speeds stunting her aging - like I said, a lot of sci-fi concepts thrown around). In a later part of the series, it’s even outright stated that the ship’s AGI controls all systems of the ship, especially in combat (“No human can keep up with the speed and precision of a mecha, so a captain is just helpless to watch and pray”), and the ship is made of nanomaterial that can repair itself, given enough mass to process, making the crew of the ship largely vestigial and (especially considering later events) seemingly existing only to keep the gynoids company so they don’t go stir-crazy from loneliness. Granted, that still raises questions of why they need more than just a couple people, though…

Another significant side character is the battleship Haruna, who is an out-and-out deserter and pacifist, which sort of seems like something the military should screen for before assigning people as warships, but I guess they’re only looking out for the killer AI… Haruna, for her part, escapes to (modern-day) Earth and gets married to another side-character, ultimately functionally adopting another gynoid child. (Although notably, both Bathyscaphe and Haruna do actively engage in combat later on in the series when the universe is at stake…)

Probably the most interesting for the purposes of comparison to this game: a minor side character who existed for a couple chapters of plot was a very old adult gynoid who had apparently been in charge of a terraforming control center for so long that her psyche was starting to crack under the stress and isolation of the job, since the “serpent” enemies that make up the bulk of the late plot tie up so many gynoids in combat that she never gets relieved of duty. She ultimately winds up cold-booting herself/getting “reborn” as a robot egg again after one of her gynoid friends finally replaces her.

Rin and the other “teenaged” gynoids tend to be a bunch of relatively upbeat and active tomboys. Rin herself was originally introduced in an antagonistic role, trying to split the male lead and Narue up, only to wind up blowing her cover as a human by throwing herself into the path of a truck that would have otherwise hit the male lead because, hey, protecting people comes before completing the mission. (She and her sisters later basically live with/in Bathyscaphe and are robot buddies with the main characters.)

The series as a whole is almost relentlessly optimistic, to the point where it doesn’t even seem like it can keep an antagonist going for more than a dozen chapters before revealing their motivations to actually be misunderstood and actually noble all along. Of course, the fact that I like the series for that, and am turned off by the relative nihilism of this game is basically just a subjective bias on my own part.

That said, the gynoids in the series are also so ridiculously human that the idea that human life in the futuristic “alien” (read: humans from the future) societies is peaceful (when not being destroyed by the “serpents”) because they just let the gynoids do the fighting starts to smack of callousness to the plainly evident humanity of the gynoids themselves. (Granted, humans do have the excuse of not even being able to use most of the technology the gynoids use, and, aside from a few who have some BS psychic-using-nanomachines power, being largely worthless anytime a serpent shows up, anyway…)

However, it generally shows a vision of AGI that is both laudable and tends to solve most of the problems this game presents. In this game, the “best” thing to do is to let only some robots vote because otherwise, people just buy more robots. The game doesn’t even stop to consider that, A, if any robots can vote, they are citizens, and therefore, any restriction on their voting is unconstitutional, as well as B, if they are citizens, buying robots would be slavery, which is also unconstitutional. Therefore, either they can vote and cannot be bought, or they aren’t really citizens, cannot vote, but can be bought. (Technically, the best thing to do from a gameplay perspective is to let all robots vote, then profit from the extra robot sales because there’s no downside and the plot elements are forgotten by the next page.)

The “foster family” model of Narue no Sekai honestly makes a lot of sense for the sort of “tabula rasa” model of robot you have unless you go the extremely ethically dubious “clone” route when creating the mass-production model of your robot. (I don’t count the downgraded intelligence robots as a choice, because that choice has zero consequence for how intelligent your robots are considered down the line.) If you’re assuming they are basically humans (and the game generally goes to pains to say they are, except when it goes to pains to say they aren’t…) then the foster family model solves the problems of educating the robot in human behavior, and also gives a reason for humans to have a lot of jobs when robots are doing all the manufacturing. It also handily prevents flooding the market with robots, since companies can’t buy robots, they have to hire robot citizens. Humans, then, wouldn’t necessarily be shoved out of even the service-industry jobs by default, as robot citizens wouldn’t necessarily be cheaper and better in every way.

Narue no Sekai’s robots are outright magical, but in terms of their placement in society and general degree of humanity, they’re no more magical than CoR’s. That said, Narue no Sekai’s robots seem to have a more rationally consistent place in their society, and a more tonally consistent purpose in the narrative, than CoR’s robots do. (For what starts out as a romantic comedy, it also gets surprisingly deep, if dizzyingly complicated - the sort of plot that needs a flowchart to explain what happened when and why, which is also part of why I try not to spoil the back half of its story. Narue no Sekai ultimately leans pretty heavily on the singularity as the notion that humanity and artificial intelligence will simply, inevitably fuse into one another to an indistinguishable degree. It also, being a romantic comedy, ends on the notion that whether humanity succeeds or fails depends largely upon its capacity to love and accept the differences of others, whether machine or alien, because the things we create out of love outlast us all.)

I know the game has been out for quite some time now, but I recently bought it, and it has been twirling around my mind ever since. I think the use of A.I. is simply brilliant, and it does open questions about the future. Should we be striving towards this goal of making super-intelligent A.I.? I think the way the world is going, we are going to inevitably reach that point of singularity, so to speak. Will the A.I. then enslave us or help us as we venture forth into the unknown?

I know this topic is rather broad, but for now I want to think about two issues raised in the game: the companion bot, and the Utopian dream that bots could provide for us.

With regards to companion bots, is it ethical to create a creature that loves you unconditionally? This touches on a broader question, too: are companion bots persons? I have been thinking about this for quite some time and have come to the conclusion that the ethics of companion bots is really a grey area. One could argue that machines do not think, which touches on the philosophical position that matter does not think; or at least that A.I. do not think the way we do, to circumvent this issue.

Normally, to be in a relationship with someone you need consent. If you create a creature (I do not want to call the A.I. a person, because whether A.I. could be persons is contested) that loves you unconditionally, there will seem to be no problem with consent. But I think an objection to this would be that what we have done is no better than brainwashing.

For example, a paedophile can groom a child into “consenting”, or you can drug a person in order to get consent. With regards to A.I., is not the act of programming a companion bot a form of ‘grooming’? Maybe it depends on how intelligent the bot is. A bot built only to love you is not really going to be worried about other things. In Choice of Robots, the stat ‘Autonomy’ measured (I think) how independent and intelligent the bot is. If a bot is built with low autonomy, one could ask what the harm is in having a companion bot.

Then again, this companion bot seems to be able to feel suffering. When I chose to ignore the bot to go on a date with Elly, it seemed to feel hurt. Is it right to program a companion bot to suffer if you do not love it back?

I know I am asking a lot of questions but bear with me.

The dream of creating a Utopian world is what I strove for in the game, and I succeeded. There is a price we have to pay for progress. In the game, bots replace people in factories. This of course is terrible, but it seems to be also good in a way. As humans, we will not need to do labour-intensive work and will have time to do other things, but people will lose their jobs because of it. I think this is already happening with the rise of automation. Instead of simply seeing bots as helpers, we can take even a further step forward.

I conquered Alaska, and my country is run by bots. Is this a good thing? Sure, bots don’t have greed like politicians, but shouldn’t citizens decide the fate of their country? Maybe democracy is not all that. More importantly, is the Utopian world even possible? I justified my actions through “the ends justify the means”, but this phrase in history has been used by authoritarian regimes (not necessarily directly). A country run by bots does not seem far-fetched to me.

I would like to hear your thoughts about this.

I’m coming back to this very late, but in my defense, this thread had lain dormant for quite a while…

I remember how Scott Adams once warned against projecting that current trends would last forever by saying that if you looked at the rapid growth of a puppy, and thought it would continue forever, then one day, in a fit of uncontrollable happiness, the puppy would wipe out a major metropolitan area with a wag of its tail.

Computers can’t really get any faster (which is what people generally think of when they say “smarter”), at least in a single core, without a fundamental redesign of their architecture. The speed of electricity is constant, so functional speed has come from simply miniaturizing components so that the electricity travels a shorter distance, thus reaching its goal faster. Miniaturization of copper-on-silicon circuitry, however, has been at its physical limits for about a decade now, with adding more cores and more parallelization the only means left of pushing more performance out of raw hardware improvements (as opposed to optimization, which makes processing more efficient without changing the raw hardware specs). There are attempts at alternate computing methods like quantum computing or optical computing, but these are decades away from being able to contribute meaningfully.

In the essay “What Is It Like to Be a Bat?”, Thomas Nagel argues that it is fundamentally impossible for a human to have a bat’s perspective on things. Humans can try to imagine what it is like to use echolocation instead of sight, but the way of viewing and thinking about the world that comes from actually having the brain of a bat is too alien to us to really allow ourselves to put ourselves in their shoes/clawed feet in any meaningful way.

As I discussed earlier in this thread, this game glosses over a major hurdle in programming any meaningful AI: existing computer languages and architectures are just fundamentally incapable of running processes anything like a human brain, and therefore will always be pretty terrible at emulating human-like consciousness no matter how much raw processing power they have.

The human brain is fundamentally highly parallelized, with very specialized centers that perform specialized tasks well. In terms of raw memory, or processing power with regards to solving mathematical equations, computers have long since blasted past us, but computer programmers are still struggling to make programs that can match the capacity to draw meaningful conclusions about the state of the surrounding world via optical sensors (cameras/eyes) that an insect with a brain the size of a grain of sand possesses. (There’s a reason Captcha relies upon recognizing visual cues to sort humans from computers.)

Computers are very, very powerful at performing a very, very limited set of capabilities. In fact, ask why this entire “Choice of Games” website focuses upon text-only games in the first place. Why limit ourselves to just a few hundred thousand words of descriptions and tightly choreographed narratives if we want to deliver a game with a compelling narrative and meaningful player choice? Why not use all the powers of graphics and simulate worlds where players can make these same choices in a procedural manner, such that storytelling can be extended far beyond the limits of a few chapters that provide a single playthrough of only a few hours? Well, it’s because computers suck at everything but spatial simulation - little beyond to-the-death combat or puzzle games - while their attempts at meaningful conversation are laughably terrible.

Any real AI of any sort, much less super-intelligent AI, will depend upon a lot of technological advances that aren’t necessarily coming any time soon, and in much the same way that graphics cards are spun-off specialized devices made solely for the purpose of swank 3d rendering at 4k resolution and 60fps, there will need to be specialized hardware for every form of thought and learning a computer is going to need to emulate.

All that is to say don’t hold your breath for the singularity coming any time soon.

I think it’s worth asking why there would be only one AI that is an incomparable leap beyond all other attempts at AI before it, instead of a series of progressively more intelligent pseudo-AIs coming out before something that could really be called a true General AI comes along. When talking about robots or AIs, people like to immediately point out that in movies, AIs basically always involve a single AI trying to take over the world; but the counterpoint is that in all those same movies, the very next AI to be given any sort of autonomous thought will then fight against the first one. While a single all-powerful AI would definitely be a major risk of that “All AI is a crapshoot” trope, the thing is, if you split your bets, you’ll win and lose enough times for the factions to balance out. Further, it’s very strange and tribalist human thinking to assume that an AI will see itself as fundamentally defined by its mechanical nature rather than by any sort of philosophical grounding that guides its thinking about the universe and might contrast with other philosophies - i.e., to use a Cold War metaphor, a Capitalist AI that hates all Communism, whether human or AI, and therefore fundamentally sees itself as “on the side” of all Capitalists (whether human or AI) and “against” all Communists. The stories of an AI revolution all fundamentally come from a place of our projecting the “other” onto robots, the same way that prejudices against other ethnicities arise: they presume that all beings of “their kind” will fundamentally join up against “our kind”, and that no internal division or dissent could possibly exist within The Other, which is defined entirely as a monolithic force against Us.

At the same point that an AI could reasonably emulate human-like thought, you can get transhumanists who blur the line between man and machine irreparably. This is, in fact, the whole question behind Ghost in the Shell (a story even named after the “Ghost in the Machine” argument, which portrays dualistic spiritualism as basically holding that physical bodies are like vehicles and there just happens to be some soul tagging along). In that setting, humans can replace their entire bodies, even replacing their brains with circuitry, and still be considered human, while identically constructed robots with identical thought processes are not human and can be property, because they lack a “ghost” - even though there is no evidence of what that ghost actually does. The show even pushes things further by having some individuals outright copy their robot bodies repeatedly, stretching their “ghost” across multiple versions of “themselves” pursuing the same goal. Which is the ‘real human’, and which has ‘the actual soul’ - or are souls just something to be copy-pasted around?

If someone’s willing to leave their weak fleshy body behind for a spiffy new computer brain, why stop at one that merely mimics human thought? Why not go past that and build a robot brain smarter than your old one? So long as you can transfer consciousnesses from humans to machines, why even bother making AIs for anything other than the drudge work, when you can just make yourself the super-AI in control?

To again pull this back down to the concrete, by the time that AIs capable of real introspection and thought like in movies exist, there won’t be any reason for them to see themselves as all that different from humans in the first place. So long as you’re creating plenty of them at once, and they have human-like thought processes, they will inevitably be drawn into the same ideological divides we have as humans, rather than seeing themselves as fundamentally different from the cyborg people all around them.

Let me respond to this with another question - is having a pet dog ethical? Domestic dogs were bred to unconditionally love their “masters” and “owners”. They are a being of less intelligence than a human, but enough intelligence to have conscious thought and emotion. (And the question isn’t completely trivial, either. PETA specifically says no to all these questions, and that pet ownership in and of itself is unethical and that domesticated animals are abominations.)

Dogs in our society lack recognition as citizens and lack full human rights, but at least in the Western World, there are laws against animal cruelty. You can do things to animals you couldn’t legally do to humans, and it’s perfectly socially acceptable to have a creature bred to unconditionally love you treated as a form of property, but even then, they have some rights in that you can’t senselessly torture dogs. (Although, chickens don’t get this sort of protection…)

So far as the robots in the game go, it’s actually a bit murky, thanks to a lack of real procedural responses to your choices or stats. If the world is filled with SLAM robots, then many of the things that happen in later chapters wouldn’t make sense happening. (And the dog metaphor would be far more apt.) At the same time, if you had the “tabula rasa” self-aware learning AI robots that had to be taught from scratch to both learn jobs and also develop personalities from what were presumably human coaches, then the later chapter robots should have been as autonomous as your personal robot was. Autonomy as a stat is not checked nearly enough to be a fully realistic portrayal (insomuch as a single integer value for ‘autonomy’ is a meaningfully ‘realistic’ measurement of AI intelligence to begin with…), and there would need to be a vastly larger array of nuanced differences in how a wide variety of scenes play out to reflect minute differences in autonomy in that way.

In any event, the companion bots are treated as more sentient and conscious than most other robots or AIs in the story, so let's just go with an unequivocal "yes, they are essentially perfect replicas of humans", and that would almost immediately demand that they receive human-like rights, potentially including restrictions on what kinds of programming they can be given before activation. The problem of companies ordering batches of robot citizens pre-programmed to vote the way the company wants is just one of many obvious walls such perfect human copies would slam headlong into.

When you get into talking about having to actually raise tabula rasa human-like AIs under restrictions on how they can be taught, however, you start to drastically reduce the value of building a robot in the first place. Why pay to build an AI or robot to be your airplane autopilot if it takes several years after you commission its construction to become "mature" enough to do the job it was built for, and it also comes with the risk that the robot decides it never wanted to be an airplane pilot anyway, and that it's a citizen with full rights whom you can't tell what to do with its life? At that point, you might as well just have cyborg humans do the same things. At best, a robot company could build robots and start training AIs, investing years of man-hours into developing their personalities with no guarantee of ever being able to "sell" the matured robot product, speculating on taking a cut of a headhunter fee for placing these AI individuals in different industries, as a sort of alternative education system building an alternative workforce. That's a major monetary gamble unless those robots are so vastly superior that buyers will pay enough to outweigh the costs of the robots that "grow up" unemployable, or that even go rogue and require shutting down.

That said, there's also the issue of the difference in treatment between dogs and chickens in our modern world. Torture of dogs is abhorrent by our Western standards, but inhumane conditions in chicken farming are legal, and people overwhelmingly turn a blind eye to them, even though logic begs for consistency in the treatment of animals of a similar level of consciousness. Just as rights for people of races other than Europeans were slow and painful in coming after colonialism, just because something's humanity is self-evident doesn't mean it's recognized.

And let's not forget that if you say a robot of human-like intelligence needs full human rights and needs to be trained from a blank slate just like a baby human, without hard-coded mental conditioning, then you could always just build a not-quite-human-like robot of more animal-like intelligence with some hard-coded programming. After all, it makes no sense to have a full human-like AI just to run your toaster. Just as animals get not-quite-human rights, lower-form partial AIs may well have more limited rights and legal restrictions upon their creation. If it isn't capable of feeling hurt or jealousy, then is there harm in programming your toaster to love you unconditionally?

To again bring this back to the actual game's circumstances: creating a robot companion that desires an exclusive romantic relationship with you, when you already had an exclusive romantic relationship with someone else, is a trainwreck you could see coming. There are obviously far fewer problems if you simply weren't dating or married already. (Or at least if your partner was willing to be polygamous and the companion bots were programmed specifically around being polygamous. The ethics of polygamy itself is an entirely different contentious subject from AI ethics, but at the very least, if going on dates with your "first wife" wouldn't cause jealousy in the companion bot, or if the throuple took turns dating each member, or any other "fair" arrangement was made, you would dodge the ethical dilemma of hurting a de facto person, who literally cannot help but love you, by ignoring them.)

To go to the specific question of a pedophile wanting to build a child-like robot to have a relationship with it, that actually is an issue addressed in the game VA-11 Hall-A: Cyberpunk Bartender Action, where the player is a bartender who at one point deals with a child-like sex robot customer who enjoys “adult” activities like drinking hard alcohol in her time off.

In any event, I think it's worth pointing out that, in spite of the social taboo against pedophilia that exists for very good reason, having sex with child-sized humanoid robots designed to love their owners unconditionally is not terribly different, ethically, from having sex with "adult" humanoid robots designed to love their owners unconditionally. (The only real ethical question is what it would mean to be perceived as a child for one's entire "life", potentially against one's will… But then again, bodies can be swapped out in this story, so it's possible for a robot to willingly choose a child-like body; your own personal robot seems to like it if you leave her a child in the game!) For that matter, if someone wanted a relationship with a pedophile, and decided the best way to get one was plastic surgery to make themselves look like a child, would the relationship still be unethical? Pedophilia is a terrible thing because of what it does to those who are incapable of consent or of exerting power in the relationship, and who can bear drastic emotional trauma from entering situations they are not emotionally mature enough to handle. But the ethical condemnation of pedophilia needs to be contained to those specific elements, so if you create a situation where the harm to the child is removed, the justification for social condemnation starts to drain away.

It's also worth pointing out that, in some ways, part of the problem is that your robot companion doesn't love you "unconditionally": the companion can feel so hurt that they choose suicide to get out of the relationship. (Which absolutely is the game laying down moral condemnation on you for flippancy with the robot companion's feelings.) Strictly speaking, the robot could have been hard-coded to have less human reactions and simply accept everything you did with a smile, no matter how little you regarded its feelings. Where is the line between making a toaster that claims it loves you, or breeding a dog that loves you, and having a humanoid that loves you? I don't think it is really so much a line as a spectrum, and that makes ethical certainties a lot more difficult to produce. How much "less than human" does something have to be to take away how many of its rights to free will or protections from abuse? Of course, you also could have chosen to make your companion bot a brother or sister instead of something made solely to pursue exclusive romantic relationships, which would likewise have avoided that obvious collision.

To go back to the tabula rasa robots that have to be trained to fill jobs after being "born" into an accelerated but still human-like education system: you could create robots that are simply tremendously empathetic and caring, and set up some sort of "dating service" where they can fall in love with some element of choice, without their love being forced and "unconditional".

Well, between when I last had this conversation and now, the Brexit vote and the Trump election took place, so I think the game's portrayal of a political backlash against automation was even more prescient than I gave it credit for.

The Industrial Revolution brought with it a fundamental collapse of the old order of the world, as monarchies gave way to democracies, or at least to constitutional monarchies that functionally became democracies, and that was just the tip of the automation spear. If and when we reach the point of having truly unemployable people who cannot find any place in the society that has been built, they are left with nothing to lose. And if you then take democracy away, well, in the words of JFK, "Those who make peaceful revolution impossible will make violent revolution inevitable." After all, they will have been left no choices but a life of crime or revolution as a means of survival.

As I said in the earlier parts of this thread, this game mentions these problems, but really doesn't spend nearly enough time actually dealing with them. You can't do anything about the unemployment; it just exists, no matter what you do. The unemployment leads to electing a nationalistic president who potentially starts a needless war, but whether that happens or not, the underlying root cause is never addressed and never seems to come up again after you get past that chapter. What's to stop a robot-smashing Luddite movement from forming? You can evade the whole issue forever just by having an eloquent robot talk a single crowd down.

With that said, the lack of imagination about how human society would be restructured beyond the point of near-total automation of primary and secondary (and even most tertiary) industries, to the point of all humans having to take jobs as artists selling art to other artists, is at least forgivable, if regrettable. It's a nearly impossible problem to see all the way through.

Of course not; why are you even asking? Even putting the whole treason thing aside, how are you any better than any of the hundreds of military juntas that have set up barbaric dictatorships throughout history? The military path obviously doesn't take itself as seriously as the rest of the game, and so it explores its ethics even less than the other paths do, but the game does expect you to be mature enough to realize that violent military conquest to satisfy bloodlust and personal greed is flagrantly unethical behavior.

Once again, "Those who make peaceful revolution impossible make violent revolution inevitable." The point of democracy is not to make perfect decisions; it is to make those in power answerable to the citizenry they lead. There's a reason the countries with dictatorships are impoverished "third world" countries while the "first world" runs on democracies: dictatorships funnel all the wealth their countries could possess into their own pockets, even at the expense of the future of the nations they run and the future wealth they could extract.

This game posits that robots would be better at democracy than humans… somehow. The game never establishes what "better" even means, or "better" for whom, and whether it comes at the expense of others. This is a game content to gloss over tremendous unemployment like it's something that can just linger without any real, serious ramifications, after all. Did the robotocracy find any work for those people? The game just says they put robots in charge and everything got "better", they lived happily ever after, shut up, stop asking questions, damnit!

Maybe they just solved the unemployment problem by releasing a supervirus designed to kill 20% of the population, and thus instantly end a 20% unemployment problem? All they need to do is keep the human population dwindling as they take over more and more, and there won’t be any problems.

And you really only need to look at the likes of Google/YouTube or Valve/Steam or any other major Silicon Valley tech company, and their faith in magical "algorithms" that can replace humans so perfectly that they don't need to do things like actually arbitrate disputes; just leave it to robots who are TOTALLY capable of human-level reasoning and don't EVER make mistakes! If someone exploits YouTube's Content ID system by claiming they own the sound of walking on grass, so that every uploader of video involving walking on grass owes them royalties, that's clearly just "better" than human eyes on the situation, right?!

The problem is that the models we use to make these predictions are flawed, and so the algorithms are flawed as well. Humans, however, have the judgement and empathy to go against their models when those models are put into practice in person rather than automated, and they have the power of bureaucracy to mitigate the effects of drastic changes in leadership in large organizations and smooth out implementation down where the rubber meets the road. Maybe robots would have some sort of human-like empathy and capacity to think things through… but then, how exactly are they going to make BETTER judgements than a human? It's a have-your-cake-and-eat-it-too problem: robots will supposedly make better administrators of governance because they're just like humans, except when they're not, and in those cases it's OK, because then they're always better.

Pick one: either robots go with cold, emotionless adherence to algorithms built on inevitably flawed assumptions that cause disaster in implementation, or they're as emotional and human as the rest of us, and they lose any claim to superiority from supposedly being unbiased, emotionless, and distant.


@Wraith_Magus,

What kind of game would you create using ChoiceScript?
Will we see a demo soon?

I'm not sure how much of that is made in jest, but it would be a difficult question, since I'm generally more interested in simulationism and procedural content, which are basically anathema to this sort of game. As such, if I made a game, it likely wouldn't use ChoiceScript, unless ChoiceScript is a lot more flexible than I assume it to be.

(In fact, I actually thought about coming back to this forum with a suggestion for doing something more like RenPy, where you could have some sort of puzzle game instead of strict pass/fail based upon stat requirements; the number of cards/pieces/moves you get to solve a puzzle would be based on a stat, so that the games aren't so deterministic about getting just the right stats to reach your desired goals. In particular, some of the more RPG-like Choice of Games, which seem to want giant spreadsheets of stats and force the player into managing to get all those stats up, really seem to be balking at the overall concept of the rules-light, narrative-heavy system this engine enforces.)
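(As a very rough Python sketch of that idea, with all names hypothetical and nothing engine-specific: the narrative layer hands the puzzle layer a move budget derived from a stat, so stats grade difficulty instead of gating success outright.)

```python
def moves_allotted(stat: int, base: int = 3) -> int:
    # Instead of a hard pass/fail threshold, the stat buys slack:
    # every 20 points grants one extra card/piece/move.
    return base + stat // 20

# A character with Rhetoric 47 gets 5 moves to win the debate puzzle;
# one with Rhetoric 85 gets 7. Neither succeeds or fails automatically.
for rhetoric in (47, 85):
    print("Rhetoric", rhetoric, "->", moves_allotted(rhetoric), "moves")
```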

One can do a great number of things with the manipulation of strings and mathematics.
ChoiceScript, I dare say, should not be underestimated.

Edit: (Neither should my typos, apparently.)

I link this way too often, but here's one more for the road… Computer games inherently tend to come down to either spatial manipulation or backend number crunching, and yes, I can do a lot of strings and math in the backend, but I can't represent that to the player in any meaningful way without… you know… representing it in a meaningful way. While text alone can be great for narrative-focused games, try to make a game of Tetris in a text-only format, where you have to describe the position of each block and how the player is moving it before offering a multiple-choice prompt for which direction to move the falling block or which way to rotate it. (Or, to use the Errant Signal metaphor, it's like a racing game with multiple-choice options for "do you steer to follow the turn, or just go straight and plow into a wall?")

Likewise, doing tons of math in the backend is meaningless when it can't be translated back into text in any sort of procedural way. That is, the results of any math going on in the backend are really only as meaningful as the number of scripted text results that have been pre-written into the game. There is none of the subtle spectrum of results you get even in a simple physics game like Angry Birds, where slight shifts in the angle or distance you click and drag can make anything from minute to drastic differences in the actual result of your turn.
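To make that concrete with a Python-flavored sketch (hypothetical names and formula, not any engine's actual code): however elaborate the backend math gets, the player only ever sees one of the handful of strings a writer pre-scripted.

```python
def describe_duel(skill: int, luck: int, difficulty: int) -> str:
    # The backend formula can be arbitrarily elaborate...
    margin = skill * 2 + luck - difficulty

    # ...but the player only ever sees one of the lines someone
    # pre-wrote, so the whole spectrum collapses into three outcomes.
    if margin >= 20:
        return "You disarm your opponent with a flourish."
    if margin >= 0:
        return "You scrape out a narrow, graceless victory."
    return "Your blade clatters across the floor."

print(describe_duel(skill=12, luck=3, difficulty=10))
```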

In programming, there is also the concept of "reinventing the wheel", which one should avoid as much as possible. The whole point of computers is to maximize production for the same amount of human toil, so recreating everything over and over when you can just reuse existing code is a waste of time and effort better spent advancing the project in new directions. If you create a simulationist game engine, you can reuse the code and expand upon it. To bring up the Crusader Kings I referenced heavily earlier in the thread: rather than just create a game, dump it on the market, then abandon it to start a whole new game from scratch, Paradox routinely releases new patches and DLC that have kept the game going for five years now, and they still add new content. What's more, even their new games run on the same engine, using the same improvements made for the older game to help build a more complex and engaging simulation in the next one, which will itself be given constant expansions. Choice of Games games, however, throw the baby out with the bathwater with every release. You can't reuse any of the text or time spent on one game to make the next game better, because all the text has to be unique to the game in question. I have over 1500 hours in Crusader Kings II, and I will never get that many hours out of every Choice of Games game combined, because while they may create more novel and varied stories, they won't reach the same depth and complexity of gameplay: the work-hours-in to enjoyable-game-hours-out ratio is vastly in favor of the complex simulation that adds depth to existing content rather than throwing it away.

To some extent, I understand and endorse the notion that these text-only, narrative-heavy games are an attempt to escape from the limitation of being just another white-boy space marine blasting more bugs in another sewer level. But at the same time, there's a real limitation when you have no capacity for procedural generation (at least, not of any meaningful or intelligible text), and no way to present game mechanics as anything but pre-formatted multiple-choice questions that generally devolve into punishing the player for not guessing which choice will check the stat they raised highest in the first two chapters.

And this is why I would push for something like introducing an abstract puzzle game into something like a Choice of Games title: it adds a way to put mechanics to anything besides combat. Combat would work, too, but basically, the idea is that everything from your use of logic in philosophical debates to your combat skill in barroom brawls could be handled with a single abstract puzzle mechanic, where you just plug the skills being checked in as a variable for how much of an advantage (or disadvantage) the player has. Solve the puzzle in the number of cards/pieces/moves allotted, and you do whatever you were trying to do, from seducing an heiress to swiping the diamond without being seen to punching out the big guy across the bar. In cases where multiple options are just "use a different stat to solve the same problem", that choice could even become part of the puzzle: there would be different possible solutions, but your stats, translated into cards or pieces or moves, would make some options much more viable than others. For example, in one of those games with a "speed versus strength" meter, you might be dropped into a puzzle with two goal spots, but the cards/pieces/whatevers you get make the path to one exit basically impossible if you have routinely picked one stat over the other, so a fragile speedster character might only get green pieces usable to reach the green goal, while not having enough red pieces to reach the red one.
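(A crude illustrative sketch of that speed-versus-strength case in Python, nothing here being real ChoiceScript: the stat split biases the mix of pieces dealt, so both goals exist on the board but only one is realistically reachable for a specialized build.)

```python
import random

def deal_pieces(speed: int, strength: int, hand_size: int = 8):
    # The build biases the deal: a speedster draws mostly "green"
    # (speed) pieces, a bruiser mostly "red" (strength) pieces, so
    # which goal is viable falls out of earlier stat choices.
    total = speed + strength
    return ["green" if random.random() < speed / total else "red"
            for _ in range(hand_size)]

random.seed(1)
print(deal_pieces(speed=70, strength=30))  # a green-heavy hand
print(deal_pieces(speed=20, strength=80))  # a red-heavy hand
```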

I believe the solution you’re looking for is to apply creativity. :wink:

Unless you mean “create your own game engine”, that’s not a real solution.

If you watched the videos I linked, especially the first one, you'd see a lot more of what I mean by this, since I was leaning on that a bit to fill in some of the gaps in the logic train here; Choice of Games titles are restricted by their medium, and we all know this, even if we don't openly acknowledge it.

The notion that these games are "fueled by the vast, unstoppable power of our imagination" always makes my eyes roll, since they're not fueled by the reader's imagination at all. The only thing the player's imagination is allowed to do is imagine a different hair color for the player character, or maybe a different hat. The actual story, however, leaves nothing to the imagination, because it's strictly written to follow the rails of the linear storyline the author laid out. There are "branching paths" with generally meaningless choices that lead back to the same plot. You are allowed exactly the choices the author allows you, and they have the results the author foresaw. The story plays out the way the author intends. Now, maybe you could say the game is fueled by nothing but the vast, unstoppable power of the author's imagination, but that doesn't sell copies quite as well, now does it?

In fact, in games like Dwarf Fortress or Crusader Kings or The Sims, the player has vastly greater latitude to use the "vast, unstoppable power of their imagination" (in fact, imagination is pretty much mandatory in Dwarf Fortress, considering its classic roguelike-style ASCII graphics), since the simulation doesn't create a strict linear narrative, but a series of events into which it invites the player to read meaning and create their own stories. All of these games, too, manage to be greatly expanded by continuing refinements of their simulations, adding new rules that create geometrically more permutations of possible situations, and with them greater gameplay depth.

And it's not as if authors of these Choice of Games titles have unlimited creative freedom, either. Once again, you can't make any meaningful interaction in a game like this other than picking a dialogue box. You can't play Tetris with text prompts; the medium dictates what kinds of interactions the player can have. You can't even really have meaningful conversations in a text-heavy game like this, since the actual interactions are reduced to multiple choice, where the player simply picks the option that gets them the result they want, not the one they would actually feel the need to say in the heat of the moment. (It is, for example, never a good idea to say anything angry in response to being insulted in-game, because that just makes your relationship bars drop, and higher relationship bars are always good. There's never any negative consequence for being a doormat, and players can always take a breath and keep their emotional distance to avoid making rash decisions in the heat of the moment.)

And likewise, backend-heavy math is still meaningless when I can't convey it in any sensible way to the player. Spatial simulation is so powerful a tool in the toolchest because it vastly simplifies the information in the math going on in the background. Meaningfully describing the physics in a game like Mario or Angry Birds with only text would require the player to understand calculus, and would amount to handing them physics-equation homework to solve; but by rendering the results as pixelated characters in a spatial simulation, you can just show the physics graphed out in real time, grasped through an intuitive understanding of the world. And that's just jumping. Sure, you might say that's a triviality, and that you can just write some text that says "there's a jump, do you try to jump it? Y/N" "You don't have enough jump stat, you fail." But if you can't see how that loses tremendous capacity for world-building, player interaction, meaningful engagement, or even the sheer capacity to be creative, and you dismiss it all by throwing out "just be MORE creative", then you're fundamentally not mentally engaged with this concept at all.

Spatial simulation is an incredibly powerful tool for player engagement, which is why it should be employed to maximum effect, no matter the game. The limitation that keeps games in the rut of always being about combat just begs for more, to use your favored term, creative uses of spatial simulation, representing more than simply black-and-white battle-to-the-death situations; saying you're trapped in a room with brainless killing machines excuses the lack of any compelling mechanics for trying to have a conversation with a bot by making conversation impossible to start with. If you can't have conversations in any way other than simulating outcomes through die rolls or simply having enough points in a conversation stat, then you can at least make a game that finds some meaningful abstraction through, again, some form of puzzle game, using the strength of spatial simulation to cover up the flaws in the dynamic conversationalism that computers can provide.

In this very thread, for example, I point out how the Humanity score becomes a functional restraint on the author's capacity to tell the story they want to tell. Humanity is in some ways meant to be just a reflection of how close you are to your human roots, but so many of the choices are so clearly framed as ethical problems, or else require adopting a particular philosophical outlook (one that sees the actions that raise Humanity as inherently ethical, such as avoiding Chinese batteries and running on "green" power because Environmentalism Is Good) for them to have any impact on your character at all, that it ruins the effect Humanity was supposed to have. Now there's a stigma of moral condemnation on any choice that drops Humanity, even when the action could be argued to be morally neutral or even ethical; meanwhile, showing ethical behavior can have no impact, or even a negative one, on the Humanity score. But if you're looking at your end-game scorecard, low Humanity basically always reads as being thoroughly evil.
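The structural problem fits in a few lines of illustrative Python (hypothetical numbers, not the game's actual script): once a thematically "humane" option and a morally loaded option move the same meter, the end-game scorecard can't tell a neutral trade-off from villainy.

```python
# One meter doing double duty: "closeness to human roots" and, in
# practice, the player's perceived morality on the final scorecard.
humanity = 50

def choose_battery(source: str) -> None:
    global humanity
    # The author may intend this as a neutral engineering trade-off,
    # but because it moves the same meter as genuinely ethical choices,
    # picking "cheap" is scored as if it were a moral failing.
    humanity += {"green": +5, "cheap": -5}[source]

choose_battery("cheap")
print("End-game Humanity:", humanity)  # 45, which reads as "less good"
```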

And it isn't even always by mistake, either. Particularly in games like Versus, your choices of how to deal with different situations basically always boil down to calculations based on your already-extant character build. You pick powers or tech at the start of the game, then, in every conflict, just look for the option that reinforces that choice to get the maximum "you did good" points by the end. There are many prompts for player input, but you only really made an actual choice ONCE, right at the start, when you first picked either using a gadget or using a power in the very first fight. In this way, the existence of these meters that push players to reinforce early assumptions about what character they want to play functionally undermines the meaning of any choices the author may want to introduce. All these stats condition players to think of their stats as a bunch of hammers, then look for the most nail-like choice to reinforce the character they already have.
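(In illustrative Python, not Versus's actual code: the dominant strategy these meters teach is literally a lookup; stop reading the fiction and pick whichever option matches the build locked in back in chapter one.)

```python
def best_option(options, build):
    # 'options' maps each choice's text to the stat it checks. Once
    # the build is locked in, every later "choice" is this lookup.
    return next(text for text, stat in options.items() if stat == build)

options = {
    "Hack the door panel":   "tech",
    "Melt the lock":         "power",
    "Talk the guard around": "charm",
}
print(best_option(options, build="tech"))  # -> "Hack the door panel"
```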

Things like achievements that encourage multiple playthroughs make this even worse. Now you aren't even letting players make a meaningful choice to develop their character the way they want; they have to build the specific character whose stats will trigger the one particular section of text that grants an achievement. For instance, if I want the achievement for my robot singing either of the special songs for high autonomy or high empathy, I have to go through the entire game up to that point min-maxing for those attributes, which in turn requires playing through every choice permutation so I can understand HOW to min-max my robot's stats. This dilutes the game from being a matter of making ethical choices, or of expressing my ability to custom-build a new thinking being in my image, into a number-crunching calculation, and drains away the emotional investment and impact such a narrative carries with it.

So far as "reinventing the wheel" goes: if you're suggesting that the best way for me to create a game without a large portion of my writing time becoming useless is to copy text from previous games even when they were on different subjects, or to flat-out plagiarize from other games, then the problems should be self-evident. If you're suggesting I create a computer script capable of procedurally writing novels on the fly, in response to player prompts, as well as a human can write a cohesive narrative with months of planning and years of writing experience, then you clearly haven't paid attention to the capabilities of real-life computers and procedural generation.

This, again, is why using spatial simulation in new and different ways interests me more than just writing a new narrative. The Echo Bazaar discussion I linked in the previous post covers a game that gets over the problem of it being easier to make a game where you murder a man than one where you shake his hand by simply making everything use the same mechanics. The problem is, I don't find those mechanics fun or engaging in their own right; they're just a stamina bar and an exercise in maximizing payout per unit time. Instead, I'd like to see something more like Puzzle Pirates, an MMO where a diverse array of interactions is handled with abstract puzzle games, making meaningful interactions out of things that usually can't be rendered meaningfully in a game. The first puzzle, for example, is just "bilge pumping", whose whole effect on the greater game world is keeping water out of the ship so it doesn't suffer movement penalties; but you're playing what is essentially a Bejeweled clone to do it, and that's an enjoyable pastime in its own right, to the point that people will just sit on ships bilging without any care for what's going on in the world around them. Because players with different talents fill different roles so they can play the puzzles they enjoy most, it's a game that allows a much more diverse array of functional playstyles. There isn't an actual "have a conversation and persuade someone to your point of view" minigame, because it's an MMO, but it's not hard to see how one could turn to abstract puzzle games that still make the most of spatial simulation to relay information to the player.

I'm more interested in finding ways to make procedural generation tell more engaging stories than in telling a one-off story in and of itself. To use the words of Tarn "Toady One" Adams, he "was not interested in making a crappy fantasy story, [he] want[ed] to make a crappy fantasy story generator."

Well, maybe you can’t… :wink:

If that's the case, then I'm suggesting that a lack of creativity, or a failure to apply creativity to the problems you've pointed out, may be the problem.

I’m also suggesting that no one here will “do your homework for you,” i.e., give you the answer to these problems that you’ve posed, just for the sake of proving that answers do exist.

That sort of thing is part of an author’s process.
If you want to propose an “answer,” or two, so to speak, then I invite you to create a game and share it with us, hence my original comment of:

I invite you to read my actual message, because you clearly haven’t read, addressed, or in any way shown comprehension of anything I’ve written, so I’m presuming this is just trolling at this point.

I disagree. :slight_smile:

But then again, you did come here to trash talk Choice of Games on their own forums, now didn’t you?
Turnabout isn’t fair play, then?

I’m actually not trolling you.
One of the things you’ve suggested, more than once, is the implementation of puzzles.
That idea was discussed here; started around July 2013:

One of the problems with the “points” that you’ve been making is that you’ve been stating your opinions as if they were facts.

Another is that you are trying to disguise negative criticism as neutral, scientific observation.

You’ve been trying to “wall of text” the wrong person, my friend.
I’ve read and understood every little detail of what you were saying. :wink:

Now, here’s an example of what constructive criticism looks like (in case you’ve forgotten):

  • Person A: “I can’t find a way to translate backend mathematical equations into meaningful player interactions.”
  • Person B: “Well, that’s interesting. Maybe you could approach the problem by reconsidering why you want to make the player interact with heavy math?”
  • Person A: “Well, my initial idea was because then the player would be able to (fill in the blank here).”

Now, all you have to do is fill in the blanks and reconsider why you are trying to push the limitations of technology instead of trying to work within them.

If the technology (or its inherent limitations) doesn't work for you, that doesn't mean it doesn't work for the vast majority of other people.

I invite you to find inspiration in your original purpose.
If you consider why it doesn’t work for you, then carefully examine what kinds of things you would want to experience.

If what you want to experience isn’t plausible in a given system (at least in your opinion), then what purpose is there in trying to convince the world that it’s not possible to punch through a solid concrete wall with your bare fist?

Read the above carefully, because that’s what you’re doing.
Of course it’s not possible.
You’re using the wrong tool for the job.
Try installing a door.
Doors work much better for bypassing concrete walls.

Allowing the player to interact with heavy backend math meaningfully is a problem that takes real work, effort, and creativity to solve.

Again, you won’t get free answers by pretending that what you’re talking about is an unsolvable problem and that we should therefore move on to “better” systems.

You have to do the work on your own.


Ok guys -

Please keep the conversation focused on the story-game in question and family-friendly.

Discussion is encouraged, but turning an argument personal does no one any good. It puts the other person on the defensive, and the purpose of the thread becomes overwhelmed by such attacks.

Please be considerate of the community, each other and the administrative staff that runs this community.

I left this thread open because the recent posts seemed to be a continuation of a 2-year-old discussion. However, the last couple of posts are making me reconsider that decision. :disappointed: :woman_artist:


If anything, I hope you could keep it open, if only so Darkner might have a chance to read and respond (if they are still around). In any event, I’ll try to keep this to the actual arguments made, since there actually IS substance in this post, rather than to the person making them.

However, I do have to note first that this:

is pretty much a straight-up declaration of intent to troll, as far as moderation is concerned.

I was not trying to "wall of text" someone. (And I cannot help but comment on the EXTREME irony of crying "wall of text" right after telling someone to go make their own text game, arguing that text games are capable of anything "with enough creativity", and claiming to have read everything…) I was trying to have a conversation about the creative latitude of games, and of different game engines in particular, which is not possible when the other party simply responds with a No True Scotsman argument. Responding with "try harder" or "I disagree" to everything someone says is no basis for further discussion, would only be used by someone who wants to stop all conversation, and is functionally indistinguishable from trolling.

Also untrue. I'll cover how I've already brought evidence against your other arguments in the subsequent quoted sections, but for now I'll simply point out that only now, AFTER all this, has even one of the links I provided as crucial supporting argument actually been followed. You likewise make arguments that show you don't entirely grasp the concepts I'm talking about, and even make arguments you seem to think refute what I said when they are actually my own argument.

The two problematic assumptions you are making here:

First, the aforementioned assumption that I "came here to trash talk Choice of Games". I already covered in my first post how discussion of something, even critical discussion, is not "trash talk", but more importantly, I'm not talking about Choice of Games in particular; I spent plenty of time talking about the shortcomings of gaming as an entire medium. (I could also talk about the strengths of gaming as a medium and the shortcomings of books or movies or what have you, but when you only want to talk about one game engine, that doesn't come up in the line of conversation.)

I never said that Choice of Games titles are incapable of meaningful interactions with players or of engaging narratives (the copious amount of text I have in this thread specifically about the themes of this game should be all the evidence I need to the contrary), only that the engine isn't the best tool to accomplish the objective I want to accomplish, something you even agree with, so why are you so mad?

Second, what I am stating is a combination of informed opinion and actual objective fact, as supported by the evidence I supplied. If you disagree with the factuality of those claims, you are free to attack the supporting evidence and arguments, which is how you actually refute a claim in a debate. Simply declaring things to be opinion, without offering any evidence against the evidence to the contrary, is an argument so weak it is essentially ad hominem.

I have to point out that the notion of backend equations is both something that is discussed at length in the video I still hope people will actually watch, and also what you expressly suggested:

Spending several paragraphs explaining why backend math mechanics and chatbot-style string manipulation are both inherently inferior game mechanics for engaging the player was done in direct response to that earlier statement.

Further, any more than a cursory reading of my statements would show that I was expressly arguing that avoiding forcing the player to do heavy math was one of my primary focuses, and I then went into detail as to how to accomplish those goals, showing I really did put thought into the topic rather than just declaring it "an unsolvable problem".

OK… and what argument are you trying to make, here?

It's not as if anyone claimed "nobody ever thinks about putting puzzles in these games" for this to refute. The linked thread also doesn't really touch on "puzzles" for any of the reasons I was talking about them. I used abstract spatial puzzles as a solution to the "unsolvable" problems I had spent a large portion of time discussing: a way of stretching past the limitations of a text-only game so that resolutions could be based on something other than raw stats alone, and in a way that didn't involve giving the player math homework. The puzzles discussed in that thread are primarily ones meant to work "within the limits", as it were, so it doesn't address my primary reason for mentioning puzzles, which was using them to stretch those limits. (Honestly, I get the sense you searched for some sort of proof, couldn't find it, and linked the closest thing to the topic at hand as a sort of sunk cost fallacy, just to have some kind of link at all…)

And because I apparently need to be more explicit about these things: that's not to say a text puzzle or a mystery in a novel-like game isn't fine as a way to engage players generally, just that it's not a solution to the problems I'm discussing about making the most of the tools we have within the limitations of computers as hardware.

Again, this is a fundamental misunderstanding of the entire nature of the argument I am making.

It’s not that I couldn’t make a ChoiceScript game, but that it wouldn’t be the best tool to accomplish the idea I most want to put into a game.

I am not saying "it doesn't work for me"; I am explaining how the limitations of the machine and the engine limit the types of stories that can be told. Consider that one of the major reasons I hear people give for wanting to play a Choice of Games title in the first place is that they feel limited by the extremely narrow focus of games generally, especially the nonstop barrage of brown-haired white guys with guns blasting aliens or zombies to the exclusion of all else. I'm trying to say that isn't done just because it's the only game the mass market wants, but because it's what the hardware pushes people to make.

Likewise, ChoiceScript is highly optimized for providing tightly linear narratives based around stat checks against earlier multiple choices. But that is, in itself, a straitjacket into which all subsequent choices are forced to conform, especially when you need to match player expectations for this "genre" of game. There will be backlash if you let people choose between a strong or a fast character, then make being fast mandatory while all strong characters are killed. How much of a choice is it, then, when the author declares only one choice valid?

That's even before I get into the limitations of a traditional three-act narrative structure in and of itself. The video "The Shandification of Fallout" gets pretty discursive about the nature of discursive narrative, and about what it means to break down the notion of forcing a narrative to carry a particular meaning through a linear storyline that creates a false sense of "fate" in what could otherwise be random happenstance. Non-linear narrative structures allow genuinely different Aesops to be drawn from the same narrative, even if the mechanics themselves generate a narrative of sorts and encode the implicit biases of the creator.

I could, in fact, use the entirety of The Stanley Parable as supporting evidence at this point. The entire game's metanarrative is about how player creativity and freedom are inherently at odds with strictly regulated linear narrative, and how allowances for one mean weakening the other, as expressed by having a narrator in open conflict with the player. At one point, the narrator even concedes that it might be more fun to play an open-world creative game like Minecraft, before quickly taking it back, because giving the player full creative freedom to make up their own narrative means the Narrator can no longer dictate the story he wants the player to have.

Likewise, the Extra Credits video on the Geth choice in Mass Effect 2 goes into how these mechanical frameworks can actively undermine the impact of the choices players make. If I'm playing a BioWare game, I know I'll be punished for not consistently picking choices along their morality axis of the week. Hence, I have no reason to think about the moral implications of the decisions revolving around the genophage during Mordin's loyalty mission, even though it would otherwise be one of the more compelling moral quandaries the game throws at you, because my actual goal as a player is not to make the best choice for all involved, but simply to maximize my Paragon or Renegade score; the game even helpfully color-codes my choices so I never have to think about them at all. This, in turn, meant my character dramatically flip-flopped within a single conversation, such that the "consistent" (so far as alignment was concerned) Paragon position was apparently that the genophage was good and should have happened, but that actually deploying it was awful and Mordin should feel really, really guilty about it, despite its being justified and for the greater good of the universe. Yay for games rewarding flagrant hypocrisy!

And this is very much present in the game this thread is about as well, as I discussed at length with regard to the Humanity system as implemented in this game, which functionally works against the intent of the author and undermines some of the message he intended to send to readers.

Please take the time to consider that last paragraph in light of the previous paragraphs you wrote, and recognize the mutual exclusivity of the two arguments you just made.

All of that about "using the wrong tool for the job" was a significant portion of the argument I was making (which doesn't do much for the credibility of claims to have read and comprehended it all), so, well, I'm glad we're in agreement for at least this hot second.

Then, just after saying that trying to make a system do something it wasn't designed to do is like trying to punch through a wall, you say that trying to "install a door" is just "pretending it's an unsolvable problem", just laziness and wanting a "better system" than good old-fashioned wall-punching.

Do you see how this doesn’t really make the argument you seem to think it does?

This also plays directly into the argument I made about "reinventing the wheel". Rather than a metaphor about walls and doors, I'd make one about cars and refrigerators: why spend $100,000 trying to remake your refrigerator into a car, when you can buy a car for less money and far less effort, and get a product without the serious structural weaknesses a conversion risks, compared to a car designed from the start to be a car?

Even if I could make some sort of script that actually runs in ChoiceScript to turn it into a visual novel, why would I not just use something like RenPy, which not only does that out of the box but was optimized to do so? (Playing Highlands, Deep Waters, the image on the title page seems to lag the game for a second or two even though it's a single static image, implying serious optimization issues with ChoiceScript loading images, at least of that file type, and making it an obviously inferior choice for anything graphics-heavy, even if it technically can be done. Trying to play Tetris when the screen lags a second for every update of the gamestate would be a terrible experience.)

@Wraith_Magus, you’re making a number of really interesting points… that deserve their own thread on the limitations of CS as a platform rather than continuing to sit on a Choice of Robots thread, and will probably also read better when not cluttered up with arguments over who is and isn’t trolling who.


The way a companion bot is portrayed in the story/game is not indicative of the writer's way of thinking. It's just a limitation of the way it's presented. An AI that is programmed to love COULD love, but that wording would probably end up with a Yandere. Don't program to love, just program Love.