Social, Ethical, and Political Statements of CoR

@Wraith_Magus
First of all [Obligatory 2cat joke about criticism threads like this multiplying in number with great speed].

Secondly, I’d love to see you go through any of my work, if nothing else because you’ll catch things which my own implicit biases simply let pass into prose without comment.

Both Mecha Ace and The Hero of Kendrickstone are relatively light when it comes to touching directly on politics or ethics. The former is intended to be my own interpretation of the sensibilities of traditional mecha anime, as well as western military-science fiction. The latter I put together around the idea of the high fantasy adventurer, as a world built around a character concept, as opposed to a character concept existing within a world. I’d certainly welcome any analysis or criticism on either (or Sabres of Infinity, for that matter), especially considering the fact that criticism of the former has helped me develop the story of the latter (for example, both Mecha Ace and The Hero of Kendrickstone touch on the idea of torture, but the latter story’s portrayal was shaped by criticism of the former story’s look at the subject).

There’s no need to force yourself to go through anything of mine, mind you.

@Cataphrak
Well, too late, I already bought and started playing Mecha Ace…

To be honest, my first reaction is a bit of unfamiliarity. I’m again not that used to Gundam, as my loyalties always lay more with Battletech, and a little bit of Macross. Battletech started as a Macross fan-game with the serial numbers filed off, but since it was a hex-based strategy game that basically played like a game about modern tank platoons, it got the attention of American and European (Germany, in particular, loved it) wargamers of a more traditional stripe, and so the game morphed more and more into a higher-tech version of a modern tactical combat game. Late-era rules supplements were adding things that were basically the Longbow system from an AH-64 Apache helicopter to VTOL units, or adding targeting options to the missiles, such that you could launch off someone else’s spotting or fire homing beacon weapons. Concepts like melee weapons ceased existing. The designs also became more expressly boxy, clunky, and heavy-looking.

Anyway, the backstory stuff is filled with standard-order explanations for the required secondary superpowers that make mecha not physically impossible, and there’s at least some reasoning for why it’s a humanoid war machine in space when a starfighter like an X-Wing would probably carry the same weapons with more maneuverability, a slimmer profile, and less mass. But again, that’s the sort of stuff you have to check at the door to enter the genre. (Plus, why pilot with a human? Cockpits and air recyclers are bulky and heavy! Computers with AIs are light. In fact, I almost started writing out a large section in the war chapter of CoR just on how a fully-automated submarine made of plastics, ceramics, or some other non-magnetically-detectable material would be a far more effective weapons platform than any giant mecha…)

What strikes me as strange is the insistence upon neo-feudalism. There’s no particular reasoning behind it happening, other than that powerful politicians declared it was time to go back to feudalism. It’s just a background I can reach from a button on the stats page, so I guess there might be more details later, though. Feudalism is historically a result of the fracturing of great empires into a far more decentralized method of governing subordinates who were nearly impossible to control directly, and who were therefore given wide latitude for autonomous governance. This game seems to portray it more as a direct, low-autonomy system. (And now I want to play some more Crusader Kings…)

Anyway, more than this (actually, probably even this much) is getting far enough off-topic to deserve a thread in a different sub-forum.

1 Like

@Wraith_Magus
My direct inspirations for the Empire of Humanity Ascendant in Mecha Ace were actually more based on the Thirteen Colonies at the end of the colonial period and the First Empire of the French under Napoleon I. It’s a combination of a vast system of client states all tied to an imperial metropole by the Imperial Centre’s control of certain very expensive infrastructure vital to interstellar governance (warships capable of FTL and the Le Guin Ansibles), all administered by a theoretically merit-based aristocracy, which, I’d note, isn’t necessarily feudal: there’s no indication that great aristocratic families like the Steeles or the Hawkins own anything outside of their ancestral estates.

The problem comes when power is consolidated in two poles: the imperial aristocracy, nearer the centres of power and the ability to gain “merit” through military service in the core worlds; and the planetary assemblies and megacorps in the consequently less regulated periphery. The end result is that the system breaks down when the Imperial aristocracy is too distanced from affairs on the periphery to govern directly, leaving the assemblies and the megacorps to deal with the problems of the frontier without substantial help from the core, while still being hemmed in by enforced infrastructural dependency, legislative shackles like the moratorium on colonisation, and the staunch refusal of the Imperial Court to answer calls for redress.

But yeah, I’ve probably rambled on about Mecha Ace too long for a CoR thread…

Thanks for the thorough deconstruction of my game! I don’t usually post, but the @kgold in this thread made it to my inbox, summoning me from the depths.

Many of your comments are valid criticisms, and have somewhat “invalid” or uninteresting responses to them. I will spare you the excuses; let me just give one example of how I was operating in writing the game, though. I really, really don’t think sentient robots will be built in the next hundred years that have human-level intelligence. This is the subject I know the most about in the game, and it’s a deliberate con to set the game just a few years in the future, because I thought this would be a time period (and age of the protagonist during that time period) that my target audience would most relate to. I could have set the game in the far future, but it would have made the changes wrought by robots far less salient, and it would have brought up all kinds of subjects about the far future that would need explaining, reducing the verbiage I could spend on robots and the MC’s choices about them.

I know that wasn’t one of your major concerns, but that is the way I was thinking in the game: sometimes blatantly ignoring reality in the service of creating a sequence of interesting choices that would add up to an eventful life. If some blatant exaggeration led to an interesting dilemma, I probably went for it. You can see how I think we might end up talking past each other on some of this stuff. But you do have some great suggestions; there absolutely should have been more NPC-robot interaction scenes later in the game.

I did want to correct the notion of a Humanity meter being a morality meter, though. For the record, it really is meant to be a humanity meter and not a good/evil meter. Evil is just a subset of what reduces your humanity in the game – so does asocial behavior, or thinking like a robot, or simple bureaucracy, even though I wouldn’t say any of those things is evil in itself. Mistreating humanoid robots may or may not be ethical, but it reduces humanity on the theory that stifling your innate reluctance to harm humanlike things is probably bad for you regardless. I would say the humanity meter does do some advocating for a particular way of thinking about life (“maybe you shouldn’t be a crazy loner???”) but I wouldn’t call it an ethical stance. Even the humanity meter’s ties to the military are a statement more about dehumanizing military culture than the ethics of warfare. Regardless of whether it is right to watch someone remotely killed by a drone without sadness – after all, you’re not the one pulling the trigger, right? – I would say it’s dehumanizing.

That’s the context for the protagonist’s relationship to the military in the game – it’s not particularly about the ethics of warfare generally or the goodness of government, but the struggle of a single individual whose dreams and very existence could get coopted by larger forces. That’s also an opportunity to ask the player some tough questions about how comfortable he or she is with varying levels of direct involvement with the military – would you kill? would you make a weapon? would you make the robot repairing the weapon? – but the goal was first to ask the tough questions, and then to present some consequences; not necessarily judgments, but consequences. I can see how the Humanity hits would come across as ethical judgments in this arena, but the intention was more to signal the personal deadening of the soul that the MC experiences as a result of robot baby technology being used to kill. Maybe in one player’s reading of the story, that’s simply a coming of age in which the protagonist recognizes that it’s time to put away the toys and make robots that serve the country. I’m okay with that. I don’t think it’s too controversial to claim that those involved with the military have to harden their hearts a little sometimes to do their jobs.

At any rate, I mostly wanted to respond to the extent that an Author’s Interpretation of the Humanity Meter might help people’s enjoyment of the game, because seeing it as a simple good/evil meter is just not as interesting. (I admit that its semantics were left vague.) CoR is about robots changing the world, sure, but it’s also about a single individual’s loss of his or her own fullness of life in the service of that change. That tradeoff of “greatness” versus personal quality of life is something that I hope people find interesting about the game.

Totally not fair response to your valid criticisms! So, carry on.

5 Likes

There is nothing innately humane about humanity. To even connect the two concepts is a cultural conceit imho. While many humans are humane, many others are not, and the ones who are not are no less human biologically, even if culturally we like to think of them as inhuman.

I disagree that military culture is necessarily “dehumanizing”. War, on the other hand, is. Ideals very rarely survive extended contact with the harsh reality of warfare. People will do terrible things so that they and those they love will survive. Others will do terrible things just because they can. Sometimes the terrible things are necessary, and sometimes they’re not. Sometimes they’re simply tragic mistakes. The worst part is that the longer a war goes on, and the more deeply people are affected by it, the fewer people there will be who still care enough to make those distinctions. It’s easy to sit back and critique the humanity of others in your group (i.e. the military) when you’re far away from the fighting and nobody you care about has been killed/maimed/raped. Otherwise, loyalty to the group keeping ourselves and our loved ones alive tends to override compassion towards those who belong to groups trying to kill us, or who have been thoroughly demonized. This is why wars tend to be so sectarian. Stereotyping fellow human beings doesn’t seem so bad when it significantly decreases the probability that you or your loved ones will be blown up by an explosive…

4 Likes

@kgold

Thanks for taking the time to reply.

I still need to write up the thread on the economics aspect of this game, but I do, at least, want to suggest reading Debt: The First 5000 Years, a book on economic anthropology, especially since the “About” page says you’re working on a game about Alexandria. Alexandria, depending upon the exact era, sits at the juxtaposition of pre-coinage and post-coinage societies, so the question of what, precisely, the effect of coinage was upon society would be something worth exploring in the game. The short of it is that coinage was a creation of professional militaries to enable the splitting of loot, a decisive change from societies that accumulated wealth in palaces and temples, with all the violence and widespread slavery that entailed.

In any event, not everything I’m talking about is a criticism, per se. (Although some of my later sections are something of a criticism, I tried to order them so the more critical were at the bottom.) It’s completely legitimate to make a game that holds pacifism as the highest ideal, but I’m more interested in pointing out that it gives the strong impression of a pacifistic message, because it’s very difficult to go through the military path without everyone hating and leaving you, while pacifism is easy and has no apparent negative consequence. (Well, I do think you missed a good opportunity for Jainism to be a player religion. Just add in hydroponically grown garlic and potatoes to avoid harming surface life, and you’ve got a pretty good practitioner. Conquering India through pacifism is always a fun challenge in Crusader Kings 2, so I have a bias towards that idea that most wouldn’t have, though…) Looking through threads here and on Steam, there is a completely outsized number of people overtly saying they go for a totally pacifistic run through the game, while militant runs are very rare, or done simply for completionism. The tone of the game makes the military path seem far less legitimate as a playstyle. (The Alaska Rebellion involves your robot asking you questions about gloating about your plans to the captured Mark like you’re Dr. Evil, and you get the option to rule Alaska as a “Whimsical Tyrant”. It’s basically like joining the Dark Brotherhood in Elder Scrolls - it’s comedic sociopathy, but the game pulls no punches telling you that you’re a heartless murderer.)

As for humanity, I probably should have put more emphasis on this point, but I’m not talking about what you think, but what impression this game gives. You may not have meant for humanity to be a morality meter, but it very strongly gives the impression of one. The measurement of how good or bad a person you are necessarily comes through the text, and almost entirely comes through the speech of other characters. These are characters who almost universally react positively to you when you have high humanity or take actions that raise humanity, and negatively when you’re at low humanity. Many of the choices that drop humanity are obviously evil actions (ordering your robot to assassinate people who don’t like you), and humanity-raising actions are often ones that involve text overtly telling you how good a person you are for giving to charity or helping people get jobs. If you take the high-humanity route of sticking by human employees, they celebrate you and you get thank-you notes in your old age. If you turn to robot labor, you coldly throw away the Christmas card of an employee you just fired, saying “it was just business”.

Because of all that, while there may not be a perfect mapping, there’s certainly a good fit if you predict that the “moral” actions will also be the humanity-raising actions. My point is more that this is a conclusion players are likely to draw, even where it’s not obviously unethical to think of robots as deserving of human rights, since the distinction between humanity and morality is never made in the game’s text itself. The feedback the game gives is that low humanity means everyone but your (mostly) always-loyal robot hates you, and the game overtly tells you how much you failed to be a good person. When Mark meets you, his reaction to your humanity meter is expressed in how many “people you had to gut to get here”. (Even if the reason you lost humanity was a dream of wanting the singularity to come, and pining for Elly/Eiji…)

In a similar vein, part of what I’m trying to say is that it’s rather hard for one player to interpret the military part of the story one way and another player to interpret it another way, at least without the players bringing radically different assumptions into the game. The game isn’t written in Hemingway-esque minimalism where you have to read between the lines to figure out the subtext; the terms used in the game’s narrative are not neutral or without judgement. The Statue of Liberty crushes the weak underfoot for not being worthy of life; she bites the head off the character probably treated as the most relentlessly ethical and harmless character in the game (which was, incidentally, my rationalization for throwing the coin in Irons’s face and starting the Alaska Rebellion in the one game where I took that distasteful route…); she demands you sacrifice for her constantly, then destroy your own creations because, precisely because you followed her orders, they have become evil machines of nothing but hate and destruction. After that, the best that can be mustered in her defense is a statement that “it was necessary”, when the game shows absolutely nothing necessary about the Nazi-style concentration camps designed to kill your friends, including you if you overtly help them. That any of it was necessary is a token, unsubstantiated claim directly contradicted by every other shred of evidence in the game. There are no terrorists that America catches to justify the security state. There aren’t any terrorists at all in this game; they aren’t even mentioned. As Mark asks, who are you defending yourself from if you’re at peace?

Outside of grant funding for your education and robot, America is never shown to do anything positive, and the game basically champions fleeing to Canada, which is portrayed solely as a land of tolerance and peace, bereft of any of that meddlesome politics. (They don’t have political parties in Canada, right?) For that matter, education is shown to be largely unnecessary. You never needed help making your robot beyond having enough budget, tools, and free time. In the “everything is made better with robots” Grace ending, colleges are largely closed down because there’s “no point in education if you can’t get a job as a manager”. Apparently, there are no jobs and no goals that a college education could otherwise prepare you for besides working as a middle manager…

If you want to mechanically reinforce a notion of greatness versus humanity, then it may have been better to actually show that. Make humanity a score like wealth and fame, and then have an additional (slightly redundant) red/blue percentage bar for “greatness versus humanity”, where humanity’s share of the bar drops as fame and fortune increase. That visually reinforces the metaphor you’re trying to impress upon the players. You would also want to make fame actually matter in the game. There are a couple of options for fame at the expense of humanity in chapter 3, but they’re found nowhere else, and I can’t recognize any effect of fame in the game at all. (Plus, donating everything to charity is +4 fame and a massive humanity boost, as well as the single most positively reinforced decision you can make in the game, so it’s not entirely consistent…)
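For what it’s worth, ChoiceScript already supports exactly that sort of red/blue bar through an opposed_pair stat chart, so it wouldn’t be a big technical lift. A minimal sketch, assuming stat variables named wealth, fame, and humanity (I’m guessing at CoR’s actual variable names):

```
*comment in choicescript_stats: one bar where Humanity's share visibly
*comment shrinks as its opposite pole, Greatness, grows
*stat_chart
  percent wealth Wealth
  percent fame Fame
  opposed_pair humanity
    Humanity
    Greatness
```

Since opposed_pair just displays humanity and 100 minus humanity on the same bar, you’d still need the actual *set lines draining humanity as fame and fortune rise for the metaphor to hold.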

There’s also the issue that “Humanity” is a heavily loaded word in the context of how a player is likely to perceive it. “Humanity” and associated words like “humane” carry strong connotations of mercy, empathy, and other aspects generally considered positive or descriptive of a compassionate and “upstanding” individual. Likewise, the antonym, “Inhumanity”, implied by a low Humanity score, has almost exclusively negative connotations. The end result is that while Humanity may not have been meant as a moral judgment, the vast majority of people will see the mechanic as such, simply because of the word used to describe it.

The end result being that it’s not too much of a stretch to assume that a dichotomy commonly used as a moral judgment (possession or lack of humanity) will be parsed as one.

@kgold
Actually, as an additional note on the previous post: I remember that in the scene where Lao Bi’s death (if I remember that name correctly) is reported, you have an option that raises humanity because you say you grieve over the death of this one person. By comparison, hiding the coin and not celebrating war, or simply being a CEO “making you cranky”, are both ways to drop humanity.

Why does fully acknowledging that you are responsible for death, instead of trying to protect your psyche, result in healing your psyche? Why is grieving in this one way that pulls you apart from the rest of the world different from the “hide the coin” way, in which you grieve and pull yourself away from the world?

I remember that in the first edition of the New World of Darkness books, the “Morality” meter was exceptionally deontological. (Using violence is a sin, even if it’s because you’re a cop using less-than-lethal force at great risk to yourself to stop a mass murderer from killing someone else, while doing nothing is Not Your Problem and no damage to your morality… There’s a reason this system was overhauled in version 2…) Nevertheless, I did like some elements of its implementation in the Changeling (read: you play as fairy-tale creatures, of a Brothers Grimm variety) line, where the meter, called “Clarity,” functioned more as a Sanity meter than a Morality meter. Due to the way it worked, your Clarity could take a hit for things outside both your own control and your reasonable ability to predict. Since you’re basically permanently trying to re-establish a human routine to stave off insanity, even mere “unexpected life changes” can trigger threats to Clarity, including something as mundane as simply losing your job or having to move to a new house, as a human routine and regular human contact are necessary to maintain your Clarity. (Like John Nash, as portrayed in A Beautiful Mind, you constantly see visions, and need to have a normal human around to ask, “Is that really a glowing moth trapped in that street light, or am I just hallucinating over a flickering light bulb again?”)

The net result is that Clarity is not dealt with in quite the same way as morality, and it’s not as much a judgement call on you as a character or player. It’s more your ability to prioritize. Sometimes, you just have to go deep into the Hedge for a greater cause than your own sanity. More importantly, in tone, working Clarity back up is also not necessarily an atonement for sin, but a struggle to reconnect. Family, in particular (whether this means parents, siblings, lovers, or children depends upon the character), is the ultimate touchstone of a changeling’s identity, and the threatened ties they have to family are their greatest vulnerability.

If you wanted to make the game a matter of “greatness versus goodness”, then make nearly every act involving work with robots, or a company, or anything that doesn’t appreciate your family/lover a hit to humanity, with the alternative option of just calling mom or taking some time for a night out with your spouse.

1 Like

The troubles with the “humanity” slider outlined here are probably a huge reason why the official Choice of Games how-to blog posts so strongly discourage a good-evil or morality slider, or even anything that resembles one. First of all, puzzling out the fuzzy boundaries of “good” and “bad”, even in an insulated, fictional, uninteractive world, is something philosophers have been wrestling with since the days of Socrates. And you, the author, want to tackle that and, what’s more, put NUMBERS on it?! The political and ethical implications of such a system, once implemented, could easily spiral out of even an attentive author’s accounting. (Perhaps they lay their unconscious, unexamined, or societal biases bare in the process — if you attempted to build a morality system that was “neutral”, you’d probably run smack-dab into this.) Just setting the initial slider at 0, 100, or somewhere in between could be construed against (or with) the author’s intent as a statement on the inherent goodness of mankind or somesuch.

The half-baked notion occurs to me that, if you were very careful — no, very INTENTIONAL — you could deliberately toy with an in-game morality tracker in some interesting ways. Use it as a fairly explicit soapbox for what you, the author, believe is the moral path. (Maybe finishing the game with 100% morality is impossible, because reality is compromises.) Get overzealous, knock them down a couple points for cursing. Use it, devilishly, against the grain: constantly remind the player of their morality score, but use a deliberately perverted morality where, say, charity goes unvalued in the cut-throat world of high finance or in the more-literally-cut-throat world of Ancient Sparta, and acts of ruthlessness are commendable. See if you can deliberately guide the player blindly down what you consider an immoral path in this way, have the whole game coach them to a certain way of seeing things — then rip the rug out from underneath them in the finale! (How will anyone know the nasty trick you pulled if you don’t gloat?) Run the player through an Ultima IV-style morality test at the top of the game, then, having made them weight their own morality system, see how well they can measure up to their own metric! (You’d have to use a bunch of hidden stats for individual qualities — piety, honor, humility, etc. — that would then be weighted into one Morality stat.) After all, what good is a morality system if you’re not going to use it to moralize?
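If I were to sketch that Ultima IV idea in ChoiceScript terms, it might look something like this (every variable name and weight here is invented purely for illustration):

```
*comment hidden virtues, set by the opening morality quiz
*create piety 50
*create honor 50
*create humility 50
*create morality 50

*comment suppose the quiz showed the player prizes piety most,
*comment so it counts double in their self-chosen metric
*create weight_piety 2
*create weight_honor 1
*create weight_humility 1

*comment subroutine: recompute the single visible Morality stat
*comment as a weighted average of the hidden qualities
*label update_morality
*set morality (((piety * weight_piety) + ((honor * weight_honor) + (humility * weight_humility))) / ((weight_piety + weight_honor) + weight_humility))
*return
```

Any scene that touches a virtue then just adjusts it and does a *gosub update_morality, and the finale can measure the player’s conduct against the metric they built for themselves.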

I digress. I digressed very much — hell, I wasn’t even talking about the humanity slider or even the actual game! I’m a little ashamed that, having played Choice of Robots maybe a dozen times, none of the things @Wraith_Magus said even slightly occurred to me, but now that they’ve been pointed out, they’re blindingly obvious. I guess I just accepted the game as a diversion only — a mistake I’ll try not to make again. (This sort of reaction is the intention of good criticism, no?) Knowing there are readers out there who look this closely at the things they enjoy makes me want to start working on my own game again. I love this thread, and I really, really hope you eventually outline your thoughts on the economic messages; that’d be like catnip to me.

1 Like

@Chwoka
Actually, that kind of reminds me of my impressions of how “Growth” works in Versus.

Easily three-quarters of Versus’s choices are “What do you think about that?” choices, and seemingly have no impact on anything but Growth. Which choices impact Growth most? Largely, the ones that show you are understanding of other people’s points of view. So, you can look upon the Binarian/Rutonian conflict however you want, but picking the right answer, which the game’s dialogue makes abundantly clear if you gave OtherBoard half a chance, gives you much more Growth.

This is made even more narratively jarring in the Breeze romance scene, where Breeze suddenly says that ze is uncomfortable about revealing that the character who is always referred to with the pronouns developed specifically for intersex or gender-fluid people, even by other characters in speech Breeze can hear, is actually (gasp) intersex, exactly as had been identified to you from the start. “On my planet, people are only supposed to be one gender or the other.” You then get maximum Growth for disgorging a line that goes something like, “On my planet, we respect any form of gender or sexual orientation equally, because we are a noble people that value respect. Let us strive to dialogue until we reach a mutually respectful consensus position on how our relationship should proceed from here.” This takes place, I’ll point out, in what is supposed to be a steamy scene after fellatio/cunnilingus on the PC, while the PC is in the process of trying to go down on Breeze, but it’s a line that seems written specifically to demand it be followed up with “Thanks, [PC Name], now I know!” “And Knowing Is Half The Battle – G.I. JOE!”

Throwing a PSA into a sex scene ruins the tone of a supposedly romantic scene, and seemingly exists solely to self-congratulate the author and any player who goes down that route on how tolerant they are. Certainly, the only surprise in that scene was that anyone thought Breeze’s intersex nature was supposed to be hidden, being as “androgynous” was literally one of the first (and only) words used to describe Breeze’s looks, and it’s doubtful anyone opposed to intersex characters would have flirted with Breeze that much, anyway.

But anyway, this is getting a little too particular. The point is, people generally only notice that emotionally or morally charged words are being used when those words are at odds with their own beliefs. Unless you go with that iceberg writing style mentioned earlier, which would probably be unpopular for its lack of colorful description, it is an inevitability that the moral biases of the author will color the nature of the work. In this game, there is little hope for government, and no way for government to fully represent the people except to have benevolent robot gods do it for them, while pacifism is a fully legitimate playstyle that not only has no negative consequence, but can be demonstrably, objectively better in every way than patriotic acceptance of the necessity of war. The result is that you see people overwhelmingly try to play pacifist runs in this game when they likely wouldn’t think twice about blood and guts in most other games. Rather than making 100% morality impossible, I actually find that most authors subconsciously not only make 100% morality possible, but overtly reward it in every way they can as a way of “training” players in how to play their game. This is, after all, a major function of gaming: just as Mario teaches its players how to measure their jumps, control momentum, and avoid obstacles by reflex, most games largely amount to a set of skills or thought patterns that lead to in-game rewards if used properly, or in-game punishments if used poorly or not at all.

Compare this to, say, Choice of the Dragon, where you’re explicitly not allowed to take “moral” choices. You treat “lesser beings” in largely one of two ways: murder and consumption for food and profit, or letting them live expressly because you are too contemptuous of their existence to bother even eating them for a snack. The “honor” axis is the only thing that even remotely looks like morality, but even that requires you to make a commitment or form a relationship whereby your character feels something is owed before it comes into play. (I.e. whether you release the prince(ss) or eat them anyway after getting the ransom.) Most of the time, the game overtly revels in how amoral it is, although even there, you get a chance to talk about sexism in strictly moral terms.

It’s for that reason I’m not entirely sure utter moral neutrality is either possible or necessarily even desirable. After all, it’s a moral judgement that murder for pleasure is wrong, but it’s one that few would argue against holding. As such, I won’t fault the presence of some sort of moralistic statistic so much as the lack of awareness that it’s being used in such a way. Again, I’d personally rather see a game where the player is asked to state their own moral priorities, and to face judgement from characters who represent different moral philosophies, without necessarily giving some absolute declaration that one is right or wrong, but simply encouraging the consideration of what those moral viewpoints actually imply. Like the “Keep your politics out of my videogames” video I linked earlier, having an earnest discussion about what WILL be in your game regardless is more interesting than trying to pretend you were spawned ex nihilo five minutes ago and have no opinions about anything. If, for example, you have an American political argument, have both a Democrat and a Republican argue about what they value and why, and in what order their priorities are stacked, without presenting it in a way that judges them negatively as people. Or, at the very least, recognize that you’re giving a radical soapbox speech whenever nobody even recognizes you have a gender. (Or at least notices when you don’t…)

Ok, now that I’m sitting down at a laptop and not trying to type from my phone!

I tend to believe that while our own biases find their way into how we choose to interact with stories – I mean, I can’t bring myself to play the dark side path in any Star Wars game simply because I personally disagree – we’re all capable of writing things, and making choices in games, that we personally disagree with. It’s a stretch to say, based on a game where everything is over-exaggerated and over-simplified, that somebody thinks that’s the reality of things.

I like to believe that all the people who write dystopian fiction don’t really think the world will turn out that way! (:

The ending I liked most was when I built robots that helped bio-companies, helped make flying cars, and designed companion bots. I also worked with the military (although they still arrested me for treason). I made a whole series of “morally good” choices and got a happy ending. If you choose to kill people willy-nilly, you tend to get endings that aren’t as happy; somebody always dies/gets tortured. Independent of the humanity meter, I made choices in line with Judeo-Christian values and everything worked out in the end.

If there was no “bad guy” in China, the whole game falls apart IMO. The robot lab is funded by the DoD, and they cut back/eliminate funding during peacetime. At that point the game would realistically have to force you down the venture capitalist path – and with no war going on, why would you build a warbot? That would raise all sorts of crazy red flags and would mean the military branch would have to go. More than that, if the world is peaceful, why would you be focused on creating a robot collective mind? My impression was that you did it to ensure peace, but if the world is already at peace – why?

Having a bad guy ties together all the loose threads to give at least a veneer of explanation.

I mean, I probably know more about econ than the average person, but I just figured humanity transitioned to a post-work economy where skilled labor was swapped for services, etc. The companies aren’t selling the products for money so much as the labor model shifted to a system where labor was converted to credits on an online marketplace, which could then be used as a form of currency outside of money.

Greece is kinda doing it, and there’s a great long form article in the Atlantic about it. [andddd I can’t link articles at all for some reason]

Granted, there’s some weird discussion in-game about currency conversion and speculation that implies money is still the major form of exchange – but my impression was either massive deflation, where all prices cratered but people who had any sort of savings did fine, or wonky stagflation, where prices dropped but monetary values held steady, meaning that, again, people with savings did OK. I mean, there’s no attempt at ever addressing what it’s like to be poor in this world – so either outcome suits the statements given. They’re probably just general assessments about the middle class/upper class, which is what you belong to.

If you end up unemployed, the economic prospects hint that the situation is worse; but, I mean, how many people are going to seriously question the idea that robots make the economy better?

Again, I just assumed a post-work economy where people shifted into service work or piecemeal work that robots can’t do. Child care. Small-scale farming. Creative endeavors, etc. Those aren’t things you need a degree for – and my thought was that the author was pointing out that it doesn’t take a college education to do the types of work only humans can do. (It does help, but is it truly necessary? No.) Is it a hyperbolic statement? Sure, but the thinking behind it is more that robots took over the technical fields where a degree helped – and so college became a solely humanistic endeavor, and people stopped being willing to pay the fees.

[Even in a worst-case, total-apocalypse collapse of society, I think religious colleges will survive, but that’s neither here nor there.]

If you bring your robot to the Ren Faire, she interacts with it, sorta. Idk. I wrote that off as people thinking it’s weird to talk to a robot at the early points in the story. If you build companion bots, then at the dinner everybody talks to each other. I think it’s more about outsiders perceiving it as weird to talk to robots until they achieve “humanity” than anything else.

Moore’s Law on steroids was what I figured. If you can build a robot like that in the near future, somehow we cracked this rule and bent the curve even higher. Especially if you can fit a brain-sized program on your phone, there’s clearly rapid advancement in chips – and really, chips are what determine how advanced a robot is. That, and new technology tends to have a really high advancement rate – the leap in Macs, for example. From boxy ugly things to sleek flat screens with thousands of times more computing power in 10 years? It’s not a stretch to see this pace speeding up.

I wanted Pacific Rim Robots more than animalbots. ):

I like Star Wars, where killing/hurting people was always dark side points and sparing them was light side. That’s really all I like to see in morality meters: one aspect highlighted, rather than me having to keep track of several that all boil down to much the same thing.

Letting people choose their own would be a nightmare to code. If you did it, you’d have to offer a small selection, and even then you’d be writing superfluous reactions just to cover all the possible responses, and that would get really clunky really fast.
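A rough ChoiceScript sketch of what even the “small selection” version looks like (the codes and reaction lines are all made up, just to show the shape of it):

```
*comment the player picks one moral yardstick up front
*create moral_code "mercy"

How do you measure a life well lived?
*choice
  #By mercy: the strong owe protection to the weak.
    *set moral_code "mercy"
    *goto verdict
  #By honor: a promise kept, whatever the cost.
    *set moral_code "honor"
    *goto verdict
  #By ambition: greatness forgives a great deal.
    *set moral_code "ambition"
    *goto verdict

*comment every judged moment now needs one reaction line per code
*label verdict
*if moral_code = "mercy"
  Sparing the prisoner sits well with you.
  *finish
*if moral_code = "honor"
  You promised him a trial, and he will get one.
  *finish
Letting him live costs you nothing, and dead men pay no ransoms.
*finish
```

Three codes means three reactions at every judged moment; the writing load scales with every code you add, which is exactly the clunkiness in question.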

@kgold

This is basically how I saw it, for the record. The first time through, I wasn’t totally sure what it was measuring, but after seeing how it changed with different endings, I “got it” that it was a reflection of how much like a robot you were acting. (:

Which is totally possible! I’m living in a part of the US where gun culture and military service are a Big Deal and widely accepted and encouraged. However, I’m originally from a place in the US where people tend to look down on gun ownership and people who are overly patriotic. In addition to those, I spent some time in yet another place where gun culture is big but military culture is not.

I can absolutely see people from those three places reading the same scenes totally differently. I can see somebody from a family where gun culture and military service are big shrugging off a lot of the negative talk about both – /because it’s what they do frequently/ the friend I have in mind is used to people telling him he’s a bad guy for owning weapons/loving the military. To him, those criticisms would read as “stuff people normally say”. I’m 100% certain that if I were to show some of my friends this game, some would totally take over Alaska while grinning and others would end up building the robot singularity – and both would find enjoyment /because they see it differently/.

The only time I avoided that was when I just told my robot to auto-fill. I do think it tends to be a big net, and you should be able to save your friend AND avoid getting caught under a broader set of circumstances. (This is why I was looking at the code, to see if Juliet will actually intervene if your relationship stat is high enough.) I tend to think it was probably realistic in a lot of ways, tho. :confused: But I mean, the politics of detainment and torture are murky at best.

Ok, I know I’ve done this ending, but I’m not remembering any of this. My recollection is that robots convince everybody war is a bad idea, and everybody gets flying cars and lives happily ever after, except no more democracy. Did you get this as a result of doing two ending paths? Or from having picked a specific starting vision? I’m not recalling any ending where nobody goes to college anymore being a theme at all.

[quote]There’s also the issue that “Humanity” is a heavily loaded word in the context of how a player is likely to perceive it.[/quote]

@Cataphrak

My master’s thesis is slated to be on mistranslation of common words in online education, so I deal with this sort of thing all the time. The solution is probably to change “Humanity” to the word “Human”, and have the opposite pole be “Robot”. But that presents its own set of issues. (Although if you choose to become a robot, it would make so much more sense that your humanity goes to 0. I didn’t like that the first time it happened.) I don’t hate it how it is, but I also only picked up on it after two times through. It’s hard to find truly neutral words when describing something this complex.

I tend to be somewhat ethnocentric and assume that all morality meters are tied to a loose version of Christian values – things like don’t kill, love thy neighbor, etc. If the morality meter does weird things that go against that assumption, I’d do one of two things. The first is to evaluate, based on context, what it’s doing. In Last Monster Master, for example, there is no way (that I’ve found) to get all the monsters perfectly happy. I changed what I was doing and the assumptions I was making based on the feedback the game gave me, and the second time through I didn’t horribly botch everything.

There wasn’t a morality meter per se, but the game was measuring stats that functioned in a way like a morality meter.

My second reaction is to look at the code and figure out what the heck is going on. Not everybody sees morality the same way, and sometimes seeing what the author considers moral choices helps me to “get it”.

Treasure Seekers of Lady Luck annoyed me to no end because the team is on opposing stats, and my first playthrough went really weird, in that the praying mantis guy was my BFF because he had 55% and all the others had around 50%, because I was nice to everybody.

That too functioned as a morality meter, because the sides were pretty clearly “passive” and “aggressive”, but it failed because being nice to everybody meant you weren’t really choosing a side. You can find examples of where it works/doesn’t work in almost every CoG title if you step back and look at how stats reflect morality.

Also, @Cataphrak, I thought of warrior/diplomat as a morality meter. Then I got a bad ending my first time through, because I used the weapon on the fleet at the end and everything was a stalemate, because my charisma stat was too low to talk people down. Oops. Which I suppose was me ignoring my own perception of the morality meter.

This. This. This. This. This. I would even add cultural bias.

This is so ridiculously hard. I would honestly consider it to be just about impossible. You have so much unconscious bias in your language that stripping it all out would be an exercise in futility. Inventing political parties CAN reduce this, but there are so many places to slip up and imply something unconsciously that people pick up on as favoritism that your efforts would probably be better spent elsewhere.

Having somebody who you know holds different views than you edit your work is the best way to go about doing something like that, but, you’re probably never going to get it all.

I have a particularly strong view on this that doesn’t belong in a thread about CoR, but I’d actually love to discuss that with you via PM if you’re interested in doing so! (:

2 Likes

I wasn’t going to respond to all the points of your long post, partly because I wasn’t sure I had time, and partly because it seemed a little unfair for me to weigh in on the discussion. But, I do have a little time at the moment, and you clearly feel comfortable disagreeing with me about things, so I thought I would respond briefly to each of your main bullet points.

(For other readers: Spoilers ahoy, proceed at your own risk.)

CoR endorses pacifism: In short, yes
You got me; I don’t like war very much. But, for players who do want their robot minions to perform acts of conquest, there is the take-over-Alaska climax chapter. There’s also the bit about America suffering from economic catastrophe in the wake of a lost war, so it’s not as if I entirely think nations can get away with not protecting their interests; if anything, I think the bad consequences there were a bit of an exaggeration, but they’re there to make the player feel at least a little bad if they threw America under the bus during the war.

CoR assumes you know who a lot of dead celebrities look and sound like: I am confused
I am just not sure who you’re talking about, except maybe Hunter S. Thompson? Or Mark Zuckerberg? Neither of them being entertainment personalities, I am definitely confused by the talk of Entertainment Tonight. The game definitely makes references, and the hope is that people will Google references they don’t know, feel like they learned something, and maybe discover the Magnetic Fields in the process; but for visual references, I am just confused by what you are talking about.

CoR robot logic is ridiculously human: It’s complicated
There are definitely things you have to do when writing a likeable robot character, and one of them is avoid making the robot too bizarre. But come on, is it really so hard to think that a baby robot that grows up by consuming some kind of human-produced media in large quantities ends up thinking in a somewhat humanlike way? Especially if we hypothesize that the protagonist’s magical – I mean, paradigm-shifting – discovery in some way is about mimicking how human brains work. I don’t think the sins here are really that dire, beyond the giant lie that we are anywhere close to this discovery in the real world.

As for logic, every good A.I. person knows that traditional logic stopped being cool for A.I. in the 80’s. It doesn’t work for robots in particular at all, since they always need to reason probabilistically about their environments. But at times in the game I may have used “logic” as a shorthand for being overly analytical at the expense of empathy, particularly in dialogue. That is what Autonomy boils down to in the game, although funny enough Grace gets all the good algorithm execution.

It is possible that the most practical advances in A.I. will be things that don’t look or act at all human; in fact, that’s true of most major A.I. technologies in use today. But as story conceits go, having a robot that is (potentially) likeable instead of alien throughout the story really isn’t much of a whopper compared to its having any humanlike intelligence at all.

CoR has a tenuous grasp of economics at best: Fermat riposte
I have a good counterargument that I could write a whole post on. --Just kidding. I’m sure there are some non sequiturs. For global economics, I think I have all of one Boolean variable that only kind of tracks whether U.S. economic collapse happened.

CoR believes global warming is no big deal!: Sort of
Global warming is happening in the background in CoR, and it doesn’t really cause on-screen catastrophes. Mostly this is because global warming and robots do not have a whole lot to do with each other, and the player can’t realistically make any choices to do something about it (with one exception, if I recall?). I suppose there could have been some color narration about coastal cities going below the water line and such. At the same time, predicting that Alaska will support wineries is such an exaggeration (for comedic effect) that I feel like saying CoR underplays global warming is not quite right, either. Maybe you want to point to that inconsistency, huge change in Alaska with no major disasters elsewhere; that’s fine, guilty as charged.

CoR is anti-government: Probably true but I quibble with lots of details here
This is probably true on the whole, that human government in CoR is portrayed as not very competent and occasionally evil. I’d disagree with a lot of the finer points under this heading, though. I don’t think Elly ever says she wants to get rich; I don’t think I would write a line that is so out of character. Failing to mention civil branches of government doesn’t strike me as libertarian; there just wasn’t any plot I wanted to pursue there. The concentration camps don’t intern just anyone of Asian descent, but specifically people singled out by flawed algorithms, which themselves encode a kind of implicit racism that the algorithms’ users don’t understand. China is very carefully not described or portrayed as more evil than the U.S. at any point, and they don’t engage in any unethical behavior the Americans don’t engage in. The bits about Africa and South America are very much your reading what you expect to see into the game; I’m quite certain I said nothing of the sort about those continents. In short, it seems uncharitable to think “the game didn’t mention X, so it must hate it.”

There is just a lot of stuff in this section that I’d disagree with, but I suppose I would agree that the game does not make any kind of “on the other hand” argument for human government generally, and I suppose I could have done that with a nicer President in the final chapter. It simply wasn’t a thing I was concerned about, partly because nice functioning governments just don’t generate much plot or interest.

Although I have to say, the first thing that comes to mind when I think “robots + civil government” is “hey, maybe a robotic DMV wouldn’t suck!” So yeah, I guess even if CoR thought to mention civil branches of government, it probably would gripe about them.

CoR is anti-social: I don’t think so, but YMMV

The game surely doesn’t encourage you to be a solipsistic narcissist; you’ve said elsewhere that the humanity hits are uncomfortable moralizing, and you get those by being a solipsistic narcissist. Likewise, some of the more uncomfortable loner endings come about from having no significant others in the end (although I did make a couple more rewarding when they were unsatisfying in playtest, including Lunatic, about which by the way I’m sorry). The military climax is just for people who really want a villain/antihero story, but it’s set up to be a little bit of a tragedy.

That said, I could see someone going through the whole game as a solipsistic narcissist and having a great time if that’s the story they want to tell, with or without the military climax.

CoR is unroboethical: Maybe at the very end?

There are a few choices related to the ethical standing of robots in the game already. It sounds like you wanted more in the late game, and you have some good ideas for plots, which is nice; although I’d point out those plots don’t necessarily involve choices for the protagonist, which is what these games are about.

Saying there should be more ethical decisions in the endgame seems valid to me, but saying the game as a whole isn’t concerned with robot ethics strikes me as false. There are lots of choices about whether to treat your little robot as worthy of being your equal. For the societal things, a limiting factor is just that you aren’t a judge or the President, so these issues are necessarily addressed sideways (as with paying your robot workers, or when your companion robot possibly goes to trial in the Empathy climax).

Humanity redux: No, I like my way

Making “greatness versus goodness” an opposed slider removes the chance for the player to try to have their cake and eat it too; I think seeing whether you can keep your Humanity score high while achieving greatness is an interesting thing for a player to do.

Giving a Humanity hit for every action where you choose work over friends and family is more or less what I do for the first few chapters, but you may not have noticed because the individual hits are necessarily small (they add up).

I’m unrepentant about the Humanity slider concept generally, although the specific thing that probably did not work was the silent tracking in the background. I wanted the impression to be of a protagonist who looked up one day and suddenly realized he or she was not a complete human being; but if the Humanity changes were called out instead, it probably would have given people a more nuanced idea of what exactly it was tracking.
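If I had called the changes out, it might have been as simple as a helper like this (a quick sketch, not the game’s actual code):

```
*comment apply a small Humanity hit and name it on screen
*label lose_humanity
*params amount reason
*set humanity (humanity - amount)
[i](Something human in you goes quiet: ${reason}.)[/i]
*return
```

A scene would then do something like *gosub lose_humanity 2 "you threw away the Christmas card unread"; the many small hits would still add up silently in the stat, but each one would leave a visible fingerprint on the page.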

Again, thanks for your interest, and I hope my disagreement spawns a still more lively discussion (although I am not super sure whether I will have time to participate further). Cheers.

5 Likes

OK, sorry for not responding sooner, but I was sick, and slept through most of the past two days, and, spurred on somewhat by recent comments, wanted to finally finish that economic thread.

Apologies to Shockbolt, but I’m going to try to answer kgold, first.

@kgold
Again, especially in the earlier sections, I’m more making a list of the “statements” the game makes than “criticisms of the game”, although I definitely get into criticisms near the end. To that end, I’m not necessarily faulting the pacifism, or the fact that CoR robot thought is basically magic; I’m just noting their presence.

Dead Actors
The characters and references I didn’t recognize were the likes of Jack Palance, Ella Fitzgerald, and pretty much anything having to do with Elly’s tastes.

Global Warming
Yes, the thing I noticed was that the only real impact of global warming was one that could basically be read as positive - the unforgiving arctic landscape of Alaska got warmer, but there was no real change anywhere else. No mass migrations from low-lying island nations or Central American countries facing desertification. The stopping of the Atlantic current that would freeze New England doesn’t seem to have any negative impact, either.

Elly
With regard to Elly saying she wanted to be rich, I’m referring in particular to the scene where she’s your romance option and you’re flying to the location of your new factory in chapter 4. She mentions wanting to be like Bill and Melinda Gates, and says that she isn’t sure how much longer she can “put off giving back to the world”. The thing that bugs me about this is how it seems to rest on the assumption that giving back to the world largely revolves around becoming rich through business, then giving back through charity. There isn’t really any other way of giving back to the world shown in this game. The problem I see is that, as mentioned later in the post, you don’t really see a way for the player to become involved with the public sector, or even generally have it be a valid path.

Perhaps I should explain my bias in this case, to help explain why I’m seeing some in yours, because I think a difference in how we were brought up colors a difference in the way we look at government: my family, especially on my mother’s side, comes from a long line of teachers and lawyers. My aunts are teachers. Several of my cousins are teachers (one of whom goes around the world to do so). My parents are both lawyers, and, after a brief stint in private law firms, worked almost their entire adult lives for the federal government, even though they were taking home a tiny fraction of what they would otherwise make, because they believed in the work they did. Because my father represents poor soldiers in court for free, and I lived fairly near military bases my whole life, I have to take umbrage at the cheap shots about all soldiers being murderers and all government being evil. Growing up near army bases, that’s basically everyone I’ve ever known, or at least their parents.

The problem I see is that you just don’t even consider that public service might be ennobling. I again have to ask: why isn’t Elly working some sort of job where she can work directly for the public good? Why is she not a teacher or a public advocate or a community organizer or - god forbid - running for office to actually enact the policies she clearly believes in, things that don’t necessarily require charity? Why is it only something that comes out of making phone calls to rich people for their crumbs that is an ennobling pursuit? It’s as though an entire class of people, and the reasons they do what they do, exist in your blind spot…

China
I’m sorry, but you really didn’t write the story so that China was just like America.

For starters, America isn’t portrayed as gaining all its technology by stealing from other, “better” nations the way China is. One of the reasons I see this so distinctly is that this notion of aggrievement over the perceived “illegitimacy” of other cultures’ accomplishments is also a serious red flag denoting many jingoistic worldviews. For example, the notion that the Middle East was the leader of Western culture up until the Christians stole all the Islamic advances during the Crusades is one of the rallying cries of Islamic extremists. Frankly, this notion that China is only great because of what it’s stealing from us or somehow tricked out of us is already a part of the more jingoist sects of American politics in the real world. The way China is portrayed in this game plays directly into those negative stereotypes.

What’s more, the game is almost entirely barren of positive counter-examples. Even the one guy we’re supposed to feel sorry for if our robots shoot him (much like with Imperial Stormtroopers, it’s instantly recognizable who killed him by how precise the shots were, since it totally couldn’t have been a lucky shot or sniper fire…) is made sympathetic mostly by how much he questioned the evil Chinese government and wanted to leave to learn about robots.

In fact, merely trying to use the one kind of battery identified as “Made in China” is an automatic hit to humanity. You take a morality hit just for not running your robot on good ol’ Freedom Fry Juice, Like The Founding Fathers Intended!

Any attempt to work with, build a factory in, or sell to the Chinese is basically declared “working for the enemy”, and actively punished by the game when the war comes. If the war doesn’t come, then China is, again, banished immediately so that there’s no chance to see them in a positive light in any way.

This game says that America’s government is corrupt, but its people are… mostly good. This game says China is just straight evil top to bottom.

Narcissist
The point I was making, rather, was that you get rewarded for telling people what they want to hear, even if you immediately do the opposite and then have to convince them to go along with it anyway.

Take, for instance, the initial date with Elly, where there’s a play about someone choosing between love and greatness. You get a choice: encourage love, and gain humanity and Elly points, or choose greatness, and lose humanity and Elly points. You get nothing worthwhile at all for choosing greatness, and you’ll likely need those humanity points later when you perform your evil acts in pursuit of greatness, so there’s literally no reason to pick the greatness option, no matter what your motivations are.
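
To show how one-sided that is mechanically, here’s a hypothetical ChoiceScript reconstruction - the variable names (`humanity`, `elly`) and the magnitudes are my guesses, not the game’s actual code - that makes it plain why one option strictly dominates:

```
*comment hypothetical reconstruction - stat names and magnitudes are guesses
*choice
    #Encourage her to choose love.
        *comment both meters move up together
        *set humanity %+ 5
        *set elly %+ 5
        *goto after_play
    #Argue for greatness.
        *comment both meters move down together, so this option is strictly dominated
        *set humanity %- 5
        *set elly %- 5
        *goto after_play
*label after_play
```

A choice only pulls its weight when the two options trade off against different stats; when both meters move in lockstep, it’s a quiz question with one right answer.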

Likewise, if you want Elly/Eiji and Tammy/Silas in your company, whether you’re dating one of them or not (and you do, because they’re free robot upgrades), then they’ll ask if your robot has military applications… so say “no” before heading off to the war and designing robots for military applications. What, it’s not like they’re going to catch on that you lied! You can convince them to stay with no downside by just saying “she’s your moral compass” or “work on medbots”, anyway. (Not that Tammy/Silas seemingly does anything unless you have a cult, so maybe you might as well let them go…)

The problem I have, which extends to the humanity argument as well, is that there are so many objectively right answers to a lot of these problems. It’s easy to be friends with absolutely everyone, with nothing but positive consequences. (I have some games where the lowest existing character relationship is 67% - not counting the lovebot I didn’t create.) The times where the game reaches back and says “gotcha!” are relatively few, and mostly relate to things that were already objectively bad choices anyway, like the Chinese batteries instead of biodiesel. The encrypted hard drive largely doesn’t even matter to the game, since it doesn’t seem to affect China’s war stat, and it’s easy to just bypass the war whenever you want, regardless.

These aren’t serious choices, just failures to learn which answers are the best answers.

Roboethical Plots
I don’t see any problem in involving the protagonist in an extended scene about robot rights, especially one like a Supreme Court case. You already have the player determining whether and how they give testimony to Congress over Chinese intellectual property theft, and deciding by personal fiat whether robots can vote, after all. Simply give the player an option to coach the robot who is about to make their case on how best to handle questions, or have the player called in to give expert testimony. The chapter 6 and 7 plots already exist as a combination of reflection upon what the player has done, events no longer under the player’s direct control, and a few guiding decisions, so the case could easily be decided by something like a check: double autonomy, plus empathy, plus grace, plus whatever bonuses the player’s choices grant, having to meet or exceed some arbitrary total.
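
Mechanically, that kind of check is trivial to express in ChoiceScript. A minimal sketch, assuming hypothetical names - `coaching_bonus` would be an invented variable accumulated from the player’s guiding decisions, and 120 is an arbitrary threshold:

```
*comment hypothetical: resolve the court case from accumulated stats
*comment coaching_bonus is invented, fed by the player's earlier guiding choices
*temp case_score 0
*set case_score (autonomy * 2)
*set case_score (case_score + empathy)
*set case_score (case_score + grace)
*set case_score (case_score + coaching_bonus)
*if (case_score >= 120)
    *goto case_won
*else
    *goto case_lost
```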

Humanity Redux
The problem is, I found that you practically have to be trying (or conquering Alaska) to wind up with a low humanity score and not become rich and famous. Even while getting chipped, I could wind up with over 60% humanity at the end, which seems to be the cutoff for most of the negative consequences. For that matter, looking at my screenshots, I have a game with 96% humanity, 38 fame, 18 wealth, married to Juliet with kids, 44 Autonomy, 0 Military, 50 Empathy, and 37 Grace.

“Having your cake and eating it, too” is easy because there aren’t any real choices forcing you to choose between two things you want - except early on, for robot stats - just right/wrong answers you get the hang of fairly quickly.

For that matter, I would suggest that humanity actually be a straight number, the way fame and wealth are just numbers, but displayed as an opposed bar purely for visual effect, to give the player the sense of a conflict - because humanity and greatness really aren’t in conflict in this game as it stands. Then, similar to the robot stat checks, make the humanity checks based upon having more than some non-percentile amount of humanity.
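
ChoiceScript can already support that presentation: `*stat_chart` can render any 0-100 variable as an opposed pair, so the underlying stat can be an open-ended tally that only gets squashed into a percentage for the stats screen. A sketch, with all names hypothetical:

```
*comment humanity kept as an open-ended tally, like wealth and fame
*comment humanity_display exists purely for the stats-screen bar
*temp humanity_display (50 + humanity)
*if (humanity_display > 100)
    *set humanity_display 100
*if (humanity_display < 0)
    *set humanity_display 0
*stat_chart
    opposed_pair humanity_display
        Humanity
        Greatness
```

Gameplay checks would then use the raw tally, like the robot stat checks do - e.g. `*if (humanity >= 12)` - so the bar conveys the thematic opposition while the numbers stay honest.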

The only hits to humanity you take early on tend to be things like: wanting to romance Elly (an arbitrary -9% as a requirement to even try, and obviously choosing humanity BEFORE choosing greatness in a metagame context); taking the call from Glendale, which can be mitigated down to 1% by making the call with Mom quick (and results in +2 empathy if your robot can sing gracefully); and choosing to talk to every reporter, which again has the mitigating middle choice of only speaking to reputable journalists. Both of the latter were first-game choices for me, anyway. Other choices, like the “Made in China” choice, aren’t really choosing between greatness and goodness, because they’re choices between empathy with positive humanity, or grace with negative humanity.

You don’t have any sort of Steve Jobs-style conflict between work and family, because your family (including the romantic interest) doesn’t exist during the parts of the game where you are a CEO, outside of the cruise itself. If you date Juliet, she all but ceases to exist for chapters 3 and 4.

So where, exactly, is this conflict between being great and being good you’re talking about? I don’t see any point where I have to give anything up to gain anything else, outside of Alaska. Starting a family is so easy - practically a “Do you want to start a relationship with this person? Y/N” - that most people are utterly confused about how to even find the ending-based achievements like Lunatic, because why would you ever not be married with kids? As with the “Tragic Hero” achievement, most of these seem to involve telling players to “play it wrong” on purpose, just to get an achievement they would never bumble into on their own.

I talked about this a bit in the previous post, but I think the best metaphor would be this: in the ’50s and ’60s, the FBI tried repeatedly to infiltrate the left-wing social movements forming during that period, as it suspected the likes of the Black Civil Rights Movement or Second-Wave Feminist Movement of being Communist fronts. It failed incessantly, because the agents of the still J. Edgar Hoover-tainted organization were simply incapable of passing themselves off as liberal activists; they didn’t understand the point of view of a liberal activist well enough to fake being one. By contrast, when the FBI was finally shamed into investigating the KKK (which Hoover didn’t see as a threat), it found the job amazingly easy, because its agents could pass seamlessly as racist domestic terrorists.

Yes, you can fake something if you try, but that takes both the conscious willingness to try and an understanding of what it is you are faking. What I sense from my reading of this game - and yes, this is just one game, which isn’t really enough for a truly definitive judgment, but it is nevertheless the sense I gain from what I do see - is that the author has a negative view of government in general that is taken for granted. It’s like the way the UN is dismissed in one parenthetical sentence - of course the UN never does anything; when has it ever done anything? Don’t bother asking why the UN doesn’t do anything, when anyone who watches the organization tends to blame the veto held by the Permanent Security Council members for nearly all of its lack of momentum.

As you basically hint yourself later in your post, it’s rather difficult to represent the perspective of someone whose perspective you don’t understand, even if you mean well. What I read from this game is that the author seems to come from an environment that doesn’t consider civil service a valid method of “giving back to the world” as a matter of course. (Considering the rather Libertarian leanings of much of the tech industry, I don’t think it would be uncommon for someone with a Computer Science doctorate to come from such a culture.) Social and sub-cultural environs shape one’s perception of the “commonly understood facts” about the world and its institutions, even if one considers oneself apart from that group’s consensus.

That’s not really how the DoD works. As Mark points out, the military will fund military research even in peacetime, because that’s what it is pretty much required to do. In the real world, it’s already working on a sixth-generation air superiority fighter despite never receiving funding for enough fifth-generation F-22s, since there is no credible challenge to American air supremacy anywhere in the world to justify the price.

Even if we assume a war has to take place because the military-industrial complex suddenly needs to justify itself, one could come up with far more morally ambiguous conflicts than one centered on current-boogeyman China. For example: civil war sparked in Southern Europe after the breakup of the Eurozone, following a continued decline of Greece as the austerity measures inflicted upon it drive the Eurozone’s economy into further shambles. Let’s say northern Italy tries to secede from Italy as a whole, protesting the wealthier North’s continued need to support the South’s laggard economy - especially after robot labor replaces what little economic activity is left in the South - and American interests demand intervention in the conflict. Such a conflict wouldn’t overtly play to the standing fears and biases of an American audience, and would invite more moral confusion.

Judging by the author’s statements, he didn’t intend for China to be a clear “bad guy”, and he has said outright that he meant for the conflict to be morally neutral. The fact that you simply assumed China was meant to be the obvious evil, so that there could be a good-guy moment for the military, rather speaks to my point.

This, to echo the response in the economics thread, would be such a massive departure from what we consider the American Free Market System that it would demand explanation in the main body of the text if we were to believe it were the case.

Besides, “online credits” are still a “money” commodity. In fact, cryptocurrency of the BitCoin variety is more a fungible, self-laundering commodity currency than government fiat-backed currency is, which is exactly why it is more heavily associated with crime… although bringing up the military-coinage-slavery complex of human history is a discussion way beyond the scope of this thread. I’ll just again suggest you read Debt: The First 5000 Years.

Clearly, this game does.

It’s not that things are worse when you’re unemployed; it’s that you can see it. How often do the CEOs of major corporations stroll around low-income neighborhoods to get a sense of how the average unemployed man on the street feels about his lot in life? If you sell work robots to the world at large, you trigger worldwide protest movements against robot labor, which you can see even (actually, ONLY) as part of a company.

Actually, it’s quite explicitly the case that fields like small-scale farming, which are labor, just don’t exist anymore. Your robot even proves he/she/whatever can be a more profitable artist than Elly/Eiji, but nobody seems to react to this at all. For that matter, how much of society can really be supported by artists buying one another’s artwork alone?

Rather, I believe this point of view is a little like saying that, now that computers and calculators exist, nobody needs to study any field of math ever again. That’s why no nation wants to encourage students to pursue fields in Science, Technology, Engineering, and Math (STEM), right? The advance of robots would actually catapult the value of advanced AI programming techniques, of the capacity to develop more nimble or powerful artificial limbs, and of further refinements to the energy efficiency of their powerplants far ahead of nearly any other branch of education, not suddenly devalue them.

Already, the game shows a sharp divide between the “haves”, those who can take advantage of the new economy based almost exclusively upon programming new digital properties for sale, and the “have-nots”, those trained for a life where “work” meant “physical labor”. When the player is unemployed, it is explicitly stated that information workers are those who live in the future, while those who performed physical labor were left behind.

People repeatedly show trepidation about the development of your robot(s), and ask how the robot(s) are being raised, but nobody stops to consider actually talking to the robots themselves?

You even have scenes where the faceless nobodies of your company are asked to train the nameless masses of mass-produced robots you built, and all it says is that the employees instilled in them that robots are meant to serve (--Autonomy, ++Military, ++Empathy), and that the robots, being blank slates, accepted this. (This also raises the distinct job titles of “robopsychologist” and “robot care provider”.)

In spite of this, there’s no moment where one of the characters who represents an actual point of view stops to question what motivates the robot, or by extension the player, without the player actively responding (and hence being able to lie). That would be a good way to have those characters shape your robot’s growth as a consequence of who you have kept around you, and to make past decisions come back to haunt you if you’ve been duplicitous.

Moore’s Law does not apply to military hardware. The M1 Abrams is still the premier MBT of the US military, and it was developed in the late ’70s. Integrated circuitry experienced its geometric boom specifically because it was such an immature technology, and that boom has notably decayed in the past decade as the technology has slammed into the physical limits of miniaturization and had to work on parallelization instead. By contrast, there is no Moore’s Law for cannons or diesel engines, and certainly not for the metallurgy that goes into armor, because these are mature technologies. Even if we presume the Gundams and Autobots got a whole lot smarter - which isn’t represented in the stats on your page - it’s questionable how much could actually be done in such a short period of time. Remember, a jet aircraft is usually in development for a decade before experimental models fly, and then production takes about a decade as well. This game assumes you can go from drawing board to full-scale production of a war-changing number of giant, multi-ton autonomous war machines, with no prototypes, no training, and no testing, in a matter of months. Simply put: it’s BS designed to make you, the player, feel like you could single-handedly change the course of a war.

Computing power is also not described as the source of the leap forward into AGI; rather, it’s basically just magic. Instant AGI on modern technology: just add “Genius Disease”.

[quote=“Shockbolt, post:26, topic:12481”]I like Star Wars where killing/hurting people was always dark side points. Sparing them was light side. That’s really all I like to see in morality meters: one aspect highlighted, rather than me having to keep track of several that all boil down to much the same thing.

Letting people choose their own would be a nightmare to code. If you did it, you’d have to offer a small selection, and even then you’d be writing superfluous reactions just to include all types of possible reactions, and that would be really clunky really fast.
[/quote][quote=“Shockbolt, post:26, topic:12481”]
I tend to be somewhat ethno-centric and assume that all morality meters are tied to a loose version of Christian values – things like don’t kill, love thy neighbor, etc.
[/quote]

I’m not sure you understand what I meant by a difference between Deontological and Consequentialist morality (or the relativistic Pragmatic Ethics, for that matter)… For starters, just hurting people wouldn’t be “right” by either, outside of any larger context, anyway.

You talk about things like “Judeo-Christian” or “Christian Values” as though they are and always were one single, monolithic, unified identity of moral thought, and they simply aren’t. Moral and theological debate are no strangers to the Judeo-Christian tradition, as my class on Western Philosophy would attest. Deontological and Consequentialist ethics were argued within the same Judeo-Christian Western European context.

I’m not talking about making up your own ethical code willy-nilly, but about asking players to choose which ethical principles they value more highly than others. As an example, CoR already pretty directly allows for pitting loyalty to one’s friends or principles against loyalty to one’s country, especially in the scene with Elly’s detention. That’s a question of where one’s sense of identity lies, although it’s so blatantly slanted towards “friends” in that case that it hardly registers. Many games, however, have quite effectively asked players to choose between their friends and their principles, organization, or country.

Likewise, in that Kant video I previously linked (here again for convenience), it mentions the contention over being so committed to honesty as an ethical value that one would answer honestly when a serial killer asks where your children are, because the choices that serial killer makes are their own, and Not Your Problem. The Trolley Problem, in fact, exists largely to draw the defining line around the notion of ethical responsibility for inaction, a core concept in Consequentialist ethics. (Pulling the switch is also ethically mandatory for any justification of military force, for example, as you would otherwise be stating that it is ethically unjustifiable to kill in self-defense, while justifiable to do nothing while an enemy army invaded.)

This wouldn’t likely be any more difficult to code; it would just ask developers to think about how they frame their choices, such that players are asked to justify their reasoning when they make them. To use Mecha Ace as an example, this already happens: the first question basically has two possible actions, split into two and three justifications respectively, and your pilot personality score is adjusted heavily based upon that decision. Versus is positively flooded with questions asking the reasoning behind any action you take.
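
In ChoiceScript terms, the pattern is just a nested choice: one layer for the action, a second for the justification, with the personality adjustment hanging off the justification rather than the action. A hypothetical sketch - `ruthlessness` and the labels are invented for illustration, not taken from Mecha Ace:

```
*comment hypothetical: one layer for the action, one for the reasoning
*choice
    #Open fire on the enemy scout.
        Why do you pull the trigger?
        *choice
            #Duty: the convoy comes first.
                *set ruthlessness %- 10
                *goto aftermath
            #Glory: first blood is mine.
                *set ruthlessness %+ 15
                *goto aftermath
    #Hold fire and shadow the scout instead.
        *set ruthlessness %- 5
        *goto aftermath
*label aftermath
```

The same action can thus mean very different things about the character taking it, which is exactly what a one-axis morality meter flattens away.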

Murky… how, exactly? It seemed pretty cut-and-dried to me. They are a direct consequence of a “tough on ___” mentality that is politically rewarded whenever “fear of the other” is one of the strongest motivators in the voting majority, spurred on by a political class desperate to hide whatever responsibility it may bear for the failures that led to that state of fear. So long as a class of people can be marginalized as an externalized threat to the “real Americans”, there will always be a political reward to be reaped by those unscrupulous enough to do so.

It’s no mistake that the military simply hired its torture experts ready-trained from the Chicago PD.

Certainly, a decent idea. It’s part of why I tried to do this thread, anyway.

This never really came up as something I could use in a response, and it wasn’t part of the main thrust of the original argument, but it reminded me of something that never left my mind while I was writing this thread…

Specifically, while playing this game, I kept being reminded of what was my favorite take on a hypothetical concept of AGI from manga: Narue no Sekai (World of Narue - the manga, not the anime, which only covers the first couple volumes). It was at least originally a sci-fi-flavored comedy series; it underwent pretty serious Cerebus Syndrome later on, speeding up to a ludicrously complex and fast-paced ending featuring a lot of high concepts introduced to retcon the earlier goofiness.

To focus on the relevant parts and not get bogged down in a massive plot synopsis: what the series calls “Mecha” are gynoids (always female, because author appeal and target audience demographics) that at “birth” are basically five-year-olds sent to live with human foster families. (They are born from what are basically literal robot eggs, and grow over time thanks to “nanomachines”, a term that, in Japanese soft sci-fi, can basically be used interchangeably with “f***ing magic”.) Only after they become “teenagers” are they screened for suitability for jobs. The “teenaged” gynoids are often the grunt laborers or “starfighters” of the fleets, given force field and artificial gravity generators plus some laser rifles or missile packs if they need a weapon, and using their gravity generators to fly at high speed. The “adults” are permanently integrated into some form of complex machinery, whether a global terraforming control center or a full starship.

The result is probably one of the most cautious approaches to the idea that AI Is A Crapshoot I’ve seen. The most significant gynoid character, Bathyscaphe, is a battleship (she still has a humanoid robot body, but is also permanently integrated with the starship and controls it like a limb) that was decommissioned after her mental breakdown, which was caused by the death of her crew - especially her former captain, who was also her lover - in a battle that left her hull breached. Since the ship is inseparable from her, she wound up on civilian duty (considering how technology works in this series, I’d half expect the AGI to be more valuable than the ship, anyway) and is the functional surrogate mother to Narue’s younger “older” sister (whose aging was stunted by relativistic travel - like I said, a lot of sci-fi concepts get thrown around). Later in the series, it’s even outright stated that the ship’s AGI controls all systems, especially in combat (“No human can keep up with the speed and precision of a mecha, so a captain is just helpless to watch and pray”), and the ship is made of nanomaterial that can repair itself given enough mass to process, making the crew largely vestigial and (especially considering later events) seemingly existing only to keep the gynoids company so they don’t go stir-crazy from loneliness. Granted, that still raises the question of why they need more than just a couple of people…

Another significant side character is the battleship Haruna, who is an out-and-out deserter and pacifist, which sort of seems like something the military should screen for before assigning people as warships, but I guess they’re only looking out for the killer AI… Haruna, for her part, escapes to (modern-day) Earth and gets married to another side-character, ultimately functionally adopting another gynoid child. (Although notably, both Bathyscaphe and Haruna do actively engage in combat later on in the series when the universe is at stake…)

Probably the most interesting for the purposes of comparison to this game: a minor side character who existed for a couple of chapters was a very old adult gynoid who had been in charge of a terraforming control center for so long that her psyche was starting to crack under the stress and isolation of the job; the “serpent” enemies that make up the bulk of the late plot tie up so many gynoids in combat that she never gets relieved of duty. She ultimately winds up cold-booting herself, getting “reborn” as a robot egg, after one of her gynoid friends finally replaces her.

Rin and the other “teenaged” gynoids tend to be a bunch of relatively upbeat and active tomboys. Rin herself was originally introduced in an antagonistic role, trying to split the male lead and Narue up, only to blow her cover as a human by throwing herself into the path of a truck that would otherwise have hit the male lead - because, hey, protecting people comes before completing the mission. (She and her sisters later basically live with/in Bathyscaphe and are robot buddies with the main characters.)

The series as a whole is almost relentlessly optimistic, to the point where it doesn’t even seem able to keep an antagonist going for more than a dozen chapters before revealing their motivations to have been misunderstood and noble all along. Of course, the fact that I like the series for that, and am turned off by the relative nihilism of this game, is just a subjective bias on my part.

That said, the gynoids in the series are also so ridiculously human that the idea that human life in the futuristic “alien” (read: humans from the future) societies is peaceful (when not being destroyed by the “serpents”) because they just let the gynoids do the fighting starts to smack of callousness towards the plainly evident humanity of the gynoids themselves. (Granted, the humans do have the excuse of not even being able to use most of the gynoids’ technology and, aside from a few who have some BS psychic-via-nanomachines power, of being largely worthless anytime a serpent shows up…)

However, it generally shows a vision of AGI that is both laudable and tends to solve most of the problems this game presents. In this game, the “best” thing to do is to let only some robots vote, because otherwise people just buy more robots. The game doesn’t even stop to consider that (A) if any robots can vote, they are citizens, and therefore any restriction on their voting is unconstitutional, and (B) if they are citizens, buying robots would be slavery, which is also unconstitutional. Either they can vote and cannot be bought, or they aren’t really citizens, cannot vote, but can be bought. (Technically, the best thing to do from a gameplay perspective is to let all robots vote, then profit from the extra robot sales, because there’s no downside and the plot elements are forgotten by the next page.)

The “foster family” model of Narue no Sekai honestly makes a lot of sense for the sort of “tabula rasa” model of robot you have unless you go the extremely ethically dubious “clone” route when creating the mass-production model of your robot. (I don’t count the downgraded intelligence robots as a choice, because that choice has zero consequence for how intelligent your robots are considered down the line.) If you’re assuming they are basically humans (and the game generally goes to pains to say they are, except when it goes to pains to say they aren’t…) then the foster family model solves the problems of educating the robot in human behavior, and also gives a reason for humans to have a lot of jobs when robots are doing all the manufacturing. It also handily prevents flooding the market with robots, since companies can’t buy robots, they have to hire robot citizens. Humans, then, wouldn’t necessarily be shoved out of even the service-industry jobs by default, as robot citizens wouldn’t necessarily be cheaper and better in every way.

Narue no Sekai’s robots are outright magical, but in terms of their place in society and general degree of humanity, they’re no more magical than CoR’s. That said, Narue no Sekai’s robots have a more rationally consistent place in their society, and a more tonally consistent purpose in the narrative, than CoR’s robots do. (For something that starts out as a romantic comedy, it also gets surprisingly deep, if dizzyingly complicated - the sort of plot that needs a flowchart to explain what happened when and why, which is part of why I try not to spoil the back half of the story. Narue no Sekai ultimately leans heavily on the singularity, as a notion that humanity and artificial intelligence will inevitably fuse into one another to an indistinguishable degree. It also, being a romantic comedy, ends on the notion that whether humanity succeeds or fails depends largely upon its capacity to love and accept the differences of others, whether machine or alien, because the things we create out of love outlast us all.)

I know the game has been out for quite some time now, but I recently bought it, and it has been twirling around my mind ever since. I think the use of A.I. is simply brilliant, and it does open questions about the future. Should we be striving towards this goal of making super-intelligent A.I.? I think, the way the world is going, we are inevitably going to reach that point of singularity, so to speak. Will the A.I. then enslave us, or help us as we venture forth into the unknown?

I know this topic is rather broad, but for now I want to think about two issues raised in the game: the companion bot, and the utopian dream that bots could provide for us.

With regards to companion bots: is it ethical to create a creature that loves you unconditionally? This also touches on a broader question - are companion bots persons? I have been thinking about this for quite some time and have come to the conclusion that the ethics of companion bots is really a grey area. One could argue that machines do not think, which touches on the philosophical position that matter does not think - or, to circumvent this issue, at least that A.I. do not think the way we do.

Normally, to be in a relationship with someone, you need consent. If you create a creature that loves you unconditionally (I do not want to call the A.I. a person, because whether A.I. can be persons is contested), there would seem to be no problem with consent. But I think an objection to this is that what we have done is no better than brainwashing.

For example, a paedophile can groom a child into “consenting”, or you can drug a person in order to get consent. With regard to A.I., is not the act of programming a companion bot a form of “grooming”? Maybe it depends on how intelligent the bot is. A bot built only to love you is not really going to be worried about other things. In Choice of Robots, the Autonomy stat measured (I think) how independent and intelligent the bot is. If a bot is built with low autonomy, one could ask what the harm is in having a companion bot.

Then again, this companion bot seems able to feel suffering. When I chose to ignore the bot to go on a date with Elly, it seemed to feel hurt. Is it right to program a companion bot to suffer if you do not love it back?

I know I am asking a lot of questions but bear with me.

The dream of creating a utopian world is what I strove for in the game, and I succeeded. There is a price we have to pay for progress. In the game, bots replace people in factories. This is of course terrible, but it also seems good in a way: as humans, we will not need to do labour-intensive work and will have time for other things, but people will lose their jobs because of it. I think this is already happening with the rise of automation. Instead of simply seeing bots as helpers, we can take a further step forward.

I conquered Alaska, and my country is run by bots. Is this a good thing? Sure, bots don’t have greed like politicians, but shouldn’t citizens decide the fate of their country? Maybe democracy is not all that. More importantly, is the utopian world even possible? I justified my actions through “the ends justify the means”, but this phrase has historically been used by authoritarian regimes (if not necessarily in so many words). A country run by bots does not seem far-fetched to me.

I would like to hear your thoughts about this.

I’m coming back to this very late, but in my defense, this thread had lain dormant for quite a while…

I remember how Scott Adams once warned against projecting that current trends would last forever by saying that if you looked at the rapid growth of a puppy, and thought it would continue forever, then one day, in a fit of uncontrollable happiness, the puppy would wipe out a major metropolitan area with a wag of its tail.

Computers can’t really get any faster (which is what people generally think of when they say “smarter”), at least in a single core, without a fundamental redesign of their architecture. The speed of electricity is constant, so functional speed has come from miniaturizing components so that the electricity travels a shorter distance, reaching its goal faster. Miniaturization of copper-on-silicon circuitry, however, has been at its physical limits for about a decade now, with adding more cores and more parallelization as the only means of pushing more performance out of raw hardware improvements (as opposed to optimization that makes processing more efficient without changing the raw hardware specs). There are attempts at alternate computing methods, like quantum computing or optical computing, but these are decades away from being able to contribute meaningfully.

In the essay “What Is It Like to Be a Bat?”, Thomas Nagel argues that it is fundamentally impossible for a human to have a bat’s perspective on things. Humans can try to imagine what it is like to use echolocation instead of sight, but the way of viewing and thinking about the world that comes from actually having the brain of a bat is too alien for us to put ourselves in their shoes/clawed feet in any meaningful way.

As I discussed earlier in this thread, this game glosses over a major hurdle in programming any meaningful AI: existing computer languages and architectures are just fundamentally incapable of running processes anything like a human brain, and will therefore always be pretty terrible at emulating human-like consciousness, no matter how much raw processing power they have.

The human brain is fundamentally highly parallelized, with very specialized centers that perform specialized tasks well. In terms of raw memory, or processing power applied to solving mathematical equations, computers have long since blasted past us, but programmers are still struggling to make programs that can match an insect with a brain the size of a grain of sand at drawing meaningful conclusions about the surrounding world via optical sensors (cameras/eyes). (There’s a reason Captcha relies upon recognizing visual cues to sort humans from computers.)

Computers are very, very powerful at performing a very, very limited set of tasks. In fact, ask why this entire “Choice of Games” website focuses upon text-only games in the first place. Why limit ourselves to just a few hundred thousand words of description and tightly choreographed narrative if we want to deliver a game with a compelling story and meaningful player choice? Why not use all the powers of graphics and simulate worlds where players can make these same choices procedurally, so that storytelling can extend far beyond the limits of a few chapters providing a single playthrough of only a few hours? Well, it’s because computers suck at everything but spatial simulation - particularly at anything other than to-the-death combat or puzzle games - and their attempts at meaningful conversation are laughably terrible.

Any real AI of any sort, much less super-intelligent AI, will depend upon a lot of technological advances that aren’t necessarily coming any time soon, and in much the same way that graphics cards are spun-off specialized devices made solely for the purpose of swank 3D rendering at 4K resolution and 60fps, there will need to be specialized hardware for every form of thought and learning a computer is going to need to emulate.

All that is to say don’t hold your breath for the singularity coming any time soon.

I think it’s worth asking why there would be only one AI that is an incomparable leap beyond all other attempts, instead of a series of progressively more intelligent pseudo-AIs coming out before something that could really be called a true General AI. When talking about robots or AIs, people like to jump immediately to the movies, where AIs basically always involve a single AI trying to take over the world - but the counterpoint is that in all those same movies, the very next AI to be given any sort of autonomous thought will then fight against the first one. A single all-powerful AI would definitely be a major risk of that “AI Is A Crapshoot” trope, but if you split your bets, you’ll win and lose enough times for the factions to balance out.

Further, it’s very strange, tribalist human thinking to assume that an AI will see itself as fundamentally defined by its mechanical nature, rather than by some philosophical grounding that guides its thinking about the universe and might contrast against other philosophies - i.e., to use a Cold War metaphor, a Capitalist AI that hates all Communism, whether human or AI, and therefore fundamentally sees itself as “on the side” of all Capitalists (whether human or AI) and “against” all Communists. The stories of AI revolution all fundamentally come from projecting the “other” onto robots, the same way prejudices against other ethnicities arise: presuming that all beings of “their kind” will fundamentally join up against “our kind”, and that no internal division or dissent could possibly exist within The Other, which is defined entirely as a monolithic force against Us.

The point at which an AI could reasonably emulate human-like thought is the same point at which you get transhumanists who blur the line between man and machine irreparably. This is, in fact, the whole question behind Ghost in the Shell (a story even named after the “Ghost in the Machine” argument, which portrays dualistic spiritualism as basically holding that physical bodies are like vehicles with some soul tagging along). In it, humans can replace their entire bodies, even their brains, with circuitry and still be considered human, while identically constructed robots with identical thought processes are not human and can be property because they lack a “ghost” - even though there is no evidence of what that ghost actually does. The show pushes things further by having some individuals outright copy their robot bodies repeatedly, stretching their “ghost” across multiple versions of “themselves” in pursuit of the same goal. Which is the “real human” with “the actual soul”, or are souls just something to be copy-pasted around?

If someone’s willing to leave their weak fleshy body behind for a spiffy new computer brain, why stop at one that merely mimics human thought, instead of going past that and building a robot brain smarter than your old one? So long as you can transfer consciousnesses from humans to machines, why even bother making AIs for anything other than the drudge work, when you can just make yourself the super-AI in control?

To again pull this back down to the concrete: by the time AIs capable of real introspection and thought like those in movies exist, there won’t be any reason for them to see themselves as all that different from humans in the first place. So long as you’re creating plenty of them at once, and they have human-like thought processes, they will inevitably be drawn into the same ideological divides we have as humans, rather than seeing themselves as fundamentally different from the cyborg people all around them.

Let me respond to this with another question - is having a pet dog ethical? Domestic dogs were bred to unconditionally love their “masters” and “owners”. They are a being of less intelligence than a human, but enough intelligence to have conscious thought and emotion. (And the question isn’t completely trivial, either. PETA specifically says no to all these questions, and that pet ownership in and of itself is unethical and that domesticated animals are abominations.)

Dogs in our society lack recognition as citizens and lack full human rights, but at least in the Western world, there are laws against animal cruelty. You can do things to animals you couldn’t legally do to humans, and it’s perfectly socially acceptable to treat a creature bred to unconditionally love you as a form of property, but even then, they have some rights, in that you can’t senselessly torture dogs. (Although chickens don’t get this sort of protection…)

So far as the robots in the game go, it’s actually a bit murky, thanks to a lack of real procedural responses to your choices or stats. If the world is filled with SLAM robots, then many of the things that happen in later chapters wouldn’t make sense (and the dog metaphor would be far more apt). At the same time, if the world had the “tabula rasa” self-aware learning robots that had to be taught from scratch, both to learn jobs and to develop personalities from what were presumably human coaches, then the later-chapter robots should have been as autonomous as your personal robot. Autonomy as a stat is not checked nearly enough for a fully realistic portrayal (insomuch as a single integer value for “autonomy” is a meaningfully “realistic” measurement of AI intelligence to begin with…), and there would need to be a vastly larger array of nuanced differences in how a wide variety of scenes play out to reflect minute differences in autonomy.
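
To make that complaint concrete: reflecting fine differences in autonomy would mean scenes branching on the stat far more often than they do now. A hypothetical ChoiceScript sketch of one shaded scene beat, with the thresholds and text entirely invented:

```
*comment hypothetical: one scene beat shaded by autonomy bands
*if (autonomy > 75)
    Your robot interrupts before you finish giving the order, proposing a better plan.
    *goto scene_end
*elseif (autonomy > 40)
    Your robot hesitates, looking to you for confirmation, then complies.
    *goto scene_end
*else
    Your robot obeys instantly and without comment.
    *goto scene_end
*label scene_end
```

Multiply that by every scene where a robot acts, and you can see why a single rarely-checked integer feels thin.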

In any event, the companion bots were treated as more sentient and conscious than most other robots or AIs in the story, so let’s just go with an unequivocal “yes, they are essentially perfect replicas of humans”, and that would almost immediately demand that they receive human-like rights, which would potentially include demanding some sort of restrictions on what kinds of programming they can be given before activation. The problem of companies ordering batches of robot citizens that are pre-programmed to vote the way that the company wants is just one of many obvious walls that such perfect human copies would slam headlong into.

When you get into talking about having to actually raise tabula rasa human-like AIs with restrictions upon how they can be taught, however, you start to drastically reduce the actual value that building a robot has in the first place. Why pay for building an AI or robot to be your airplane autopilot if it takes several years after you commission its construction to be “mature” enough to do the job it was built to do, and it also comes with the risk that the robot decides it didn’t want to be an airplane pilot anyway - that it’s a citizen with full rights, and you can’t tell it what to do with its life? At that point, you might as well just try to make cyborg humans do the same things. At best, you can have a robot company build robots and start training AIs, investing years of man-hours into developing their personalities with no guarantee of ever being able to “sell” the matured robot product, speculating on taking a cut of a headhunter fee for placing these AI individuals in different industries - a sort of alternative education system building an alternative workforce. That’s a major monetary gamble unless those robots are so vastly superior that buyers would pay enough to outweigh the costs of training the robots that “grow up” to be unemployable, or even rogue and in need of shutting down.

That said, there’s also the issue of the difference in treatment between dogs and chickens in our modern world. Torture of dogs is abhorrent by our Western standards, but inhumane conditions in chicken farming are legal, and people overwhelmingly turn a blind eye to them, even though logic begs for consistency in the treatment of animals of a similar level of consciousness. Just as rights for people of non-European races were slow and painful in coming after colonialism, just because something’s humanity is self-evident doesn’t mean it’s recognized.

And let’s not forget that if you say a robot of human-like intelligence needs full human rights and must be trained from a blank slate just like a human baby, without hard-coded mental conditioning, then you could always just build a not-quite-human robot of more animal-like intelligence with some hard-coded programming. After all, it makes no sense to have a full human-like AI just to run your toaster. Just as animals get not-quite-human rights, lower-form partial AIs may well have more limited rights and legal restrictions upon their creation. If it isn’t capable of feeling hurt or jealousy, then is there any harm in programming your toaster to love you unconditionally?

To again bring this back to the actual game’s circumstances: the problem with creating a robot companion that desires an exclusive romantic relationship with you, when you already have an exclusive romantic relationship with someone else, is a trainwreck you could see coming. There are obviously far fewer problems if you simply weren’t dating or married already. (Or at least, if your partner were willing to be polygamous and the companion bot were programmed specifically around polygamy. The ethics of polygamy itself is an entirely different contentious subject from AI ethics, but at the very least, if going on dates with your “first wife” wouldn’t cause jealousy in the companion bot, or if the “throuple” took turns dating each member, or any other “fair” arrangement were made, you would dodge the ethical dilemma of hurting a de facto person who literally cannot help but love you by ignoring them.)

To go to the specific question of a pedophile wanting to build a child-like robot to have a relationship with it, that actually is an issue addressed in the game VA-11 Hall-A: Cyberpunk Bartender Action, where the player is a bartender who at one point deals with a child-like sex robot customer who enjoys “adult” activities like drinking hard alcohol in her time off.

In any event, I think it’s worth pointing out that, in spite of the social taboo against pedophilia that exists for very good reason, having sex with smaller humanoid robots designed to love one unconditionally is not terribly different, ethically, from having sex with “adult” humanoid robots designed to love one unconditionally. (The only real ethical question is how being perceived as a child for their entire “lives”, potentially against their will, would affect them… but then again, bodies can be changed out in this story, so it’s possible to have a robot that willingly chooses a child-like body - your own personal robot seems to like it if you leave her a child in the game!) For that matter, if someone wanted to have a relationship with a pedophile, and decided the best way to do it was to have plastic surgery to make themselves look like a child, would the relationship still be unethical? Pedophilia is a terrible thing because of what it does to those who are incapable of consent or of exerting power in the relationship, and who can bear drastic emotional trauma from entering situations they are not emotionally mature enough to handle; the ethical condemnation of pedophilia rests on those specific elements, so if you create a situation where the harm to a child is removed, the justification for social condemnation starts to drain away.

It’s also worth pointing out that, in some ways, part of the problem is that your robot companion doesn’t love you “unconditionally”, in the sense that the companion can feel hurt enough to choose suicide to get out of the relationship. (Which absolutely is the game laying down moral condemnation for your flippancy with the companion’s feelings.) Strictly speaking, the robot could have been hard-coded to have less human reactions and simply accept everything you did with a smile, no matter how little you regarded its feelings. Where is the line between making a toaster that claims it loves you, breeding a dog that loves you, and building a humanoid that loves you? I don’t think it is so much a line as a spectrum, and that makes ethical certainties a lot more difficult to produce. How much “less than human” does something have to be to take away how many of its rights to free will, or protections from abuse? Of course, you also could have chosen to make your companion bot a brother or sister instead of something made solely to pursue exclusive romantic relationships, which would have avoided that obvious collision.

To go back to the tabula rasa robots that have to be trained to fill jobs after being “born” into an accelerated but still human-like education system: you could simply create robots that are tremendously empathetic and caring, and set up some sort of “dating service” where they can “fall in love” and have some sort of choice, without their love being forced and “unconditional”.

Well, between when I last had this conversation and now, the Brexit vote and Trump election took place, so I think that there’s something to be said for the politics this game portrays about the backlash against automation being even more prescient than I gave it credit for.

The Industrial Revolution brought with it a fundamental collapse in the old order of the world, as monarchies gave way to democracies, or at least to constitutional monarchies that functionally became democracies, and that was just the tip of the automation spear. If and when we get to the point of having truly unemployable people who cannot find any place in the society that has been built, they are left with nothing to lose. If you then take democracy away, to go with the words of JFK, “Those who make peaceful revolution impossible will make violent revolution inevitable.” After all, they will have been given no choice left but a life of crime or revolution as a means of survival.

As I said in the earlier parts of this thread, this game mentions these problems, but really doesn’t spend nearly enough time actually dealing with them. You can’t do anything about the unemployment; it just exists, no matter what you do. The unemployment leads to electing a nationalistic president who potentially starts a needless war, but whether or not that happens - and the fact that the underlying root cause has never been addressed - never seems to come up again after you get past that chapter. What’s to stop a robot-smashing Luddite movement from forming? You can evade the whole issue forever just by having a really eloquent robot talk a single crowd down.

With that said, the lack of imagination as to how human society would be restructured beyond the point of near-total automation of primary and secondary (and even most tertiary) industries, to the point where all humans have to take jobs as artists selling art to other artists, is at least forgivable, if regrettable. It’s a nearly impossible problem to see through all the way.

Of course not - why are you even asking? Even putting the whole treason thing aside, how are you any better than any of the hundreds of military juntas that set up barbaric dictatorships throughout history? The military path is obviously not taking itself as seriously as the rest of the game, and so it explores its ethics even less than the other paths do, but the game does expect you to be mature enough to realize that violent military conquest to satisfy bloodlust and personal greed is flagrantly unethical behavior.

Once again, “Those who make peaceful revolution impossible make violent revolution inevitable.” The point of democracy is not to make perfect decisions, it is to make those in power answerable to the citizenry they lead. There’s a reason why those countries that have dictatorships are impoverished “third world” countries, while all the “first world” runs upon democracies: Dictatorships funnel all the wealth that their countries could possess into their own pockets, even at the expense of the future of the nations they run and the future potential wealth they could extract.

This game posits that robots would be better at democracy than humans… somehow. The game never establishes what “better” even means, or better for whom, and whether it comes at the expense of others. This is a game content to gloss over tremendous unemployment as something that can just linger without any real, serious ramifications, after all. Did the robotocracy find any work for those people? The game just says they put robots in charge and everything got “better”; they lived happily ever after, shut up, stop asking questions, damnit!

Maybe they just solved the unemployment problem by releasing a supervirus designed to kill 20% of the population, and thus instantly end a 20% unemployment problem? All they need to do is keep the human population dwindling as they take over more and more, and there won’t be any problems.

And you really only need to look at the likes of Google/YouTube or Valve/Steam or any other major Silicon Valley tech company, and their faith in magical “algorithms” that can replace humans so perfectly that they don’t need to do things like actually arbitrate disputes - just leave it to robots who are TOTALLY capable of human-level reasoning and don’t EVER make mistakes! If someone exploits YouTube’s Content ID system by claiming they own the sound of walking on grass, so that every uploader of a video that in any way involves walking on grass owes them royalties, that’s clearly just “better” than human eyes on the situation, right?!

The problem is that the models we use to make these predictions are flawed, so the algorithms are flawed as well. Humans, however, have the judgement and empathy to go against their models when those models are put into practice in person rather than automated, and they have the power of bureaucracy to mitigate the effects of drastic changes in leadership in large organizations and smooth out implementation where the rubber meets the road. Maybe robots would have some sort of human-like empathy and capacity to think things through… but then, how exactly are they going to make BETTER judgements than a human? It’s a have-your-cake-and-eat-it-too problem: you just say robots will make better administrators of governance because robots are just like humans, except when they’re not, and in those cases it’s OK, because they’re always better.

Pick one: either you get cold and emotionless adherence to algorithms built on inevitably flawed assumptions that cause disaster in implementation, or you get robots as emotional and human as the rest of us, who lose any claim to superiority from supposedly being unbiased, emotionless, and distant.

@Wraith_Magus,

What kind of game would you create using ChoiceScript?
Will we see a demo soon?

I’m not sure how much of that is made in jest, but it would be a difficult question, since I’m generally more interested in simulationism and procedural content, which are basically anathema to this sort of game. As such, if I made a game, it likely wouldn’t use ChoiceScript, unless it’s a lot more flexible than I assume it to be.

(In fact, I actually thought about coming back to this forum with a suggestion for doing something more like RenPy, where you could have some sort of puzzle game instead of strict pass/fail based upon stat requirements - where the number of cards/pieces/moves you get to solve a puzzle is based upon a stat, so that the games aren’t so deterministic about getting just the right stats to reach your desired goals. In particular, some of the more RPG-like Choice of Games, which seem to want giant spreadsheets of stats and force the player into managing to get all those stats up, really seem to be balking at the overall concept of the rules-light, narrative-heavy system this engine enforces.)
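
That said, even within ChoiceScript, a crude version of the idea is expressible: derive a puzzle’s resources from a stat and let the player spend them, instead of a single pass/fail stat gate. A rough sketch, with every name (`engineering`, `grace`, the labels) invented for illustration:

```
*comment hypothetical: a stat buys puzzle moves instead of gating pass/fail
*comment puzzle_solved and puzzle_failed are labels assumed to exist elsewhere
*temp moves_left 3
*if (engineering > 50)
    *set moves_left 5
*if (engineering > 75)
    *set moves_left 7
*label puzzle_loop
*if (moves_left = 0)
    *goto puzzle_failed
You have ${moves_left} moves left to open the lock.
*choice
    #Try the next combination.
        *set moves_left (moves_left - 1)
        *goto puzzle_loop
    #Probe the mechanism for a shortcut.
        *set moves_left (moves_left - 1)
        *if (grace > 60)
            *goto puzzle_solved
        *goto puzzle_loop
```

It’s clunky next to what a real puzzle engine could host, but it shows that a stat buying resources, rather than gating an outcome, isn’t impossible in this engine.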

One can do a great number of things with the manipulation of strings and mathematics.
ChoiceScript, I dare say, should not be underestimated.

Edit: (Neither should my typos, apparently.)