This thread discusses the meaning of events in the game, so it’s basically all spoilers throughout every part of it.
I greatly enjoyed this game (if nothing else, it gave me plenty to think about, and I enjoy something that challenges me to think critically), and I’ve played through it along many different paths, obtained most of the achievements, and generally read at least 95% of the text. With that said, as I played this game the first couple of times, there were a few things I found disappointing. Elly’s romance (the one I pursued first and most often) was extremely terse, and Elly as a character was rather lacking in substance. Mark asked me to explain my viewpoint on robots, but no option really allowed me to articulate my actual point of view. The game glossed over economics in a way that showed at least some vague concept of the consequences, but then flagrantly contradicted itself just a couple of pages later.
I built up a list of inconsistencies, missed opportunities, and examples of political bias, but wanted to reserve judgement until finishing the game in its entirety. As such, the list only grew, and this is now something of a monstrous thread.
However, before I start in earnest, this thread needs a series of disclaimers:
Once again, I enjoy this game. I would not play it this thoroughly if I didn’t. Criticism of aspects of a game is not a personal insult to its creator or to those who enjoy the game. All games have flaws, and discussing those flaws openly is the best way to help an author improve. (And this game’s author is listed as working on another game, at that…) Saying that the game portrays political events in a way that implies one side or another is right is not necessarily proof that the author holds those views, nor is saying that someone has an affinity for political beliefs of one type or another necessarily a condemnation or dismissal of those views by simple association with a political movement.
Saying, “It’s just a game” is not constructive dialogue. After all, it would just as much “just be a game” if it were about magic talking frogs on Mars instead of a game about the near-future. Saying something is “just a game” is a statement of the belief that no dialogue can be constructive, because there is no value in games at all to build upon. If you care enough to read and write about a game, it’s clearly not “just a game” to you, either.
This thread discusses politics and ethics, which are sensitive topics. I am not necessarily advocating the correctness or incorrectness of any one set of political or ethical beliefs in this thread (although I certainly doubt I’ll be hiding my own), but rather responding to the positions that this game advocates or allows you to role-play.
I am not injecting politics into this game. This game IS political, already. I am merely starting a conversation about the political statements this game makes.
Not everything in this game is meant to be realistic. However, it takes itself seriously enough, a large enough portion of the time, that there tends to be a clear distinction between what is clear fiction and what is supposed to be fact. The Daily Show may be comedy, but it makes enough “real” criticisms of the world, based upon actual facts and political beliefs, that its own political beliefs and fact-checking are appropriately open to scrutiny in turn.
This game, at least from my point of view, is clearly attempting to portray itself from a “neutral point of view”, with characters who have contrasting beliefs, and giving the player an option to express their own political views, and be judged by others based upon their own not-necessarily-true set of beliefs.
I can see the desire to achieve a “neutral point of view” as some sort of laudable goal, although this goal relies upon some faulty assumptions I’ll get into later.
For now, what’s important is that if this is the goal, it also invites scrutiny and criticism when it fails to achieve that goal.
As I’ve played this game through multiple paths, I’ve found misrepresented or missing points of view, glaring breaks in rational consequences, and failures to represent critical aspects of the story, either at all or in the most glossed-over terms possible. For example, no matter what path you take, the story will almost invariably illustrate the utter collapse of the global economy, with rampant unemployment, a majority of people living on welfare, the collapse of consumer spending, slowdowns in the production of goods, and unprecedented poverty for the majority of the population… and then it will say a couple of pages later that the economy is doing great, and most people live easy lives with no wants because the economy is so strong. WHICH IS IT?!
Yes, there is a premium on space and capacity to write, and sacrifices of realism have to be made for brevity and good storytelling. However, for the purposes of judging what values this game holds, that actually makes what is mentioned and what is not an even more important issue to discuss, not less. If an actor or actress only has time to mention one or two people or issues in an Oscar acceptance speech, out of all the people they know and things they care about, then what little they DO mention marks the people and beliefs that are most dear to and defining of them.
Finally, this game also takes some fairly clear ethical stands with its “humanity” “karma meter”. Not everything that affects humanity is necessarily an ethical decision, (just being a CEO is a humanity hit because you “get cranky from the stress,”) however, enough of the decisions are clearly judgements of your ethical bearings, and the consequences of high/low humanity are portrayed in such a way that it’s fairly clear the game is making a not-at-all Neutral Point Of View ethical judgement when Humanity comes into play. Charity, self-sacrifice, and compassion and commitment to family and loved ones raise humanity, while greed, callousness to the suffering of others, and ruthless ambition lower it. It is therefore perfectly valid to judge what this game ethically values by what raises or lowers humanity, and just as critically, what doesn’t affect humanity.
With that rambling out of the way, onto the main topics:
CoR endorses pacifism.
I might as well start off with the relatively benign…
There are essentially no moral choices in this game that involve violence. Even allowing yourself to be drafted to defend America is generally portrayed as extremely ethically dubious, and almost every character presented as ethical overtly regards anyone participating in government as monstrously inhuman, although I’ll get to more on that later.
For now, I’ll just say that resorting to violence of almost any kind for any reason is basically automatically a hit to humanity, which serves as the de facto “karma meter” of this game. Going fully down the military path turns everyone against you.
Oddly, the one point in the game where this is NOT the case is when you can take your robot to the shooting range. Teaching your robot how to defend themselves from any hypothetical aggressor after going on the Late Show is a humanity hit, but teaching them how to shoot a gun and getting indoctrinated with anti-government Second Amendment type stuff about how to defend yourself from the government is perfectly fine, even though they’re basically the same - merely teaching someone how to defend themselves. I might see this as a concession to the fact that you’d get all-caps-lock death threats from certain quarters if you attached a karma meter hit to simply going to a shooting range, but there is a larger pattern at work that this fits into: Instructing your robot for violence against individuals is a humanity hit, but instructing your robot for violence against government is not.
In fact, the game is almost ludicrously opposed to even tangential involvement with conflict, portraying doing nothing as Taiwan is subjected to genocide, or as thousands of soldiers are killed for believing in the robots of a scam artist you did nothing to stop, as completely Not Your Problem. If you stay at home and war profiteer from the conflict to the greatest extent you can, that’s morally A-OK, according to this game!
This exceedingly Deontological viewpoint is in absolute opposition to the dangerously cavalier Utilitarian take on Consequentialist ethics in the “Grace” ending. (Apologies for not linking these terms to definitions for those who are unfamiliar, but as an untrustworthy new member of this forum, I have a 2-link limit…) This game’s solution to the trolley problem is that it’s totally OK to let 10 people die so long as you don’t touch the switch yourself, but if it comes to robbing humanity as a whole of its agency, free will, livelihood, or even humanity’s collective capacity to understand what is happening to their world, all while secretly redirecting wealth to yourself, then as long as you trick people into being happy while you do it, it’s totally fine! This is the ethics of the Ponzi scheme: as long as I believe the scam will work, it’s not unethical for me to ask money of all the people I know, no matter how it may hurt them in the end…
At the same time, it’s never really properly explained why everyone seems so opposed to robots being in the military. It’s just assumed. I mean, seriously, it’s not like more autonomous killbots somehow increase America’s capacity to inflict harm upon any random person anywhere on the planet the President or CIA doesn’t like. America already possesses the capacity to eliminate all life on Earth several times over with nukes. Replacing flesh-and-blood combatants with Autobots that are more easily repaired, if not lacking the true sentience to mourn, can at least be considered ethical from the Consequentialist standpoint that it’s one less soldier who has to leave home and lose their life abroad. (And the game agrees with this assessment in the war itself.)
Among other things I wish I had the capacity to say to Mark, there is the fact that what machines are available to go to war doesn’t determine whether or not a war starts. It is the disposition of the nation’s leaders, and the mood of the public, and the relative ebb and flow of the economic and global political strength of the powers that be. “The growth of the power of Athens, and the alarm which this inspired in Sparta, made war inevitable.”
Also on that list: whether a person works for the government or not has little to no bearing on whether a new technology will eventually lead to weapons. The progress of technology has been a never-ending cycle of swords beaten into plowshares and plowshares beaten back into swords. NASA’s space program started from ballistic missile research designed to deliver nukes, and the Internet was once ARPANET. A technology on the market is inevitably copied, and if you argue that’s not the same because I wasn’t directly involved in consenting to its military use, then I have to ask how that’s so different from working for the government after being told your work is not likely to be used by the military in the first place. Later on in the game, when my multitool-handed medibots are given even faster fingers, the game even goes on to say that it’s an unambiguously good application of the technology. Declaring “yeah, I’m working with the army to make a robot that is programmed to run into the battlefield and save lives in the thick of fighting” shouldn’t be considered some sort of ethical failing, but apparently, this game does consider it such a thing.
At the same time, it is possible to tiptoe through the minefield of humanity hits that military participation entails, although you are constantly in danger of losing your loved ones and employees, and have to take the almost schizophrenic approach of constantly declaring how guilty you feel about things while simultaneously doing nothing to say you will attempt to change your ways going forward. Apparently, this game believes you’re completely morally culpable for any unforeseen consequence of a decision you make in collaboration with government, but as long as you keep saying what you’re doing is terrible, it’s all just fine!
Considering this rather pacifistic point of view, I actually find it odd that I had to get the Somethingian achievement for saying I was a Jain, since this game would basically only require allowing lacto-vegetarian options (or at least hydroponically-grown root vegetables and robotically-developed substitute meat) for you to be a rather good practicing Jain.
CoR assumes you know who a lot of dead celebrities look and sound like
Seriously, I don’t think there’s a single real person in the entertainment industry I didn’t have to stop reading and go look up. (I know who Alan Turing and Friedrich Nietzsche are, of course. I just don’t see how people can find interest in memorizing the names of everyone that appears in a tabloid or stand watching gossipy junk like Entertainment Tonight.)
Because the game skimps on actually describing these characters, instead merely alluding to entities that carry no connotations for the uninitiated, many sections of the story fell emotionally flat.
It also is just one of many examples of assumptions that the game makes about what biases and values the player will bring to the table. The game in general does little to accommodate a player who doesn’t bring one of a small number of worldviews with them.
CoR robot logic is ridiculously human.
By and large, this is a topic that can generally be excused under “giving the audience what they expect, not what they’d likely actually get”. After all, you’re just magically a genius who can slap together the same parts everyone else had lying around and make more with them, because that’s the sort of game you paid money to play. Still, this game tends to take a few things that seem realistic, blend them with the utterly fantastic, and rarely stop to clearly define which is which. One moment it talks about human reactions to unemployment through robotics; the next, there’s a robot strike because the robots want human wages for their work; then the game totally forgets about this, and robots are stealing human jobs because they’re paid less than human wages again…
First off, I can’t just put aside the notion that anyone could create a human-level intelligence on a cell phone with a little C++ “if you had a genius IQ.” In fact, the game almost admits this is BS with its discussions of IQ, but never actually follows through. The sum total of all the hard drive space in the world only barely beats the capacity of a single human’s brain, but the human brain is massively parallel, in contrast to the extremely linear, threaded methodology of a modern computer, especially one programmed in a language like C++ or Java that was never designed to be conducive to Artificial General Intelligence.
Sci-Fi has had this pipe dream of robots as “humans, but more logical” for nearly a century now, but, likely because many of said writers were liberal-arts types giving Kirk/Picard speeches about the superiority of human emotion over robot logic, they never bothered to actually analyze what constitutes logic in the first place. Logic is not the foolproof, objective unit of measurement people want to portray it as; all logic is based upon assumptions, and those assumptions can, by definition, never themselves be selected through logical means. The notion that any logical being without emotion would naturally choose violence and domination is a reflection of our own irrational assumptions about the value of violence and dominance.
This game, like much of that sci-fi, tends to declare “irrational” those modes of thought that actually are rational - for specific sets of assumptions and values.
In this game, high intelligence and comfort with individual choice, as defined by the autonomy score, necessarily dictate that your robot will use this choice to be prideful, disdainful of others, and domineering. The game implicitly states this is the natural consequence of intelligence, unless tempered by empathy.
I remember an article talking about the way that warehouse robots operated. Their AI follows a logical organization system that is simply incomprehensible to humans, based upon a weighted mixture of distance, need to move other stacks of objects, need to avoid traffic jams with other robots, and the frequency with which the robot is asked to retrieve any given type of product.
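To make the “weighted mixture” concrete, here is a toy sketch of what such a retrieval scorer might look like. The factor names, weights, and numbers are all my own invention for illustration, not from any real warehouse system:

```python
# Toy "chaotic storage" bin scorer: a weighted mixture of distance,
# obstruction, congestion, and how often the item gets requested.
# All weights and example values are hypothetical.

def retrieval_cost(distance, blocking_stacks, congestion, request_freq,
                   w_dist=1.0, w_block=5.0, w_traffic=3.0, w_freq=2.0):
    """Lower cost = better candidate bin; frequent requests discount it."""
    return (w_dist * distance
            + w_block * blocking_stacks
            + w_traffic * congestion
            - w_freq * request_freq)

def pick_bin(candidates):
    """candidates: list of (bin_id, distance, blocking_stacks,
    congestion, request_freq) tuples. Returns the cheapest bin's id."""
    return min(candidates,
               key=lambda c: retrieval_cost(c[1], c[2], c[3], c[4]))[0]

bins = [
    ("A1", 12.0, 0, 0.2, 0.9),  # far away, but unburied and popular
    ("B7",  4.0, 2, 0.8, 0.1),  # nearby, but buried in a busy aisle
]
print(pick_bin(bins))  # → A1
```

Note that the robot picks the distant bin over the nearby one, which is exactly the kind of choice a human watching the floor would find baffling: the “logic” is perfectly consistent, just consistent with assumptions (weights) no human observer can see.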
When true Artificial General Intelligence appears, it will likely be an alien lifeform we find incomprehensible by any terms of “logic” we recognize, one that only becomes recognizable through specific, deliberate action to make it humanoid. Rather than speaking in stilted, “robotic” language, robots would find soothing tones of voice and eloquent phrasing incredibly easy in comparison to the Herculean feat of actually recognizing language, or understanding what it wants to communicate, at all. (It’s much easier to make Siri’s tone of voice and preferred terminology pleasant than to give Siri the capacity to interpret what I say…)
In fact, if we really want human-like robots, we need to start talking about something other than hard drives, standard CPUs, and present-day programming languages, and start talking about molecular computing. That, incidentally, allows for human-like massively parallel operation while often being much slower at focusing upon a single objectively verifiable math problem… just like a human. Basically put, the physical structure and performance characteristics of the brain have a massive impact upon its mode of thought. A human brain is highly adaptive, to the point of producing phantom pain from missing limbs when the brain remaps the sense of touch onto other parts of the body. A human brain also has extremely dedicated neurons that make recollection of specific complex memories easy (“grandmother cells”), but no capacity to easily query a giant database.
Again, a lot of things that are forgivable in the sense of being a story that appeals to the sorts of people that want an unrealistically human-like robot buddy, rather than a realistic fiction…
All of this is what makes it so jarring, however, when you actively punish the player for treating the robots (and androids and gynoids) as human, or at least equal to humans.
Trying to say robots shouldn’t be abused or hurt or killed is almost always grounds for humanity loss in this game. You are actively expected to sell these creatures that are overtly trying to protest for their rights into what the game itself will call slavery.
You are explicitly described as seeing your special robot as your own child. They can call you “Father” or “Mother”. The whole line of robots, in fact, will call you Father or Mother, and then you sell them or tear them apart or strip them of their agency with an app on your phone at will because “they’re just robots”.
I don’t bring these issues up, the game does. When someone tells you that you’re selling, for a profit, artificial life that has had its capacity for consent or self-determination lobotomized out of it, and that such a thing is slavery, you have four options: further stripping them of their capacity to consent to their sexual slavery until they are incapable of recognizing it for what it is; deciding to “just make them nicer” (oh good, so the slaves will have even more sensitive emotions to hurt!); doing nothing about it; or destroying them all in a functional genocide.
Need I say why ALL these options are HORRIFICALLY UNETHICAL?
Take this in stark contrast to Data from Star Trek: TNG, or Bender from Futurama, who carry with them the same absurd concepts of logic and emotion in ridiculously human robots, but whose shows then actually have the characters treat them exactly as they would other humans. (Or in Bender’s case, often better than humans…)
The disconnect between what’s realistic and what isn’t is what makes it so sudden and jarring. In this game, Data is Picard’s slave, and any time the Enterprise is on a hazardous mission, Picard just orders the replicators to make a few more clones of Data to send out as red shirts (well, technically yellow,) without bothering to stop and make speeches about the value of artificial life.
CoR has a tenuous grasp of economics, at best.
I was going to write so much in this one that I figured it would probably be best to just make it its own thread, but I figure I should at least leave a marker.
CoR believes global warming is no big deal!
OK, so the results of global warming are that Alaska is now a much more comfortable place to live because it has a New England-like climate, while New England now has a much more arctic climate… but this isn’t worth discussing at any time when talking about MIT. Besides, you spend much of the game in California, and it’s not like multi-year droughts and wildfires could possibly impact a place like that, right? RIGHT?!
OK, so this is one of those things that could be forgiven for not really being the focus of the game, but if you’re going to talk about it at all, at least give some reason for its mitigated impact. Even just saying that some people aerosolized sulfuric acid and shelved the problem temporarily, however controversially, would give an excuse for its not being any kind of issue in the game.