
#43: Complexity is the Enemy: Why Video Games Benefit From Simplicity

October 17th, 2012

It is no secret that video games have been in a constant state of evolution. Unlike books, movies, and music, our medium is still very much a young one. We are constantly pushing the limits of what interactivity with media can do. As gaming continues to push and grow, it has begun to demonstrate a very clear trend in recent years. Rather than strive for complex, intricate systems that require a lot of patience and skill to master, most games have opted for simpler systems that are easier to pick up and play. Many people lament this change. They feel that games are being “dumbed down” and think of it as a worsening of the medium as a whole. I disagree with this assessment. I believe that simplification is a good thing for our industry. In this week’s post, I will explain my reasoning.
The primary reason simplifying games is a good thing is that it leads to a bigger audience for them. Before you moan about all the “f***ing casuals” or “’hardcore’ Call of Duty players,” please take a moment to listen to my point. Bigger audiences allow developers to do more, since their sales are likely to be much higher. A degree of risk can be taken and further innovation can be made if sales of other projects can be virtually guaranteed. As much as we complain about the dullness of yearly release schedules for games like Call of Duty (and let’s be honest, the yearly release does negatively impact Call of Duty games), the profits on these games could be used to fund other projects that are more risky and may not be as well received. (They are not, usually, because of the way AAA companies work, but they could be.) Look at Valve for a good example of the positives of guaranteed profits. The near monopoly Valve has over PC gaming thanks to Steam virtually assures them that they will make profits no matter what they do with their money. Because of this, they are able to take (Valve) time to plan out, tweak, play-test, and re-tweak all of the parts of their games to ensure that they are of high quality. While people do bemoan how simple modern games have become, those games help to attract the revenue streams that allow riskier projects to be developed, advancing the medium and catering to other tastes.
The other benefit of this extended audience, due to simplified systems, is that it brings a more diverse and interesting set of viewpoints into the industry. This may seem unimportant, but it is crucial to the advancement of the industry. Most people who have knowledge of the industry are aware that it is pretty much dominated by 20-30 something white men. While this should not be unexpected, it is detrimental to the industry. There are only so many ways 20-30 something white men can look at a subject or topic. If we can bring in more demographics and people, each with their own perspectives, viewpoints, and biases, then we can broaden both the types of games that get released and their themes and topics. In any sort of entertainment industry, injecting new people and experiences is a good thing. It helps to avoid stagnation and keeps things fresh and exciting for people. Different demographics are inherently going to have these new viewpoints due to the fact that they live different lives. Having a larger audience increases the number of people interested in games, which leads to more folks wanting to make a career out of it. This influx will invariably lead to more diverse people simply due to the law of averages. With that, we could see some much needed diversity in video games.
The second advantage to making systems simple and discarding complication is the way that it reduces tedium in game mechanics. This is something most people are at least aware of, even if they do not exactly know it, but it needs to be said anyway: Just because something is complex does not make it deep. On the other hand, just because something is simple to pick up and play does not make it shallow. Depth comes from the degree to which one can learn and master the systems at play. Though not, strictly speaking, a video game, Chess is the ultimate example of this. The game itself is simple to understand. There are only a limited number of rules one needs to know. However, everyone knows that chess is a game of intricacies and depth. There are an enormous number of possible permutations of the game board and just as many tactics to experiment with. While anyone can play to moderate success, someone who is an expert at the game will easily defeat a novice or intermediate player. We have seen video games with similarly simple, yet deep mechanics. Final Fantasy V is a good example with its job class system that has many different combinations. Another demonstration of this would be the recently released Dishonored. The game has a fairly limited tool-set that the player can use. However, the level design and game systems encourage experimentation and combination of these tools to efficiently and skillfully get past a number of different situations. Like the other games that fit this description, it falls into the category of “easy to learn, hard to master,” which is something I whole-heartedly encourage. If developers keep mechanics simple, it forces them to use those mechanics in more creative and unique ways, rather than bloat their games with unnecessary filler.
While I support this trend of keeping games simple, I must confess that we need to be careful with it. There is such a thing as over-simplification. Some games do benefit from a slight amount of complexity; it depends on the game in question. Other times, the mechanics are so simple and the level design is so mediocre that the result is a generally bad experience. It is necessary to balance simple systems that any player can use with depth that allows others to dig into those systems and try to fully master them. Depth is what is most important, not complexity. Developers need to make deep experiences in order to attract people. We do not need excess complexity in games anymore. That is a thing of the past.

#42: What is Needed to Evoke the Feeling of Horror?

October 10th, 2012

Out of all the genres of video games, few are more fascinating than the survival horror genre. It is one of the few existing genres that has the express purpose of eliciting a specific emotion. Because of this, the genre has tougher standards and is more of an evolved and practiced science than others. There are tried and true tricks and tactics developers can employ. With the release of Resident Evil 6, which was very poorly received by the gaming press and public, the subject of horror has once again become relevant. This week, I want to talk more about the genre. I will discuss what is, in my humble opinion, the best way to invoke horror and why you will rarely see new horror games outside of the indie scene.
One of the first factors that horror developers must keep in mind is the concept of atmosphere. The tone and layout of the environment are key factors in this. Horror relies on the player feeling like the environment is out to get them. They need to feel weak and oppressed, and the world needs to reflect that. To invoke this feeling of helplessness, a developer can do many things. One of the easiest things they can do is limit the resources a player has access to. By giving players limited resources, developers force them to use those resources as efficiently as possible. When confronted by a group of monsters, the player would need to decide whether it would be more beneficial to engage them, take the risk and try to run past them, or retreat in hopes of finding more resources and/or an alternate path. Making a player decide this on the spot builds suspense and tension, creating an oppressive atmosphere conducive to the feeling of horror. Another strategy for building a scary atmosphere is to use unsettling set pieces to creep out the player. Now, when I say set pieces, I am NOT referring to the explosion-filled, Michael Bay-like linear levels in a Call of Duty game. Instead, I am referring to the self-contained stories told via the environment, similar to those common in Bethesda games. Using the environment to tell small stories regarding the people in an area is a powerful narrative tool, especially in a horror game. When it comes to scaring the player, their own mind is the most effective tool a developer can use against them. Knowing this crucial piece of information, a designer can plant details in a room and maybe include a note or audio file or two to draw a scene in the player’s head. While the designer will be able to create the general idea, the actual image will be generated by the player’s mind, which means that it will be custom tailored to frighten them. This further creates an unsettling and frightening atmosphere for the game.
Keeping with the idea of making the player draft up details in their head, horror is often best achieved by showing as little as possible. Obfuscation is a very valid method for supporting the idea of horror in a video game. Many of the most successful horror games have worked well because they embraced their technical limitations and kept many details obscure. The most well-known example of this would be the original Silent Hill. Due to the limitations of the original PlayStation hardware, Silent Hill was not able to draw all the details of an area on screen at one time. In order to compensate, the developers blanketed the area just outside their draw distance with a dense fog that kept it out of view. This, combined with the unsettling atmosphere, had the beneficial side-effect of letting players use their imaginations when traveling through the titular Silent Hill and added to the tension of what was going on in the game. The other way a designer can force the player to use their imagination is by keeping a minimalist mindset when designing the game. We humans are used to living in densely populated areas for the most part. Thus, we feel naturally freaked out when we see areas devoid of life. When a designer deliberately places a few, spaced-out lifeforms (friend OR foe) in an area, it invokes the Uncanny Valley effect. Seeing a familiar urban setting without the familiar urban population is close to what we are used to, but not quite close enough that we feel comfortable. This also calls forth a feeling of isolation. One man/woman, alone against overwhelming odds with barely any ability to fight back, is inherently terrifying. A good example of this is in the free indie title, Slender. Though, like any horror game, its effectiveness depends on the person playing, the developer of Slender was highly proficient at using few details in order to terrify the player. Trapped in a small, enclosed, wooded area with exactly one foe, the Slenderman, players have no way to fight back and no one to support them. This is about as bare-bones as a horror game can be and, when it works, it works to great effect. When my friends and I played the game, one of them had to leave the room and go take a walk outside after playing in order to calm himself down. Another jumped the moment I moved the chair a few inches. This limited but precise use of details and obfuscation was highly effective, yet it is also the reason AAA developers have such a hard time capturing the essence of horror. Games like Dead Space and the newer Resident Evil games are funded with multimillion dollar budgets and top of the line technology. Because there are few limits, they make highly detailed models for all of their monsters. With foes that well-rendered, it is far more tempting to throw them all into the limelight and force players to look at them than it is to keep them in the dark and let the players keep their imaginations and sense of tension active. This makes it hard for them to truly frighten the player beyond mere jump scares.
However, despite all of this, it is important to do one last thing when building horror games, and it is something that is critical to the art of fear. For prolonged play sessions, which many gamers can be prone to at times, being tense and on edge the entire time can be incredibly taxing mentally. In order to avoid depleting the player’s mental stamina, it is important to give them well planned and spaced-out areas of safety where they can take a breath and relax. This gives them time to rejuvenate themselves, manage their inventory, and plan out their next move without the overbearing weight of an oppressive atmosphere. Generally speaking, these are also places where the designer would offer the player the option to save their game. While allowing players a chance to relax is a good thing, rooms like these, where the player does not have to worry about confrontation, serve a dual purpose: They serve as a contrast to the oppressive atmosphere. If a player experiences nothing but horrors and nightmares, they will slowly build up a tolerance to them. When developers include these periods of rest, they expose the player to a different stimulus and vary the atmosphere a little bit. It serves to remind the player that there is an opposite to being under constant threat, which in turn makes the threat that much more terrifying. Done well, these areas can make the player scared to leave them. The player will know that they are in a safe haven, but leaving will place them in a hostile environment again. This leads to some players procrastinating and waiting as long as possible to exit. While some designers may see this reluctance to move on as a sign of failure, the opposite is true for a horror game. If a player is too scared to leave a safe haven, then the developer knows he/she did their job properly. This contrast between the safety of an area of respite and the danger of the rest of the game is a strong asset that ought not be taken lightly.
Horror is a very fickle beast. It requires immense effort to uphold and maintain throughout an entire experience. Even when it is done well, its effectiveness ultimately depends on individual players and their mindsets, and it will rarely yield returns similar to those of a shooter with an equivalent budget and attention to detail. All of the factors that determine the likelihood of AAA studios attempting it and getting it right work against it. When designing games meant to invoke fear, developers need to be extremely careful and use deliberate, well-thought-out strategies for keeping players engrossed in the atmosphere of their game. This is easier said than done and is the main reason why many of the well-known titans of the genre, like Dead Space and Resident Evil, have begun to shift from horror to action. Even though this is the case, fans of the genre should not lament it too much. Humanity will always have a place for horror in its heart, and people will always be there to try to satisfy that demand. Given that many old genres like isometric RPGs have been seeing a resurgence of late, it is not implausible that even should the horror genre fade (which is highly unlikely), it too will return in due time.

#41: The RPG Cultural Divide: East Versus West

October 3rd, 2012

It is no secret that there is a huge cultural divide between Western and Eastern styles of video game development. Due to the way each region of the world developed along similar, yet fundamentally different, lines over the centuries, the games developed in each region cater to wildly different tastes and demographics. The most obvious divide we see is the one between Role Playing Games developed in the two regions of the world. Though both derive from the same RPG systems (like Dungeons and Dragons), they each took those systems in wildly different directions indicative of their cultures. We are all at least somewhat aware of this since we distinguish between Western-style and Eastern-style RPGs, but what really separates the two? This week will be dedicated to answering that question.
The first key difference between the two styles of RPGs is that while Japanese RPGs generally tend to emphasize being part of a team, Western RPGs have a higher focus on the individual. We see this manifest in a variety of ways. In Eastern RPGs, like Final Fantasy, the player rarely takes the role of a single protagonist. Instead, they play as a group of people who are working together towards a common goal. While there is often a very clearly designated “lead character” (Cecil in Final Fantasy IV or Cloud in Final Fantasy VII), they are always just the head of a group and not a singular figure who can do everything by themselves. Even in the later games of the Persona franchise, which borrows many tropes from Western RPGs, the player character is the team leader. Though exhibiting great power in their own right, they have party members and teammates to rely on. Their powers are even a direct result of connecting with others and forging bonds, still indicative of the team aspect of many Eastern RPGs. Compare this to RPGs developed in North America and Europe. In games like The Elder Scrolls or Mass Effect, the player is placed squarely in the center of the action. They are directly responsible for doing things. It is not a small team of individuals completing objectives and advancing the plot, but rather one person. Even when the designers give the player squad-mates (like in Mass Effect) or companions (like in Skyrim or Fallout), the protagonist is clearly the driving force, the strongest character in the game, and the one who takes control at key story events. The lead character’s individual contribution to the plot is highly valued over the contribution of other characters.
Another way in which Eastern and Western RPG design are separate is in the way they allow players to interact with the plot of the game. In an Eastern RPG, developers generally keep a very tight rein on the narrative. There is a plot to the game, yet the player has limited ability, if any, to influence it. When they are given agency, it is only with regards to minor details. A good example of this is the blitzball tournament near the beginning of Final Fantasy X. The player is technically able to win the tournament. However, if they do, they will only receive a slight reward for it. Otherwise, the plot advances the same way regardless of whether the player won or lost, and it is never mentioned again past that point. This is not a criticism of the game, but merely an observation of what JRPG developers expect of their players. On the opposite side of the world, Western RPGs have a very strong focus on player choice and how that choice influences the narrative. Players are given a higher degree of freedom to poke and prod. Developers ask players to look around, gather information, and make decisions that will directly affect the game experience, if not the overall plot of the game. While absolute freedom is impossible, since games are just programs and thus have constraints, they try to loosen the reins as much as possible. The ability to make choices that affect the events of the plot is best exemplified in some of Obsidian Entertainment’s latest works like Fallout: New Vegas and Alpha Protocol. These games force the player to choose between several factions, each with their own views on the events at hand, and pick sides. Another example of choice in games is the Mass Effect series, despite my criticisms. The plot itself will remain generally the same, but the player can impact events and change many of the series’s key events in significant ways. Choices have consequences, and the franchise forces the player to live with them. In essence, Eastern games took a few liberties with the concept of role playing while Western games tried to stay truer to the concept. Both are valid tactics; it all comes down to the designer’s preference.
The final point I will make with regards to the difference between Eastern and Western RPGs is that JRPGs tend to have a generally slower pace than their Western counterparts. Though there are exceptions to the rule (like the Star Ocean franchise), JRPGs are usually turn-based or semi-turn-based. Battles focus on taking in all the relevant information and making good moment to moment decisions in order to win. The speed and flow of battle is intentionally slowed in order to give players time before committing to certain actions. Tactical thinking and good strategy are much more important in these games than speedy inputs or reflexes. The Final Fantasy series is very well-known for this. It pioneered the Active Time Battle system that has become a staple of the franchise and one of the most enduring examples of turn-based gameplay. For a while, the West used turn-based systems as well. They worked well for the isometric RPGs of old (and still do). Though even back then, those turn-based games had a faster pace than their Eastern counterparts. Now that we have come to modern gaming, Western-style RPGs have become more action-oriented. Instead of being an outside force directing a group of people in a turn-based fight, games like The Elder Scrolls and Mass Effect have the player actually play as the main character in a three-dimensional space, moving around and engaging enemies directly instead of being some omnipresent overlord directing from over the shoulder. While they are not always as quick and visceral as shooters and action games, Western RPGs were always significantly faster and more direct than their Eastern equivalents: It has just become more pronounced now. It is the player themselves, as the Dovahkiin or Commander Shepard, who goes through and defeats hundreds of enemies. However, it is worth noting that this is even less of a hard and fast rule than my previous two points. It is more of a general trend, and there are multiple games that deviate from it.
I must once again stress that this is not meant to criticize the style of either region. Like my earlier comparison of Fallout 3 to Fallout: New Vegas, it is more of a compare/contrast between development styles. Depending on the goal of the video game, be it in mechanics, plot, etc., both of them have benefits and drawbacks inherent to their design. Though unlike the Fallout comparison, these two styles could effectively be considered separate genres entirely because they are that different from each other. It is fascinating that two groups can take the exact same inspirations and achieve different, yet equally viable results from them. This speaks to the cultural differences between us all. It is not a bad thing by any means. In fact, I think it is to be celebrated. That is why games are treated as forms of expression and speech. They speak to us and to our sensibilities. All these different people and philosophies brought together by a love of entertaining the masses. Truly, I can think of few things better than that. 🙂

#40: Nostalgia and the Perception of the Game Industry

September 26th, 2012

Many of the people who read the things that I write have been playing games for a very long time. We have together poured tons of hours into exploring worlds, meeting people, and doing amazing things otherwise impossible in our regular lives. When you reflect back upon the games of previous eras, the odds are in favor of you looking back fondly, having pleasant memories of your experiences. However, the opposite is often said of modern gaming. When many of us think of modern games, we do not think highly of them. What is the reason for this? Is it because gaming actually has gotten worse over the years, or is there something else to it?
In all honesty, I do not believe that it is the former. There have been vast improvements in the way games play as the years have gone on. I know this from experience. Recently, I went back to replay a franchise from the PlayStation 2 era, Jak and Daxter, because it had been re-released in the form of an HD Collection. When playing through, I realized something: Older games are much less fun than I remember. While I still enjoyed the series, I was also amazed at how much I tolerated when I played those old games as a child. I had forgotten how aggravating it was to die at the very final part of a boss fight or a platforming segment and have to start over from the very beginning due to a lack of checkpoints. The frustration and tedium born from having to do many pointless, uninteresting, and arbitrary mini-games and challenges in order to unlock bonus content and extras seemed almost alien to me. This was the moment, for me personally, where I realized how far games have come. Just like how PS2-era platformers grew out of the lives system of their predecessors (itself a holdover from the bygone arcade era), modern games in all genres have streamlined their mechanics and learned how to alleviate frustrations in order to make the experience more enjoyable. While I do not think modern games are perfect, I do not necessarily long for the “good old days” of gaming. So why do we get this feeling that old games were awesome and new games suck? This week, I will try to find the answers.
One of the most obvious reasons for the nostalgia we have for previous generations is a combination of Sturgeon’s Law and human nature. For the two of you who frequent the internet yet are completely unaware of Sturgeon’s Law, it is a rule coined by science-fiction writer Theodore Sturgeon in 1951. When critics of the science-fiction genre said that the vast majority of its works were of poor quality, Sturgeon made the realization that, in fact, all genres and all forms of creative works are composed of mostly inferior, crappy productions with only a few real gems standing out. This rule has stood the test of time and has been condensed to “90% of everything is crap!” In that sense, works from this period in gaming are no different from previous eras. However, when we look back upon the games of old, we rarely remember all of the sub-par works. In fact, we mostly focus on the best works from prior generations simply because they are the ones that became more popular, widespread, and long lasting. These circumstances conspire to make us feel like we are surrounded by a pile of crap. While that is true, it is no less true than it was before.
But even with that in mind, we have not quite accounted for all of the nostalgia. No, there have to be other factors at work. I have a number of theories as to what those factors might be. My first theory is that the internet has made it much easier for dissenting opinions to become widespread. Think about it. In the old days, the only way we could hear other people’s opinions of games was through gaming magazines and friends. Nowadays, we have ready access to the opinions of millions of people at our fingertips. Notable dissent, like the Retake Mass Effect movement and other vocal elements of the gaming community, was almost completely unheard of until recent history. This is a unique era in that respect. The prevalence of the internet has had an amplifying effect on the spread of information. Not only do we communicate faster, we form opinions and do critical thinking/analysis much more rapidly as well. Furthermore, negative opinions are much more likely to be spread online than positive ones, which results in an overall warped perception of gaming culture.
Another factor working to reinforce our nostalgia for the “good old days” is the increasingly intrusive business practices of gaming publishers. In the old days, publishers did not have much choice in what they did with their games. Since most consoles lacked reliable internet connections, they had to release the complete final product on the disk without the capability of altering it in any way. Back then, for better or worse, the product you bought was generally the product you got, simply due to the technological limitations of the consoles at the time. This meant that it was necessary to do extensive bug testing and proofreading. Nowadays all consoles (except for those of the unfortunate group of people that live in rural areas) have access to stable internet connections, which means games can be patched and extended after the fact. Of course, since publicly owned corporations tend to value profit above all else, it was natural that they would try to milk these new innovations for all they were worth with things like On-Disc DLC, Day 1 DLC, cutting corners only to patch the game later, and DRM schemes, which I have discussed in the past. Make no mistake, this would have happened earlier if the capability to do so was more widespread in prior console generations. Nonetheless, this has caused a warped perception of the games themselves. It is difficult for us to divorce the qualities of the overall game from the practices of the publishers who help create it, so it should not come as a surprise that people have begun to hold this generation in contempt.
My final theory as to why this nostalgia is so widespread is a very simple one. Because of the high risk/high reward nature of the industry today, such as it is, games have become increasingly homogenized over time. It takes many more resources and significantly more time to make a AAA game now than it did in the past; we are all painfully aware of this fact. This means that where in the past publishers could produce several different and diverse projects and be almost guaranteed to profit off their combined sales (some would flop, some would do well, yet they would generally balance each other out), it is a different story altogether for the modern industry. Publishers have to be more risk-averse in order to ensure that they mitigate losses and profit at the end of the day. Unfortunately, risk aversion tends to lead to publishers wanting to copy the thing that is most successful, even if they do not fully understand it. In other words, where we saw diverse games in the past that could cater to different player tastes and demographics, we now see a shooter, another shooter, a shooter/RPG hybrid, and still another shooter. These are not just all shooters, but they are all shooters with the same “gritty realistic” tone and bland color palette consisting of fifty shades of gray. There is less variety balancing these games out than there was in the past. While the indie scene and Kickstarter are certainly doing their part to mitigate this homogenization, they simply are not large enough to cause a significant impact. Besides, most people think about the AAA side of gaming when the gaming industry comes to mind.
Again, modern day gaming is by and large much better than gaming of previous generations. However, there is much that contributes to a perception of lower quality than previous generations. Unfortunately, in any entertainment industry, especially one as expensive, culturally pervasive, and profitable as the gaming industry, perception is everything. If people start to think that games are getting worse, they will just go find something else to spend their money on. The industry is not like food or gas. It is a frivolous expense that can be easily cut. The AAA industry will need to clean up its image and stop its unsustainable business practices if it wishes to remain the top dogs in gaming. It is a sad fact of life, but it is true and we all know it.

#39: Kickstarter and the Risks Behind It

September 18th, 2012

Many of the people who watch the gaming industry look at its trends and patterns and foresee many problems plaguing it going forward. The stock prices and profit margins for the biggest publishers in the industry are doing very poorly. AAA games are given ridiculous and unnecessarily high budgets that transform mild successes in terms of units sold into spectacular failures in terms of profit. Lastly, a lot of the innovative elements of the industry feel stifled by the increasingly oppressive environment fueled (knowingly or otherwise) by large publishers. Many great ideas and developers have had difficulty getting funding in this climate for a variety of reasons, like niche appeal and risky ambition. People were getting fed up. And then something interesting happened. Double Fine, headed by Tim Schafer, decided to make a bold and up-until-then unheard-of move: They decided to use Kickstarter to crowd-fund the studio’s next project, allowing them to make the game of their dreams free of the influence of publishers. It made headlines and became very successful. This inspired other developers to place their own ideas on Kickstarter, including projects like Wasteland 2 and Ouya. Kickstarter campaigns have once again reached the headlines with Obsidian’s Project Eternity.
For those of you who do not follow the industry, allow me to explain the gist of Kickstarter and crowd-funding. Kickstarter is a website that allows people to post their ideas for “creative projects” in the hopes that people will take interest in them and donate money towards funding the project. (Note: This does not apply exclusively to video games. It can be any creative project that has a definite end goal and results in the creation of something.) When a project is posted, the poster sets a goal for the amount of money to be received from donors and the time allotted to reach this goal, usually within the span of one month. During this time, people pledge money to the project. While the money never changes hands until the very end, backers promise, as specified in the Terms of Service, to keep enough money in their account to cover their pledge. At the end of the time period, if the money pledged to the project meets or exceeds the goal posted at the beginning, then the project poster takes the money, after Kickstarter deducts its fee for services rendered, and agrees to spend it on completing the project to the best of their abilities in a binding legal contract enforced by Kickstarter’s terms. If the project fails, then no money exchanges hands and the project goes unfunded. This means that the project has to set a goal high enough to theoretically cover the estimated costs of the project, but low enough to avoid falling short of its goal, which provides an interesting dynamic. For those with a creative mind, the concept of crowd funding can be extremely useful. Naturally, it would make sense to extend this to video game development, since it too is a creative endeavor. However, there are some unfortunate realities that we need to accept with Kickstarter.
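To make the all-or-nothing pledge model above concrete, here is a minimal sketch of the funding rule in Python. The class, field names, and fee rate are my own illustration, not Kickstarter’s actual API or numbers.

```python
from dataclasses import dataclass, field

@dataclass
class Campaign:
    goal: float                        # funding goal set by the project poster
    fee_rate: float = 0.05             # assumed platform fee; the real rate may differ
    pledges: list = field(default_factory=list)

    def pledge(self, amount: float) -> None:
        # Money is only promised at this point; nothing changes hands yet.
        self.pledges.append(amount)

    def close(self) -> float:
        # At the deadline: if total pledges meet or exceed the goal, the creator
        # receives the money minus the platform fee; otherwise nothing is collected.
        total = sum(self.pledges)
        return total * (1 - self.fee_rate) if total >= self.goal else 0.0

camp = Campaign(goal=1_000_000)
camp.pledge(25)
camp.pledge(150)
print(camp.close())  # 0.0 -- the goal was not met, so no money changes hands
```

The tension described above falls out of `close()` directly: set the goal too high and the whole campaign returns nothing, set it too low and a “successful” project may not actually cover its costs.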
The first thing we need to accept with Kickstarter is that successfully generating enough funds through the site is more difficult than most people are led to believe. A successful Kickstarter campaign needs to be able to generate enough buzz and publicity to attract potential funders. This is easier said than done. The project in question would need to set itself apart from other projects by providing a unique gimmick, an interesting concept, or a pedigree that other projects would lack. We have seen this more than once. The Kickstarters that are most well known are from established developers and gaming personalities. Think of the campaigns I listed at the beginning of this article. Out of the four of them, three came from highly established brands and/or names in the industry. The last one, Ouya, was from a less established industry veteran and had the good fortune to be one of the gaming press’s darlings. All of them had a bigger claim to fame than most Kickstarter campaigns have and thus attracted a larger crowd, meaning they did not have to worry about the second half of the equation. They had enough publicity and reputation to gain funding. Should another campaign come along that attracts enough people, it then needs to convince those people to part with their money in the name of funding the project. To do that, they need to be convinced that the campaigner and their team have the ability to actually create the game. Writing up a design document alone will no longer cut it here. It would be necessary to have a working model of the game and either a gameplay footage reel or, preferably, a demo version available for play. Funding and/or time would be essential in making this a reality, so a layman making a game from start to finish using only Kickstarter funding is highly impractical. All of this combined results in less than half of all gaming related Kickstarter campaigns earning enough to reach their goal. (Note: That statistic includes tabletop games as well as video games.)
That still leaves a number of campaigns that achieve the goal and get successfully funded. You may be tempted to believe that because they received money, they now have to build the game. There is a bit of a problem with that, though, which leads me to my next point: We have no guarantee that a Kickstarter campaign intended to make a video game will actually result in the creation of a video game. Before any of my readers panic, let me make this perfectly clear: by the terms of service put forth by the folks that run Kickstarter, which all of the users agree to, all of the funding for a Kickstarter project MUST go towards that project. The person/company who ran the campaign will be held legally responsible if they take the money and instead go on a vacation in the Bahamas or do anything else with it that could not realistically benefit the project. In that sense, the contributors can feel secure in their investment. However, just like investors and stock owners of major corporations, Kickstarter campaign donors are not guaranteed a return on investment: There is always a risk involved. While the money gained does have to go towards the project outlined on Kickstarter, campaigners are not obligated to succeed and create what was specified. There are very good reasons (legal, practical, and moral) not to hold them responsible for the success of the project, but it is an important thing to make note of. Unlike AAA publishers, who have the authority and responsibility to check up on a project and oversee its development, possibly firing and directing staff on the project (for better or worse), Kickstarter donors have no form of oversight unless the campaigner chooses to give them one, so the gamble is significantly higher. They are going on blind faith that the creator has the skill, knowledge, and time to complete the project. It is not a deal-breaker, since donors acknowledge that they will make no profit beyond the rewards specified by campaigners and contributions are rarely high enough to make people worry whether they made the right decision, but it is something we need to acknowledge regarding the crowd-funding model.
The last thing I wanted to point out with the trend of Kickstarter funding is that not every game would work as a Kickstarter campaign. In fact, the games that would potentially benefit from this method of acquiring funds cover a very narrow spectrum. An ideal Kickstarter game would have a budget normally out of reach for most people and small, start-up companies. However, it cannot be too big, or it would never be able to acquire enough funding. This would mostly cover games along the lines of indie games, two-dimensional platformers, isometric Role Playing Games, and others along those lines. Games like that would only require anywhere from a couple of thousand to one or two million dollars in funding. While many people would scoff at me for saying “only a few million dollars,” keep in mind that most games in the AAA market cost several tens of millions of dollars or more. Even in the PS2 era, some games cost around ten million dollars or more to produce. Getting that much money through a Kickstarter would be next to, if not outright, impossible. The only semi-reliable way to acquire that kind of cash is through the financial backing of large publishers like Ubisoft or EA. Rarely do we see a group of people or a business that has the savviness to remain independent while funding and making consistently good games. It is much more difficult than it sounds, which is why publishers are still around. As much as we dislike companies like Activision and EA, they do serve a purpose. I feel like this is common sense to a degree, yet I do find that people on the internet sometimes seem to forget this simple fact.
I applaud this use of Kickstarter to begin funding projects that might not otherwise see the light of day. However, we do have to acknowledge the limitations of crowd-funding. Our industry is one that is fundamentally fueled by high-risk, high-reward investments that consume tons of money. While we can debate the necessity of AAA budgets being as high as they are (I am very outspoken in my own opposition), they are a fixture of this industry and fuel many of the gameplay advancements we have seen. Until a decent conversation can be had about the fundamental nature of the industry, such as it is, Kickstarter will be far from feasible as a suitable alternative to the current business model. Even after we reevaluate AAA gaming (considering the state of the industry, it is inevitable that somebody will come in and change how it gets run), I remain unconvinced that Kickstarter could do very much beyond small start-up projects. Most projects would simply require far more money changing hands than is feasible through crowd-funding. So while I do praise this wave of innovation, I urge you to remain level-headed regarding the use of Kickstarter and realize that it is not the great new way to fund video games that many people make it out to be.

#38: The RPG Genre: Back to the Drawing Board

September 12th, 2012

Most people who read this regularly are aware of how much I love Role Playing Games. I love them for their emphasis on story and player interaction with the story through their mechanics. It is fun to play through these games and be truly immersed in a brand new world and its story. However, these games are far from flawless. Being video games, they can only do so much in terms of simulating a world. Since all games are just computer programs, their worlds have to be represented in ways that a computer can easily process and display. In the old days, the limitations caused by the technology of the time inspired a number of RPG genre conventions. That was way back then. In the modern day, many of these technical limitations no longer exist because of the way technology constantly evolves. Developers are no longer bound by the technological limits of the past and are capable of doing much more with their games.
However, many of the old conventions and styles that were seen back then, once used to abstract many of the things that were (and sometimes still are) difficult to represent any other way, are still present in the RPGs being created in the here and now. A few days ago, I had a conversation on Twitter with escapistmagazine.com contributor Grey Carter about some of these mechanics that have withstood the test of time, specifically whether or not it is worth it to keep these mechanics around. In this week’s post, I will apply my analysis to the topic and see if video really did kill the radio star. Is it time we rethought RPGs and how they act in a mechanical sense?
As usual when writing an article like this, it helps to define what I am referring to so that we are all on the same page. When I refer to an RPG, I mean any story-focused game with a strong sense of character progression and/or customization. This can mean anything from the Final Fantasy games of old all the way to more modern games like Fallout: New Vegas or Mass Effect 3. I will be taking a look at how these games use old school mechanics and why they use them in the way that they do. Then, we will see if it is possible to do things differently now, either by making these games more immersive or by improving them in terms of control, role-playing, or entertainment value.
One of the biggest conventions of the RPG genre is the use of skill points as a way to represent the player character’s proficiency with regards to certain disciplines, both in and out of combat. In a (semi-)turn-based RPG, it makes sense for characters to have stats that represent their ability to perform certain actions successfully, be it firing a gun, casting a spell, swinging a sword, hacking a computer, or talking their way out of a dangerous situation. Since it is difficult to have much in the way of player input in a turn-based game, skill levels are the only way to differentiate one player’s character and style from another player’s. The only way to show player progression in a turn-based game is to increase their character’s stats and skills, which affect overall damage output and chance of success. Considering the technical and mechanical limitations of such games, implementing a system of stats and skills that determine how talented the character is makes total sense.
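As a rough illustration of how that stat-driven resolution works, here is a small sketch in which the character sheet, not the player’s reflexes, decides the outcome. The formula and numbers are invented for illustration; they are not taken from any particular game.

```python
import random

def skill_check(skill: int, difficulty: int) -> bool:
    """Stat-driven check: the character's numbers decide the outcome.
    The formula and clamping values here are illustrative only."""
    # Higher skill relative to difficulty means a better chance of success,
    # clamped so neither outcome is ever a certainty.
    chance = max(0.05, min(0.95, 0.5 + (skill - difficulty) / 100))
    return random.random() < chance

# Two characters attempting the same action differ only by their stats.
print(skill_check(skill=75, difficulty=50))  # succeeds most of the time
print(skill_check(skill=25, difficulty=50))  # fails most of the time
```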
When we move into a three-dimensional, action-oriented space, this quickly becomes irrelevant. In an action-RPG like the more recent installments in the Fallout franchise, shooting mechanics and player skill are now factors in the success of the player. However, in these games, there still exists a system of stats and skills that influence the outcome of confrontations and events. Improving weapon skills increases the damage output and accuracy of weapons governed by them, while doing the same to non-combat skills allows the player to do more with them via Speech checks and minigames. Sadly, I do not think any of this is necessary. Since we now have a fully realized world with combat comparable to (though not better than) many First Person Shooters and minigames that require player skill to execute properly, it makes less sense to abstract these elements. For RPGs like these, it may no longer make sense to even have skill levels and points for the character, since the player’s own skill, which will improve over the course of the game, can be taken into account. This can even be extended to non-combat scenarios. Lockpicking and hacking can be done through minigames, as demonstrated by recent titles like Fallout 3, whose lockpicking is widely regarded as one of the best infiltration minigames of all time, and Deus Ex: Human Revolution, which had a very interesting and enjoyable hacking minigame (sadly marred by a few questionable design decisions in the game) and the best conversation mechanic I have ever played with. Fallout 3 also had a hand in proving that skill points in these non-combat aspects of an RPG are completely arbitrary. In the game, it was impossible to even make an attempt to pick a lock unless the player had a high enough Lockpicking skill to do so. This makes even less sense when you realize that these higher-level locks are genuinely tougher to pick anyway. It is more logical to either make the minigame more difficult or have a skill that governs which locks the player can pick. Having both is excessive. Though I understand that many would be wary of introducing player skill as an element of play, since it has the potential to leave some players out due to a lack of it, this is why modern games have adjustable difficulty as a way to equalize the imbalance between skilled and unskilled players. In the end, it is a design choice to be made by the creators of the game. I just believe it is worth thinking about this decision going forward, since some games simply have no use for these mechanics.
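To make that contrast concrete, here is a hypothetical sketch of the two approaches: a hard skill gate of the kind described above versus simply letting the minigame’s difficulty scale with the lock. The function names and numbers are my own invention, not any game’s actual implementation.

```python
def can_attempt_hard_gate(player_skill: int, lock_level: int) -> bool:
    # Hard gate in the style described above: below the threshold the player
    # cannot even try, regardless of how good they are at the minigame.
    return player_skill >= lock_level

def minigame_sweet_spot(player_skill: int, lock_level: int) -> float:
    # Alternative: always allow the attempt, but shrink the minigame's
    # forgiving "sweet spot" as the lock outpaces the character's skill.
    base = 30.0  # degrees of forgiveness on an easy lock (made-up number)
    return max(2.0, base - max(0, lock_level - player_skill) * 0.5)

print(can_attempt_hard_gate(40, 50))  # False: locked out entirely
print(minigame_sweet_spot(40, 50))    # 25.0: harder, but still playable
```

The second function is the “make the minigame more difficult” option: the character stat still matters, but it never hard-blocks the player from trying.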
The other common trope used in RPGs that I will be going over is the concept of vendor trash. By vendor trash, I mean items that take up inventory space, yet only serve the purpose of being sold to merchants for money. I can understand why developers do this even today. It makes no sense for the player to kill a wolf and have it drop five gold coins. To facilitate immersion, they would instead have a wolf drop a pelt that the player can then sell to vendors to make money. Though this concept is immersive and makes sense for a world, it is not exactly fun for the player to have to carry around tons of loot that takes up valuable inventory space which could be used to carry more useful items like weapons, armor, medical supplies, and food. While I am a fan of forcing players to make meaningful choices, it is hardly meaningful to force players to choose between picking up a new sword or picking up a gold ingot that can be sold for money used to purchase a new sword.
In my opinion, vendor trash still has a place in RPGs, but it should be handled differently. Since vendor trash is effectively just gold waiting to be cashed out, it should be in a separate category and take up no space. While some may argue that it is not immersive to carry all sorts of vendor trash and not have it weigh the player down, I would argue the contrary. When a designer forces the player to interact too much with the underlying systems of a game world, the player starts to lose their immersion. Thus, it is important to balance ease of use with simulation, which is far easier said than done. Besides, by that logic, it would be equally unimmersive to allow the player to store tens of thousands of gold coins in their inventory without taking up space.
It is also possible to use vendor trash in other ways. For example, Final Fantasy XII, which (as an interesting side note) has the unlimited inventory space that many JRPGs do, did away with random animals dropping gold coins when they die in favor of vendor trash (an abstraction of taking their pelts). It also introduced a new type of good in the various shops called Bazaar Goods. How it worked was that when the player sold certain combinations of vendor trash to dealers, it would unlock certain items and item packs in the Bazaar. The game explained that selling vendor trash to various stores introduced these component items into the economy, allowing people to use those items in the construction of new ones to be put up for sale. This was an interesting way of making seemingly useless items have more purpose beyond just being gold in item form. After all, people would start making items with the goods that adventurers gather and sell. Designers should put more thought into systems like this, because RPG players will usually end up interacting with the economy very often. It is worth it to make this experience as painless, yet interesting, as possible.
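The bookkeeping behind that idea is simple, as the sketch below shows: track what has been sold and unlock a bazaar good once a recipe’s requirements are met. The recipes and item names are invented for illustration and do not reflect Final Fantasy XII’s actual combinations.

```python
from collections import Counter

# Illustrative recipes only; the real Final Fantasy XII combinations differ.
BAZAAR_RECIPES = {
    "Hunting Crossbow Pack": Counter({"Wolf Pelt": 3, "Bent Fang": 2}),
    "Traveler's Kit":        Counter({"Wolf Pelt": 5, "Glass Shard": 4}),
}

sold_loot = Counter()   # running total of vendor trash the player has sold
unlocked = set()        # bazaar goods already made available

def sell_loot(item: str, qty: int) -> list:
    """Record a vendor-trash sale and return any bazaar goods newly unlocked."""
    sold_loot[item] += qty
    newly = []
    for name, recipe in BAZAAR_RECIPES.items():
        if name not in unlocked and all(sold_loot[i] >= n for i, n in recipe.items()):
            unlocked.add(name)
            newly.append(name)
    return newly

print(sell_loot("Wolf Pelt", 3))   # []
print(sell_loot("Bent Fang", 2))   # ['Hunting Crossbow Pack']
```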
To be fair, both of these mechanics were in place well before RPGs existed in video game form. Old RPGs, both from the West and from the East, took inspiration from Dungeons and Dragons and other tabletop games in that vein, and these mechanics are, as such, deeply entrenched in the way people think about RPGs. Back then, they had use as a gameplay abstraction of otherwise realistic events. While a healthy respect for tradition is always a valuable thing to have, I feel like it is necessary to analyze old ways of thinking to see if they are still necessary in the modern era. When technology and game design evolve, some of the old ways of thinking no longer apply. In the cases I outlined above, both mechanics still have merit in modern games, but they may need to be tweaked a little in order to make them more palatable. Though I am sure there are other examples of outdated mechanics persisting longer than they should have, I cannot think of any more that need discussion. Nonetheless, it is important to do an analysis like this if we want to improve this medium as a whole.

#37: Mass Effect 3: The Problem With its Multiplayer Microtransactions

September 5th, 2012

By now, people are well aware of the many failings of the third Mass Effect game: It had Day 1, On-Disc DLC that seemed far too integrated with the game to be anything but an obvious cash grab, most of the game failed to acknowledge the player’s choices from previous games and made them feel irrelevant, and the ending was a failure in more ways than one. However, there is one area of Mass Effect 3 that people tend to ignore: the cooperative multiplayer. I am not here to talk ill of the multiplayer mode in its entirety. In fact, I enjoyed my brief time with the mode. They used the core mechanics of the game in a very clever way to produce an enjoyable and cohesive experience. However, I have one big gripe with the cooperative mode. That would be its use of microtransactions and how they affect the overall experience.
Theoretically, I am not against the concept of microtransactions. It is fine for developers to charge for unlock codes to things players can get by just playing the game normally. From a business standpoint, it makes sense and is a good way to increase the income generated by the game. It also allows players with less free time to compete with players who play constantly by using money to gain the rewards normally obtained through gaining experience. Both parties, the creators and the consumers, stand to benefit from offering this option. Considering the state of the AAA industry, it makes sense for a publisher to try to make as much money as they can off an investment while maintaining the good will of the fanbase, and this is one of the best ways to do that.
It is not the fact that Mass Effect 3 had microtransactions that bothered me. What bothered me is the fact that they allowed microtransactions to negatively affect the design of how the game progresses in another obvious and jarring cash grab. Allow me to explain. The way progression in Mass Effect 3’s cooperative mode works is that when the player finishes a match, they gain experience towards the class they played as for that match, as well as in-game credits which can be used to purchase weapons, characters, upgrades, and items. Here is where things get interesting. It is impossible to directly purchase these items. Instead, the player must purchase packs which have a random chance of dropping the item wanted. As icing on the cake, the player does not need to use in-game credits to make these purchases. If they do not wish to go through match after match to build up credit to buy packs, they can always use real world money to purchase them. I can only assume that the reason they chose to handle microtransactions in this manner is to maximize profits. However, handling it in this manner ruined the player experience in a few ways.
The biggest way this ruins the experience is that it can potentially negate any advantage one might gain through microtransactions. The draw of using microtransactions, at least on the player’s end of the bargain, is that it allows a player to earn rewards for a small fee that would otherwise require time to unlock normally. It is paying for expedience. This is lessened through the use of packs. The developer cannot guarantee that someone paying via microtransactions will receive the item they wish to buy, which defeats the purpose of having the option. (Again, from the consumer standpoint, not the standpoint of the publisher, whose goal is to make money.) Rather than give customers a guaranteed payoff for spending hard-earned money on the game, they give them the chance to waste their money by purchasing packs without getting anything of value out of it. The only reason I can see to use this model is to capitalize on people’s inability to gauge purchases and hope that they spend tons of money on the store before realizing exactly how much they spent. While part of me thinks that this is sheer genius on the part of EA, the other part sees nothing but a slimy and unrewarding business model surrounding an otherwise enjoyable game mode.
The other reason this negatively impacts the cooperative mode is the fact that it completely randomizes the reward system. A big problem with the system Mass Effect 3 has in place is that there is no way to reduce the pool from which you draw items. The same list of items can drop from all of the packs in the game. The only difference between packs is the likelihood of obtaining rare items. Many players have bought hundreds of packs and only obtained a few items in the category of equipment they will actually use. Countless stories on the internet exist where a player who mainly uses Generic Weapon Type X gets nothing but Type Y from the packs they are buying. This results in being unable to upgrade their equipment to more powerful weapons for several experience levels’ worth of matches, meaning that they are farther behind than other players who have been favored by the random number gods. When designing this system, they should have taken into account how it could and would affect the overall progression of the players of this cooperative mode.
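To see why a single shared pool hurts, here is a back-of-the-envelope simulation. The pool size and item names are invented; the point is only that when the items a player actually wants are a small fraction of one big pool, the expected number of packs needed to get one balloons.

```python
import random

# One shared pool: the two items this player actually wants sit alongside
# everything else, so any single pack is unlikely to help them.
ITEM_POOL = ["Wanted Rifle A", "Wanted Rifle B"] + [f"Other item {i}" for i in range(28)]

def packs_until_wanted(seed: int) -> int:
    """Count pack draws until an item the player wants finally appears."""
    rng = random.Random(seed)
    draws = 0
    while True:
        draws += 1
        if rng.choice(ITEM_POOL).startswith("Wanted"):
            return draws

trials = [packs_until_wanted(seed) for seed in range(500)]
print(sum(trials) / len(trials))  # roughly 15 packs on average with a 2-in-30 chance
```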
Now, I have come down very harshly on the microtransaction system included in Mass Effect 3. However, I do believe it could have worked. There are alternatives the team at Bioware could have used to include microtransactions while preventing, or at least alleviating, the progression problem that underlies the current method of including them. The first of my proposals involves scrapping the trading card game-like system we have now in favor of one of direct purchases using either in-game credits or cash. In this system, every weapon, character, and item is unlockable from the start. Each of them would be assigned a price in both in-game credits and real world cash. To unlock an item, the player would need to either save up the credits through playing matches or outright purchase it with money. Upgrading weapons would also cost credits or money. Since we are no longer using random draws and are allowing people to pick out and save up for items, the prices would need to be elevated in order to compensate. I would advocate this system because it would place player progression more in their own control. This way, they do not feel like they are getting nothing out of playing the game or spending money, because they know exactly what they are saving up for or buying. There is complete transparency and no one will come out angry or disappointed. While I personally consider this to be the ideal, I can see why a publisher might not like it. It does reduce the amount of money they can earn through microtransactions and it reduces the Skinner Box-style enjoyment a player might feel when buying packs.
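A sketch of what that direct-purchase catalog might look like follows. The items are named after real Mass Effect 3 multiplayer unlocks purely for flavor; the prices and the function itself are entirely made up.

```python
# Every item is visible from the start with both a credit and a cash price.
# Item names are for flavor only; prices are invented.
CATALOG = {
    "Black Widow Sniper Rifle": {"credits": 120_000, "cash_usd": 4.99},
    "Asari Adept":              {"credits": 60_000,  "cash_usd": 2.99},
}

def buy(item: str, wallet_credits: int, pay_cash: bool = False):
    """Unlock an item directly, either with saved-up credits or real money."""
    price = CATALOG[item]
    if pay_cash:
        return wallet_credits, f"Charged ${price['cash_usd']:.2f} for {item}"
    if wallet_credits < price["credits"]:
        return wallet_credits, "Not enough credits -- keep playing or pay cash"
    return wallet_credits - price["credits"], f"Unlocked {item}"

print(buy("Asari Adept", wallet_credits=75_000))               # spend credits, no luck involved
print(buy("Black Widow Sniper Rifle", 10_000, pay_cash=True))  # pay for expedience instead
```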
With that in mind, I have another proposal. My next plan would be to shamelessly rip off the microtransaction/drop system from a very successful free-to-play game: Team Fortress 2. I am sure the vast majority of those reading this are already familiar with Team Fortress 2's system, but I will do my best to explain it for those who are not. In Team Fortress 2, the player can equip items that have positive and/or negative effects on the player character. These items are available from the in-game store for real-world currency. However, players do not have to spend money to obtain them. It is possible, through playing the game, to earn these items as random drops. They occur semi-randomly and often enough that the player obtains them at a steady rate. The strength of this system is that it keeps the Skinner Box manipulation of players, giving them the satisfaction of getting great items after enough tries, yet allows players who do not like that style of play to purchase the items they want directly. This provides an outlet for those who dislike random number generators while maintaining the option to just keep playing for a chance at the item. If going this route, I would advocate more frequent drops than Team Fortress 2 has, as their drop rates are a little low for my tastes and a higher rate would make drop hunting less annoying. As an option in general, though, this style is very appealing.
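For those who want to see the shape of such a system, here is a toy simulation of a timed-drop loop in that spirit; the drop interval and chance are numbers I picked for illustration, not Team Fortress 2's actual rates:

```python
import random

DROP_CHECK_MINUTES = 15   # how often the game rolls for a free drop (assumed)
DROP_CHANCE = 0.4         # chance of a drop on each roll (assumed)

def simulate_session(hours_played):
    """Count how many random drops a player would see in one play session."""
    rolls = int(hours_played * 60 // DROP_CHECK_MINUTES)
    return sum(1 for _ in range(rolls) if random.random() < DROP_CHANCE)

print(simulate_session(3.0))  # a steady trickle of free items from just playing
# Impatient players skip the rolls entirely and buy the exact item from the store.
```

The important property is the dual path: the random trickle keeps the Skinner Box for those who enjoy it, while the store is always there for those who do not.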
But let us once again assume that EA is not sold on that style of handling microtransactions. Let us go further and say that they are insistent on using the trading card game-like booster pack system that takes both in-game credits and real-world currency. Even then, a few minor tweaks to the system already in place would improve it. The biggest problem with the system is how it can hand the player a really long run of bad luck by giving them weapons they have no desire to use. This is because every pack purchased draws from the collective pool of every item in the cooperative mode while only affecting the spawn rates of rarer items. What we can do to make this less luck-based is to divide packs into different categories. It should be possible to split the weapons between packs so that there are dedicated packs for SMGs, Assault Rifles, Sniper Rifles, Shotguns, and Pistols. Doing this gives the player the ability to control the general type of item dropped while maintaining the random element inherent in the system. It is similarly possible to do this with new characters by giving them a dedicated pack. Of course, prices for these packs would need to be adjusted. If they wanted to, they could still offer packs that can contain anything, but those would need to be cheap to encourage their purchase over the others. By giving players slight control over drops (by affecting which type of item drops), the possibility that the player is hurt by a random draw is minimized, if not outright eliminated. It also preserves the Skinner Box that can encourage players to continuously play the game or spend money on it.
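A small sketch of the category-pack tweak might look like this; again, the pools and prices are placeholders rather than real Mass Effect 3 data:

```python
import random

# Dedicated packs only draw from their own category, so a shotgun player buying
# shotgun packs can no longer be buried under SMGs they will never use.
CATEGORY_POOLS = {
    "shotguns": ["Shotgun Z", "Shotgun Q", "Shotgun R"],
    "sniper_rifles": ["Sniper Rifle A", "Sniper Rifle C"],
    "anything": ["Shotgun Z", "Sniper Rifle A", "SMG X", "Pistol Y"],
}
# The catch-all pack is priced low to keep it attractive despite its wider pool.
PACK_PRICES = {"shotguns": 30_000, "sniper_rifles": 30_000, "anything": 10_000}

def open_category_pack(category):
    """The draw stays random, but only within the category the player chose."""
    return random.choice(CATEGORY_POOLS[category])

print(open_category_pack("shotguns"), "for", PACK_PRICES["shotguns"], "credits")
```

The randomness, and with it the Skinner Box, survives; the player just gets to aim it at the part of the arsenal they actually care about.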
This addition to the Mass Effect franchise, the cooperative mode, is a fun extra added to the game. It has all of the ingredients of a good time. To me, it is good verging on great. The mode was marred, however, by the way it handled microtransactions. They could have been done well and served as more than just another cheap attempt to make more money. (Though that would always have been a motive; there is no avoiding it.) They could have added to the accessibility of the game, but it has to be done in a more intelligent way. The system in place in Mass Effect 3 feels sloppily done and ham-fisted into the mode, giving players the impression that they are being exploited by corporate. Since free-to-play seems to be becoming a bigger part of the industry, it will be even more important going forward to master the inclusion of microtransactions and their effect on the game. Hopefully, developers and publishers alike can learn from this game's failures and move forward.

#36: Game Reviews and the Gaming Press: Why Do They Suck?

August 29th, 2012

The gaming press is a pretty powerful entity in this industry. The consumers go to them for the latest in gaming news, including what is about to come out, what companies are partaking in less than consumer-friendly business practices, and other news a gamer might want to know. The publishers rely on them to get information to their customers and effectively spread the word on projects they are working on. However, the press also has one critical function to serve: To review video games. Game reviews are supposed to be an important part of the gaming ecosystem. It is, theoretically, immensely helpful when contemplating the purchase of a game to have a collection of opinions regarding it. They should help to distinguish between bad games, average games, and good games, but how effective are they in that capacity and why do they sometimes get called into question? This week, I attempt to figure out the answer.

Before we get to that, we need to understand the process behind most reviews on major game review websites. More often than not, major review sites receive copies of the games they are to review a couple of days to a week before the scheduled release date. The reviewer in question then has to use their free time (I am not aware of a review outlet that actually allots time for its staff to play games on the job, since they have other duties to attend to, though I am sure some exist) to play far enough through the game to get a good understanding of it (since it is rare that they play through to the end), yet leave enough time to gather their thoughts and write a review. The pressure is on them to finish the review as quickly as possible, because the faster it gets uploaded to the site, the more page views it gets and the more advertising money the site receives.
Obviously, this style of reviewing, which is commonplace in the industry, is not very conducive to writing reviews of the highest quality. One of the biggest issues is time. Reviewers are rarely given adequate time to write very good reviews. First of all, games nowadays usually take around 10-12 hours to beat (30 or more for an RPG). For someone who has a job, a social network of people they communicate with regularly, and many of the responsibilities life throws at us, playing through a game that long in so short a time while trying to analyze it critically is asking a lot. It is not likely that the person in question will be able to thoroughly explore a game and look for pros and cons beyond what is immediately obvious. Any extra moment spent playing the game reduces the already scarce amount of time left over to gather their thoughts and write the actual review. In the remaining days before the deadline, they need to structure their thoughts on the game and all of its aspects, then plan out and compose a written review that encapsulates those thoughts as best it can. Considering the timeframe these reviews are written in, it is a miracle that they are anywhere near coherent. The amount of time to write and proofread them is nowhere near enough to do anything more than list off what the game does and whether the reviewer thinks it is good or bad. Going into any real detail is almost completely out of the question.
The less obvious, yet more critical, problem is the window in which the review is released. Many people would, quite logically I might add, say that the best time to release a review of a game is somewhere around 3-4 days before or after it comes out. However, I disagree with this for a couple of very important reasons. The first is that a review published within the release window will not sway people who are on the fence about buying the game. When a game is released, the only people who buy it on the very first day are people who were already eagerly anticipating it. Regardless of how positive or negative the review is, they will buy the game. All the review will do is reinforce the decision in their head (even a negative review will be seen as a personal attack and will not be recognized as a legitimate opinion). The same can be said about negative reviews and people who will not buy the game. Anyone who is on the fence regarding the purchase will not be immediately swayed. They will often wait anywhere from a few weeks to a month, which brings me to my second point regarding the window of time: genuinely good conversation about a video game only arises in the weeks after its release. It is only then that people have bought, fully played, and digested a game and all of its parts. This is when people can critically analyze the minutiae of the game in question and figure out exactly why it is as good or bad as it is. We saw this with the Mass Effect 3 ending controversy. Most reviewers, in a rush to get their reviews out on release date, did not get the chance to play through the entirety of the game and reach the ending. A few weeks later, after the game was out, many people actually had the chance to experience the finale of Mass Effect 3 and react to it. It was only then that people were able to look into it and think about what worked and did not work about it. People do not form opinions in a vacuum. Since we are social creatures, we think and form opinions by talking with other people. By gathering information from different viewpoints and perspectives, we form better opinions. This is why I do not think releasing reviews on the same day a game comes out is a good idea. It is difficult for a reviewer to form detailed, well-informed opinions when they have not had the time or even the ability to discuss the game and gain the necessary perspectives of other people, nor the time to fully comprehend and analyze their own perspective. It is a recipe for disaster.
Another serious issue, though one I suspect is blown out of proportion, is the fact that game reviews are supported by advertisers within the games industry. Everybody knows that major game publishers like Activision or EA buy ad space on review hubs like IGN or Gamespot. Because of this, there is a perception that reviews are being bought by publishers in order to make their games look better. It is not hard to suspect when sites like IGN release an article titled "Why Do People Hate EA?" and only ask Peter Moore, Chief Operating Officer of EA, for the reasons behind it instead of asking the aforementioned people. Again, I do not believe this is as serious an issue as people make it out to be. I think that the people who write these reviews generally stand behind them. However, it is still an issue worth bringing up and discussing, as it does have the potential to impact reviews and how the gaming press thinks about the games they cover. There are well-known cases of reviewers being forced into embargoes as a condition of receiving review copies of certain games, like how Konami forbade mention of the cut-scene length and install times of Metal Gear Solid 4. It is not an easy problem to solve, and there is no simple solution beyond not showing game advertisements, which would cut into profits.
One last problem, which I am far from the first person to note, is the over-inflation of review scores. A scale of 1 to 10 is often used to indicate the overall quality of a game in comparison to other games. On this kind of scale, 5 is denoted as average simply because it is in the middle. Anything above a 5 is above average and anything below a 5 is below average. This makes sense and is intuitive for the most part. However, this is not how review scores (which are a terrible way to handle reviews, but have been accepted as commonplace simply because of the fast-paced nature of today's society) actually work. If you go to the review-aggregate site Metacritic, you will find far more average or positively-reviewed games than negative ones. If scores are meant to rank games against one another, that distribution does not add up. When most reviews land above the midpoint, that level of praise becomes the new de facto average, and the scores should be re-centered around it. We are not seeing this, and I think I know why. The reason for this inflation of review scores (and the resulting loss of the credibility and weight that reviews carry) is that fans of franchises cannot tolerate reviews that are anything less than perfect or near perfect. We saw this when fans of the Uncharted series lashed out against reviewers for giving the upcoming-at-the-time third game an 8 out of 10, which is well above average on a 1 to 10 scale. The game had not even been released, yet people claimed that the reviewers had "no clue what they are talking about." It points straight at the source of many of the problems with game reviews.
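To illustrate what re-centering the scale would mean in practice, here is a toy example; the sample scores are invented, not pulled from Metacritic:

```python
# If most published scores cluster well above 5, then that cluster is the real
# "average," and the scale only regains meaning if we grade against it instead.
published_scores = [9.0, 8.5, 8.0, 7.5, 7.0, 9.5, 8.0]

observed_average = sum(published_scores) / len(published_scores)
recentered = [round(s - observed_average + 5.0, 1) for s in published_scores]

print(round(observed_average, 2))  # the de facto midpoint of today's reviews
print(recentered)                  # the same games, graded against that midpoint
```

On a scale that worked this way, an 8 out of 10 really would read as well above average, instead of as the default grade for anything competent.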
In fact, upon close scrutiny, many of the other problems that plague game reviewers and the reviews they write come down to the fans who read them. The reason reviewers are rushed to finish reviews before a game is even out is that fans do not want to wait. They want to see opinions on upcoming games as soon as possible. Fans cannot wait until popular consensus has arisen and critical analysis can be had, because fans do not want that. It is a desire for instant gratification, and for someone to validate their opinions of the series they love and hate, that drives them. A critical eye and valuable insight into the fine details of a game is not what the masses want. The request is for a "You are right" or an "I think you are wrong" (which will be perceived as "selling out"). The only problem that does not come down to the fans is the issue of advertisers, and that is complex enough as it is. This is frustrating for people like myself who want insight and critique of the medium. To improve the quality of reviews from major gaming journalists, we need to fundamentally alter the culture driving them. This is not an easy thing to do, as it would require much work and be met with much resistance. I do not even know if the effort would be worth it. But at the very least, this is a discussion worth having. This is something that needs to be said.

#35: What is Immersion and How Do We Achieve It?

August 22nd, 2012

Most of the people who play games agree that they are able to let us explore new and interesting worlds in ways that books or movies simply cannot, something I have discussed myself on several occasions. However, when we talk about this ability, the same word tends to crop up over and over again: "immersion." All gamers have at least a rough idea of what it is, but very rarely do we discuss the idea in any sort of meaningful way. This is my attempt to remedy that. My intention is to figure out how developers can increase the immersion of players in their games. First, let us define "immersion" for the purposes of this article. If we can nail down a definition, then it will be easier to have an informed discussion, because we will all be on the same page. Immersion is the truest form of a willing suspension of disbelief. It is when the player feels like he/she is an active participant in the video game despite knowing that he/she is not actually in the game. When someone is truly "immersed" in a game, they tend to think of the people and places depicted not as models and textures put together in a game engine with working physics and number-driven systems, but as people who live in a world and act with a deliberate and driven purpose. Designers and developers usually strive to achieve this feeling of "immersion" for the end user, and there are a number of tips and strategies they use to make it happen.
One of the simplest things a developer can do to facilitate immersion is to maintain an internal logic and consistency in the setting, characters, and plotline. Please note that this is not a call for realism in games (I have already discussed that). It is simply saying that there should be a rationale behind everything that is going on. The world must have its own rules and systems that it adheres to. As a general rule, if the world in question uses magic or technology that adheres to a specific system, then it must never deviate from that system without some sort of contingency set in place. For this example, let us say that we are talking about a fantasy world with magic built on a system of equivalent exchange (where every good thing that happens due to magic must be tempered with an equivalent negative side-effect and vice-versa). If, in this world, a sorcerer successfully revived his dead wife free and clear with no downsides, then the player would (rightfully) call foul. Something that incongruous would need either adequate foreshadowing that alludes to the possibility of someone doing this (a skilled and powerful sorcerer explains how it could theoretically happen sometime before the scene), a lampshade hung on it during the scene (someone points out in the middle of the resurrection that it should not be possible), or an explanation after the fact showing that the sorcerer technically did follow the rules and actually did sacrifice something dear to him in an equivalent exchange. These are all tools a skilled writer can use to explain away things incongruous with their internal systems and logic.
It is also worth noting that a designer does not need to worry about never breaking internal logic even once; it is bound to happen eventually. What they have to worry about is not breaking it too often and not doing it with major plot events. Players can generally forgive a few minor errors and even justify them in their minds. Take advantage of this fact too often, however, and players will begin to lose immersion. The threshold at which immersion is broken and disbelief is no longer suspended varies wildly from person to person, so a writer should be extremely careful when making major breaks with continuity. This internal logic also extends to characters and their motivations, so a writer needs to keep a character's personality and goals in mind when determining what that character should do in the plot. If game developers want to maintain the feeling of immersion, they need to define a rationale behind how the world works and maintain it to the best of their abilities. We accept that real-world logic does not always apply. The problem arises when a world's own logic no longer applies to itself.
While the previous tip applies to any form of media, this next one applies to games in particular. It is a very good idea to synergize story and gameplay as much as possible and avoid Gameplay and Story Segregation. One of the more immersion-breaking things a game can do is present the player with a situation that he/she could ordinarily overcome using conventional gameplay, only for the game to circumvent it somehow. One of the most well-known examples of this in action is the death of Aeris/Aerith in Final Fantasy VII. In Final Fantasy VII, player characters are killed in battle quite often (depending on the player's skill, obviously). To revive these characters, there is an item called a Phoenix Down, which revives them so that they can keep fighting. However, when Sephiroth kills Aeris in a cutscene, there is no ability to revive her by similar means. Her death is required for the plot to advance. They never once lampshade, subvert, or acknowledge the possibility of using items or healing spells, which is strange because Final Fantasy had done permanent death for a player character before. In the fifth game in the franchise, the player's party is attacked by the villain, ExDeath. To save the others, one of the party members uses his crystal to gain powers well beyond what humans are capable of. He fights ExDeath and holds his own despite the fact that his HP is at 0 and he should be dead. After the fight, the others try to heal and revive him with common spells and items the player is quite likely to have. They find that he exhausted his power so thoroughly that he is beyond healing. He is a dead man. Furthermore, unlike Final Fantasy VII, where Aeris dies and the player potentially loses a party member he/she trained at the expense of others, the designers of Final Fantasy V had the dead party member transfer his skills to his daughter, meaning that the player is not inconvenienced by the plot's insistence on killing off a party member.
Some people reading this might view this as an extension of my previous point about willing suspension of disbelief and internal logic. To that I say: I am glad you are paying attention, because you are correct. The problem is that many games simply do not take it into account. Developers have an irritating tendency to treat the plotline and setting of a game as separate from the gameplay, when that is simply not the case. Gameplay is a natural extension of the events at hand. It informs the player as to how the world they are exploring works. If those mechanics are incongruous with the story or the setting, the world looks disjointed and players will be more likely to see the cracks.
Another thing games can do to increase the likelihood of player immersion is to avoid something I learned about from Chris Franklin, also known as Campster, of Errant Signal fame (which you should be watching): "ludonarrative dissonance." In layman's terms, ludonarrative dissonance is when themes and morals present in the game's storyline are directly contradicted by the gameplay. This is not like the previous point, where internal logic is broken. When this concept is invoked, the game's world can (though it will not necessarily) still be following its own rules, but it fails to uphold its underlying moral message throughout. It can even present two opposing messages, one in the story and one in the gameplay, knowingly or not. This is often a problem in modern military-themed first person shooters. Many of them love to try to be serious commentaries on the harsh realities and repercussions of war. At the same time, however, they appear to revel in the bloodshed and slaughter of the hundreds of people the player must defeat in order to advance. These two facts are in stark contrast with each other. Developers cannot talk about loss while directing the player to kill thousands of people, no matter how "evil" those people are portrayed as being. The theme of the story is undercut by the morality presented in the gameplay, where killing all the enemies is presented as a "good" action because they are terrorists and/or in the way of the player's objective. Though it does not affect everyone, this kind of contradiction can be extremely jarring to some people, breaking their immersion and even their enjoyment of the game.
The last piece of advice I have for developers seeking to facilitate immersion is simply to make games that play well. There is a lot to be said for a game that is fun to play. When a player is having an engrossing and entertaining experience, they are much less likely to pick apart the plot and look for plot holes and logical fallacies or inconsistencies. On the other hand, if the game is boring and uninteresting to play, the player will more often than not examine the story in greater detail, since that will be the only thing keeping their attention. It is not always possible to plug every plot hole or account for every inconsistency, so the best solution is sometimes to just distract players with good gameplay (as loath as I am to admit it). If it is of high enough quality, then we are more than able to forgive a few mistakes or missteps.
Immersion is a difficult thing to achieve. It requires many interlocking systems and conditions to come together in complete synergy. Furthermore, it also depends on the player and his/her mindset, which is inherently fickle. It is not possible for a developer to create a game that is 100% immersive for everyone. All they can do is facilitate immersion by creating as coherent and interesting a game as possible. Focus on building and maintaining an internal logic in both the story and the gameplay, and on how they tie into each other. While this seems like a tall order, it is much simpler than one might think. All it requires is going in and thinking about the story. Writers should ask themselves if there is another, easier and more sensible way for events to unfold, and ask other people on staff with a critical eye to take a moment and look it over. Many series have loremasters and dedicated wikis that serve this very purpose. There really is no excuse for not doing something so incredibly simple.

#34: PC vs. Console, Which is Better?

August 15th, 2012

I have been playing games for almost my entire life. Games are very much a huge focus of mine, which is why I love to write about them. One thing you learn when you are part of this sub-culture is that the people who populate it tend to be very… tenaciously fanatic when it comes to the decisions they make. This causes gamers to debate a lot. One of the major debates that has cropped up more often of late is whether PCs or dedicated gaming consoles are superior. There are points to be made for either side. This week, I apply my analysis to this debate and try to come up with a satisfactory answer. I will look at several factors and judge whether PCs or consoles do better in each category. Once that is all said and done, I will render my final verdict. Since I am a console gamer who has only recently picked up PC gaming, there is a possibility of bias; consider that a forewarning.
The first category we will look into is Initial Investment. On the console side, the initial cost of investment is usually fairly low. Though consoles have had higher price points this generation (which is why the $600 PS3 could not move units), most of them hover around the $250 price point. This is because console manufacturers, with the exception of Nintendo, do not make money on the initial sale. In fact, each sale results in a slight loss for the company. They rely on the sales of game software (and Blu-Ray movies in Sony's case) to make up for the initial losses and eventually profit. In contrast, PCs tend to have a much higher initial investment. Even lower-end gaming PCs take an initial investment of around $500 nowadays. Higher-end gaming PCs will be very costly; the consumer will likely be shelling out $1000 or more in order to play top-of-the-line games. It makes sense. Unlike gaming consoles, where the company has a reasonable idea of what the end user plans to do with the machine, PCs have a significantly higher degree of uncertainty. Because of this, PC manufacturers and the distributors of their parts need to make a profit on the initial sale, and they do not have proprietary formats to license out for money (any PC will generally run any program from any CD/DVD if it has the power to do so). Naturally, this drives up prices for the hardware and gives consoles the advantage on this front.
The next thing to go over is the cost of games for the hardware. Most console gamers are aware of the ever-present $60 price tag on most major console releases. As many of them are also aware, that cost tends to be quite prohibitive. It is simply not feasible, especially in this economy, for a consumer to purchase all the games he/she wants. Even after a few years, prices rarely drop to a significant degree. There is the downloadable market, which offers a more flexible pricing system, but by and large most games are subject to the $60 retail price. This is not the case for the PC. Most PC games are purchased from services like Steam and Good Old Games; retail is by and large irrelevant for PC gamers. While AAA games still tend to release at $60, they are infinitely more susceptible to the market and tend to get marked down and put on sale at a higher frequency. A semi-patient PC gamer will always be able to buy games at a more reasonable price than a console gamer with the same level of patience. Even people who are not into PC games are aware of the massive sales that Steam throws semi-regularly. There is also the indie scene to make note of, which is more prevalent on the PC, allowing for more variety in games at a lower cost. All of this combined gives PC gamers a clear advantage when it comes to purchasing gaming software for their chosen device.
But purchasing a system and getting games for it are one thing. It is an entirely separate matter to make the games work on the player's system. This is a trivial matter for the consoles. Since every Playstation 3/Xbox 360 has the same system specs as any other unit, game developers can be reasonably assured of what the end user will have and can program their games and development tools with that in mind. This results in nearly the exact same performance on every console. Players do not have to fuss over system settings and compatibility. Any PS3 game will work on any PS3, for example (unless it is buggy, in which case every PS3, on average, will encounter the same amount of bugs/glitches). This is not necessarily the case with PC games. Unlike consoles, every PC has different specs from every other PC. The developers of these games have no idea what the end user will be capable of playing. This means that they have to publish minimum specs required to play the game and recommended specs to get the most out of it, shifting the responsibility for compatibility from the developers to the players. It is the player's job to make sure that he/she has a rig capable of playing the game. It also means that PC games have a tendency to be fussier than their console counterparts, making it necessary for users to go through settings and the occasional forum post if the game is not working. While developers are usually available for support (it affects their bottom line if you cannot play a game), the fans are ultimately the ones responsible for keeping their systems up to date and getting the game to function, giving consoles a clear advantage here.
Another crucial topic in a discussion like this is the control scheme behind each system. This in and of itself is a major topic of debate among gamers: the question of the console controller versus the keyboard and mouse. The controller is obviously a more accessible form of play for the common gamer. It is very easy to pick up a controller and play the desired game. There is also much to be said for continuity of controls between games. Games tend to have similar conventions regarding control schemes. While small adjustments between games will be necessary, more often than not games in the same genre will have very similar controls. Unfortunately, that simplicity can also become a downside. Certain genres simply cannot work with a controller. While controllers are much better for things like racing games, they make other genres like Real-Time Strategy nearly unplayable due to the number of inputs and the degree of precision required. On the keyboard and mouse side of things, we have a different case. The KB&M style of control is very precise. Depending on mouse sensitivity (which can usually be adjusted), it is much easier to make smaller and more accurate movements with a mouse than with a thumbstick. Furthermore, the keyboard and mouse is more malleable than a controller. It can be used in a number of different ways simply because there are more keys, and most PC games allow for custom controls and a greater variety of control schemes (at least more often than console games do). This is both a blessing and a curse. While KB&M allows for more precise and customized controls, it does not have the pick-up-and-play quality that controllers have. The end user will more than likely spend some time adjusting and fine-tuning his/her controls while a controller user spends that time playing the game. Going back to the PC versus console debate, PCs ultimately win out. Not because keyboard and mouse is inherently better (it depends entirely on what the end user wants from a control scheme), but because the PC has access to both: 360 controllers are compatible with PCs, and there exists software to do the same with PS3 controllers. Consoles are restricted to one of the two while PC gamers have access to both.
And now to answer the ultimate question: which is truly superior, PC or console? The answer is an extremely solid "What is your preference?" If you are a more technically minded gamer who wants to stay on the cutting edge, then PC is the right choice for you. For a more customized experience or one that offers a greater variety in games, PC is again the right choice. However, the console is better for those who do not want to worry too much about technical matters and just want to dive into a game. Price also needs to be a big consideration. PC is better for those who prefer to take a big initial hit and then take advantage of discounts and lower prices on the digital market, while consoles have a lower barrier to entry but more expensive games. That is ultimately why this debate continues to rage on. People have different definitions of what is "best" and thus judge it by different criteria. This is what fuels all the various "best ever" fanboy arguments. It is important for us to take this into consideration. If there is a game or system you do not like, keep in mind that the odds are that other people like it for different reasons.