Setting Up in Parts (Working Title)

Hello everyone once again! Summer is well and truly here. Hot weather, longer days, and apparently some football tournament is currently running? Bring out that BBQ kit you’ve kept hidden away since last year and crack open the beers. Time to enjoy the sun.

In this post, I want to discuss something that is becoming a trend within the videogame industry. Rather than release a game in its entirety, several developers are now releasing games in parts: ‘episodic’ content, where an entire game is broken into episodes released over a period of time.

As always, we will explore what this means, why companies are going this route, and look at a recent example: the game Life is Strange.

Why Episodic Content

Games, no matter how big or small the project, take time to develop. For the bigger and more ambitious projects, development times can easily be several years, so people can be waiting a long time before a game comes out. For example, Duke Nukem Forever, for a multitude of reasons, took almost literally forever before it finally came out. Perhaps that is an unfair example, as Duke Nukem Forever was plagued by development hell. More often than not, games in development get released without a hitch.


Happy 5th (Golden) Year!

Hello everyone, wherever you may be. Two posts in one month, what madness is this? It seems I waited until May for the first post of 2018, so it’s only fair not to keep you waiting ages for another.

Well, May is a very special time, as it is the month in which the blog was first started, way back in 2013. A lot has changed in the world since then, including within the world of videogames and technology. While content lately has been very slow, the blog is still active and I have more ideas for upcoming content. In fact, I am currently in the process of producing another post, due (hopefully) soon, so watch this space.

Ultimately, I am still proud to keep this going and steadily adding to it. It is there for you as it is for me. So here’s to 5 years. I hope you join me for number 6.

5th birthday

To Give It The Cold Reboot

Hello everyone and it seems we are making good progress through 2018. But I have a question. What do you do when you see bread that has gone stale and mouldy? You throw it out and hit the shops (unless eating mouldy bread is your thing?). In this post, we discuss what it means to ‘reboot’ a series and why it happens.

In need of a face-lift

Have you noticed how, in an established series, there can suddenly be a complete wipe of everything that’s happened before, leaving you wondering what just happened? Everything seems different to what you’ve seen before, as if someone flicked a switch somewhere. This is known as a ‘reboot’: a refreshing of all continuity within an established series, a reimagining and restructuring of its characters, timeline or story from the beginning. Think of it in terms of computers: to reboot a computer is to restart the operating system.

There are a couple of reasons why the decision to reboot a series is made. A reboot can bring about new (or renewed) interest from fans to a series that has started to grow stale and tired. As a series goes on and on, keeping audience retention becomes harder if audiences start to get disinterested with the series. When this happens, the decision to ‘reboot’ is usually not far off.

In films, where reboots often occur, a reboot gives the series a chance to be reinvented, allowing new actors, locations and stories in an effort to refresh things and attract new audiences. Long-running film series are most at risk of being given a reboot.

The James Bond film series, running since 1962, went through a major reboot with 2006’s Casino Royale. The previous entry, Die Another Day, had been criticised for being saturated with product placement and for over-relying on gadgets and special effects at the expense of any sensible plot. Having seen the film, I have to agree. The film started off ‘believable’, but the plot was discarded in exchange for over-the-top action sequences, such as the shootout between Bond’s ‘invisible’ Aston Martin and Zao’s pimped-up Jaguar, which dragged on far too long. Did I also mention the film was product-placement heavy? After the mixed reviews, the filmmakers, Eon, decided to reboot the series and go back to simplicity. With this direction also came a change of Bond himself. Pierce Brosnan left (I personally felt he was a great Bond) and in came Daniel Craig.

Other media sources can also be rebooted such as TV series, books, and of course, videogames. As this blog primarily discusses videogames, as a case study, there is one particular game series I would like to focus on. It has become a legendary series that has long cemented its place within the industry. That series is Doom.

The Doom Beginning

At its core, the Doom series is a first-person shooter in which the player moves from room to room, shooting hordes and hordes of demons and undead from Hell in order to survive. The player is typically an unnamed marine, popularly known as the ‘Doom Guy’, relying only on his tenacity and an array of advanced weaponry for survival. While the concept may seem generic now, when the first Doom game was released in 1993, it was considered revolutionary in establishing the first-person shooter genre we have come to know.

One of the groundbreaking aspects of the original Doom was its game engine. Although it looks outdated now, for its time it offered relatively advanced and realistic 3D graphics. The engine included full texture mapping, varying light levels, and height differences between in-game objects, all of which greatly enhanced Doom’s visual highlights. The use of darkness acted to confuse players and heighten tension. Various in-game objects were dynamic, such as moveable platforms, floors rising to form staircases, and doors opening and closing.


Doom was technologically impressive and a critical and commercial success

Although appearing to be 3D, the graphics produced by the Doom engine were not true 3D. The world is internally represented on a single flat, 2D plane, with height information stored separately. This is most obvious when moving and turning: as the player turns about, the engine ‘compensates’, projecting walls and objects so that they appear three-dimensional. In other words, the engine used ‘graphical trickery’ to fake a 3D view, which is what allowed the game to run smoothly on home computers of 1993.
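As a rough illustration of that trickery, here is a toy sketch (not id’s actual renderer; the function and numbers are invented for this post): walls live on a flat 2D map, and the apparent third dimension comes from drawing each vertical wall slice taller the closer it is to the player.

```python
import math

# Toy sketch of the 2.5D idea behind the Doom engine (not id's actual
# code): the map is flat and 2D, and "3D" walls are faked by drawing
# each vertical wall slice taller the closer it is to the player.

def wall_slice_height(player, wall_point, view_angle, screen_height=200):
    """Apparent on-screen height of a wall slice at wall_point.

    Distance is measured along the player's view direction, which is
    also what avoids the 'fisheye' warp a raw straight-line distance
    would cause as the player turns.
    """
    dx = wall_point[0] - player[0]
    dy = wall_point[1] - player[1]
    # Perpendicular distance along the direction the player is facing.
    perp_dist = dx * math.cos(view_angle) + dy * math.sin(view_angle)
    if perp_dist <= 0:
        return 0  # behind the player: nothing to draw
    return round(screen_height / perp_dist)

# A wall twice as far away is drawn half as tall:
near = wall_slice_height((0, 0), (10, 0), view_angle=0.0)  # 20 pixels
far = wall_slice_height((0, 0), (20, 0), view_angle=0.0)   # 10 pixels
```

The height-inversely-proportional-to-distance trick is cheap enough to run per screen column, which is why this style of rendering was feasible on 1993 hardware.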

When Doom was released, it received critical acclaim, to the point where it was considered one of the most influential titles in the industry, with particular praise for the gameplay and graphics. In a poll conducted by GameSpy in 2001, Doom topped the list. Similarly, in its 10-year anniversary issue, PC Gamer considered Doom the most influential game of all time. The game sold an estimated 2-3 million copies through to 1999. Its developers, id Software, jokingly said they expected Doom to be ‘the number one cause of decreased productivity in businesses around the world.’

First Reboot: Doom 3

The series had been an immense success, with the style of Doom firmly established. By 1997, four main Doom games had been released. The gaming community was well aware of what Doom was and the influence it had brought to the industry and community alike. The fast-paced, run-and-gun nature inspired many other games; as a testament to Doom’s popularity, many of these were simply known as ‘Doom clones’. Id Software also licensed its game engine to other developers, who released their own titles using the same technology.

It was approaching the year 2000 and id Software had just finished its latest project, Quake 3 Arena, a multiplayer-orientated shooter. But it was at this time that the company reached a ‘creative crossroads’ as to where it should go next. As Joseph Cirillo writes, ‘id needed to make a decision on what their next great game would be and at this point had no direction to go into. The only thing they knew for certain was that the focus for this next game would be single player. There were complaints from the press and even the players over the fact that Quake 3 Arena was a multiplayer only title, and id was ready to go back to doing a well organised single-player game.’

With each iteration of Doom, the formula was usually the same run-and-gun action you’d come to associate with the series, just with better graphics each time. Id Software had a reputation for going back to what it knew best. By the fourth Doom game, it seemed the formula was getting tired and worn out, as were some of the employees. Project manager Graeme Devine then proposed an idea to id Software’s owners, Kevin Cloud and Adrian Carmack: to develop a role-playing/action game called Quest.

Despite the owners liking the idea of trying something new, Quest never materialised because many at the company opposed it. One of the arguments was that id Software was known for action games, and developing a role-playing game was unlike anything it had ever done. ‘It would also mean they would cater to a whole new set of players, and these gamers wouldn’t know who id Software was and would be sceptical of their game’ (Kent, Kushner).

One of the chief employees to argue against Quest was none other than game engine designer John Carmack (who you may remember from this post about virtual reality). By 2000, technology had advanced considerably since the original Doom. Carmack had been researching the latest tech to see how far he could push the next graphics engine, and what he learned was promising. He believed they could create a new Doom that took advantage of these technologies, with ultra-realistic settings and locations that took Doom to a whole new level, allowing them to do things they could never have done in 1993. A technology enthusiast, he sent an internal company plan in June 2000 strongly advocating a remake. Many of his colleagues agreed that a remake was the right decision.

However, Kevin and Adrian hated the idea of doing a remake. As Joseph Cirillo writes, ‘In their minds Doom was perfect and if they tried to change the perfection of Doom by making a new game that in any way wasn’t like the first it could very well alienate their fan base from them. Their games had catered to a specific range of users, and those users stayed with them throughout the whole company’s life span.  If they created a game that was unlike anything they had done before, it would separate id from their fans and could cost them thousands of dollars in unsold games.’

Carmack, along with several other colleagues, presented the two owners with an ultimatum: either they would allow the team to remake Doom, or they would leave. Carmack did not want to purposely create divisions; he had the company’s interests at heart. Doom had made id into the company it was, and sales of a new Doom built on the latest tech could be lucrative. After this meeting, both Kevin and Adrian reluctantly agreed. Instead of their preferred Quest, id’s next project would be Doom 3.


John Carmack, a veteran of id Software and one of the main reasons why Doom 3 was developed instead of Quest

Doom 3 is instantly different from previous titles in the series, most obviously in its graphics. The game was released in 2004 and it looked stunning. The graphics had a startlingly realistic feel which, coupled with the lighting and shadows, made the setting unnervingly creepy and life-like. But in terms of gameplay, the biggest difference was that it now featured a dedicated story.

Story in Doom had never been apparent or necessary before, yet with the direction of Doom 3, having a discernible plot along with the modern graphics, visuals and sound would help to create a truly unforgettable experience, which was what Carmack was looking for. The theme of the game also shifted. Rather than the fast-paced, run-and-gun people knew, Doom 3 focused on survival horror, fighting hordes of demons and undead in dark and dingy environments.

I have played Doom 3 myself many times and I fondly remember being apprehensive and on edge, but also fully taken in by all the visual aspects of the game. I can truly say that I felt the same way as the many media publications that came to see the in-progress game at E3 2002. As Cirillo explains, ‘during the twelve minute demo the press was treated to what could only be described at the time as a quality presentation on par with what the movie company Pixar had done. The game had surpassed all expectations and saw numerous awards from E3 for the year.’ It won five awards that year, and it hadn’t even been released yet!


Doom 3 displayed graphically impressive visuals, leading to disturbingly life-like details, which also highlighted the survival-horror theme of the game

A lot of the core elements carried over to Doom 3. You still play a silent protagonist – the Doom Guy – and you still move from place to place killing hostiles, albeit at a slower pace. The setting was a research facility on Mars, where the science team was working on teleportation tech. Testing it out multiple times, they unwittingly stumble into the realm of Hell…and literally all Hell breaks loose. Naturally being one of the few survivors, the Doom Guy must take it upon himself to survive using his skills and weaponry, hallmarks of the series. It turns out an ancient civilisation on Mars had been battling the Hellish hordes eons before. In an effort to stop the invasion, they had sacrificed their entire race, with their ‘souls’ being transferred to a weapon called the Soul Cube. This weapon was wielded by their mightiest warrior, who succeeded in sealing the portal to Hell, but at the cost of his life. Somehow, the Soul Cube became trapped in Hell itself. The player fights his way through, frees it, and uses it to destroy Hell’s own mightiest warrior, the towering Cyber Demon.

The anticipation during development, and then the success upon release seemed to justify Carmack’s decision to create a remake. Reviews were positive and focused much on the graphics and visualisation. GameSpot commented that the environments were ‘convincingly lifelike, densely atmospheric, and surprisingly expansive’. Several reviews noted on the dedicated storyline, with IGN writing ‘the UAC base also has a very worn and lived-in feel that adds to the realism.’

While Doom 3 was released in 2004, there would not be another Doom title for over a decade (discounting its expansion and the BFG Edition). Work had begun on a new title, tentatively called Doom 4, but this never came about. Instead, after many long delays, we got the second reboot of the series.

Second reboot: Doom (2016)

Doom 3 had set many benchmarks in the industry and community, and another Doom game seemed likely to build upon that success. That game was Doom 4, announced in May 2008. John Carmack stated the game would look ‘three times better’ than their recent project, Rage. It was planned to run on id Tech 5, the latest version of the engine.

The initial development of Doom 4 was supposed to be a reworking of Doom 2. In terms of story, Doom 4 would have taken place in a more urban environment; conceptual art and screenshots suggested the player was caught in the middle of a demonic invasion of Earth. It would have been a literal hell-on-earth scenario.

However, things slowly started to go south during production. The first sign that things were not going well came in 2013: Doom 4 had been announced in 2008 and nothing had come out in five years. In April of that year, an exposé by Kotaku documented that Doom 4 was fighting a different kind of hell, namely development hell. Jason Schreier (2013) writes, ‘I’ve talked to four people with connections to the Id Software-developed game, and they’ve described a studio plagued by mismanagement and lack of communication that has frustrated staff both at Id and Id’s parent company, ZeniMax.’

The development process was claimed to have suffered from mismanagement, with the proposed story described as ‘lame’ and ‘a mess’. Several proposed scripted sequences drew (unwanted) comparisons with the Call of Duty franchise, leading to the tongue-in-cheek nickname Call of Doom. Tim Willits, working on the project, cited a ‘lack of spirit’, saying, ‘every game has a soul. Every game has a spirit. When you played Rage, you got the spirit. And (Doom 4) did not have the spirit, it did not have the soul, it didn’t have a personality.’ To strike a further blow, id mainstay John Carmack himself would leave to join Palmer Luckey’s Oculus.

Once it became clear that the team was unsatisfied with the direction Doom 4 was heading, they decided a reboot was the best option. The project’s lack of personality was thought to be the primary reason for this decision: the product they were making wasn’t good enough for them, and not the product the fans wanted. Mismanagement and lack of attention had allowed Doom 4 to lose its way, which in turn hit team morale hard. This was a chance to wipe the slate clean, but it meant the past couple of years had been for nothing.

Despite this, the team was excited about the prospect of doing a reboot. Schreier (2013) writes, ‘at one point, a source told me, the Doom 4 team had a big meeting in which company leaders talked about what Doom meant to them. John Carmack got up in front of everyone and said something like, “Doom means two things: demons and shotguns.”’

Nasty enemies and nastier weapons: quintessentially, that was the Doom that players knew and loved. With a reboot finally agreed, all other projects were put on hold so that every effort could go into this project to make up for lost time. As if drawing a line in the sand, the project was renamed from Doom 4 to simply Doom. Executive producer Marty Stratton explained the name change: ‘it’s an origin game, reimagining everything about the originals.’ It was about going back to what made Doom, Doom. The feel of the game would no longer be horror-orientated as Doom 3 had been. The setting was back on Mars, in a research facility experimenting with ‘energy derived from hell’. The player would again play the Doom Guy (or Slayer) and fight demons and undead, and that formed the basis of the story. The team believed the actual story was not so important and put little emphasis on developing it. They did add background information, such as terminals the player could read for context, but otherwise the team made it clear that plot was not a priority.

Instead, the team focused on ‘movement’ as the dominant pillar. The previous Doom games had always been about moving from one area to another, and the horror-orientated theme of Doom 3 had slowed the pace considerably. Weapon reloading was removed (allowing continuous shooting), and maps and levels were designed to limit opportunities for the player to hide; the player was always expecting a fight. The game would deliberately be ‘over-the-top’, reflecting the team’s desire to create the ultimate first-person shooter experience. The audio was produced with this in mind: the soundtrack featured heavy-metal-inspired audio (heavy drumming and high tempo) to heighten the moment.


The moment when the chainsaw was used to dish out bloody pain upon enemies drew the loudest cheers from the audience during the E3 showing

All of this was best summed up when Doom was showcased at E3 2015, where a demo was shown to a full house. The demo showed the player moving fast and swiftly, double-jumping, vaulting and mantling obstacles in environments where he was constantly under attack, highlighting how movement was at the forefront of the design process. There was also a greater focus on combat. Along with the standard combat mechanics, there were several instances where the player literally killed enemies with their bare hands. The audience cheered and whooped at these ‘glory kills’, such as when one unfortunate demon had its heart ripped out. They cheered again each time the player picked up a new weapon, from shotgun to chainsaw, especially when the chainsaw was used to gorily shred several enemies to bloody pieces. Boy, did they love that one.

On presenting the demo, Marty Stratton briefly explained what the team were looking for when designing the game. In his statement, he said something very interesting: ‘the foundation of any Doom experience – past or present – is unquestionably combat that’s centred around three things: badass demons, big, f-ing guns (he really wanted to say it!), and moving really fast.’ He had just summed up what Doom was all about, and why they made this second reboot the way they did.


Doom (2016) was a return to the earlier games of the series, centring on badass enemies, big f-ing guns and movement at its core. It was a critical and commercial success

The demo really showed the world that they had kept this mantra to heart. When it was shown, it was like being at a football match; the fans lapped it up and enjoyed what they saw. I bet everyone at id had a massive grin on their faces. I have seen the demo and it ticked all three boxes. It really was a reboot, a return to the series’ roots, and I could certainly understand the reaction, for I felt much the same way. It was almost majestic. Doom was back, bloodier than ever.

Back to Basics

When Doom was released in 2016, it was a commercial success; by July 2017, 2 million copies had been sold on PC. Critically, it was praised for its fast-paced gameplay, styling and visuals. GameSpot said the ‘reboot captured the spirits of the older games, while refining them with modern elements.’ Likewise, Game Revolution called it ‘top-notch.’

I used Doom as an example of a reboot because it is recent and because the Doom series as a whole is so popular. We explored the creative thought process and the events that ultimately led to it being rebooted twice. Id Software put a lot of effort into Doom 4, only to discontinue the project and literally start all over again after becoming dissatisfied with its progress; on this occasion, creative differences were the reason. They lost a lot of time, but it was worth it: in the end, the second reboot proved to be a hit.

As we discussed earlier, reboots happen for several reasons (as with Doom 3 and then Doom), and to varying degrees of success. A main factor is dissatisfaction from audiences, critics and even the series’ producers themselves with how a long-running series has gone. Think again of Casino Royale, which was the result of critical and commercial disappointment with the previous entry, Die Another Day.

My intention was not necessarily to discuss whether rebooting is a good or bad thing, although reboots generally occur due to negative circumstances. When the decision to reboot is made, however, it is not always a bad thing to go back to basics in order to start over. It gives the series a chance to reinvent itself. This was most apparent with Doom (2016), which was reminiscent of the earlier titles. Sometimes it is necessary to give it the reboot.


Joseph ‘Maniac’ Cirillo III’s article, ‘The Making of Doom 3’


David Kushner’s book (2003), ‘Masters of Doom’

Jason Schreier’s (2013) article, ‘Five Years And Nothing To Show: How Doom 4 Got Off Track’

So Here It Is…

Hello everyone,

You know what time of year it is? It’s the ‘C’ word again. Christmas is a wonderful time for many reasons. It’s a chance to have the family over, to tuck into that turkey and roast potatoes (brussels sprouts, anyone?), and to unwrap the presents that you, on the outside, love, but on the inside you’re already thinking of taking back to the store.

So another year almost ends, and I’m sure someone will do one of those obligatory montages of what 2017 had to offer, but I like to keep things simple around here. Wherever you are and whatever you’re doing, I would like to thank you for sticking around even when content has been sparse. I know, I blame real life, but as long as this site is live, it means it’s still kicking. The world of videogames never sleeps, which means we can expect even more exciting things in 2018. But let’s leave that for another time.

I would like to wish you all a very happy Christmas. Remember folks, there’s another year to come soon so go easy on the booze.


Happy holidays!

Let The Money Flow Through You…

Hello everyone once again. In the long time I’ve been away, it seems the world keeps moving on (for better or worse, depending on how you look at it). In the world of videogames, things have certainly moved on. One topic I would like to focus on is one we have visited previously: it is time we returned to the dreaded micro-transaction. It was always going to be an issue of money. I wanted to highlight this point because, recently, it came to a head with the release of Star Wars: Battlefront II and the uproar that followed shortly before it went on sale. But first, a little background…

What is Star Wars: Battlefront II?

Star Wars: Battlefront II is the sequel to Star Wars: Battlefront, and also acts as a tie-in to the highly anticipated new film, The Last Jedi. In the same vein as the Battlefield series (it is made by the same developers), it pits players in two teams against each other in various locations from the Star Wars film universe.

In terms of gameplay, there is not a lot different from before, although the sequel boasts extra content from the upcoming film. There are different game modes to choose from as the two teams fight it out for supremacy. Players can also play as characters from the films, either as ‘light-side’ heroes or ‘dark-side’ villains, such as Luke Skywalker, Darth Vader, Han Solo and Rey. Notably, there is now a dedicated single-player campaign, which was absent from the previous game. This adds a new flavour and gives people who want to play offline something extra.


As with the previous title, players get to fight other players in Star Wars locations. In this screenshot, it is Naboo, which featured in Episode 1: The Phantom Menace

However, it is not the actual game we are concerned with, but rather the reaction to some of its design decisions, which has been, unsurprisingly, negative. In fact, the amount of negativity (in the politest sense) has been staggering, so let us take a moment to recap what this all means.

Shut up and take my money?

Why are people getting angry over a videogame? There are plenty of other things to get angry about in real life, no doubt about that. In terms of this discussion, people are angry over what they perceive as pure greed and money-grabbing from a company that already has a bad reputation in the industry. So what exactly are people angry about with Star Wars: Battlefront II? The easiest answer: the not-so-subtle monetising of the game.

Micro-transactions have steadily crept in to become almost an industry standard, and it doesn’t sound like they’re going away. Frequently used in free-to-play games, micro-transactions help keep players interested by allowing them to upgrade their character or profile using real money. How much is spent depends on how much the player is willing to part with, or how committed they are to the game.

They are especially common in the massively multiplayer online role-playing game (MMORPG) genre, where players are invested in their character as they attempt to level up and increase their skill set. To make their character better, players more often than not have to invest a long time playing to earn enough experience points to rank up. This is known as a ‘grind’, where players work their way from the bottom rung of the ladder to the top. The earliest levels may come easily, but past a certain point the time it takes to rank up gets longer and longer. In every sense of the word, a grind can really test a player’s resolve. It is very easy for someone to lose interest and move on to other things, and that is what game developers are fearful of.

An alternative that lessens the grind is to let players pay for upgrades or items using real money, such as experience-point bundles or special items. A player willing to do this can level up at a faster rate, depending on how much is spent. This is textbook micro-transactions: transactions within the game itself. Micro-transactions are common, and vital, in free-to-play games as they fulfil two things: 1) sustaining player interest and 2) sustaining the game (and game company) financially. One person buying in-game may not make much difference, but if tens of thousands are, the company can stay in profit.

In the post I did about this topic, I used the free-to-play game World of Warships. While not an MMORPG as such, the same principles of micro-transactions apply. Players can play the game without spending a penny; however, their grind will take much longer than that of someone who purchases a ‘premium’ account, which gives 50% extra in-game credits and experience points. It is feasible to reach the best ships simply by playing the game, but it could take several months. Checking the premium shop of World of Warships, players can choose from several bundles to help with their progression.
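To put that 50% bonus in numbers, here is a back-of-the-envelope sketch. The XP total and earn rate are invented placeholders; only the 1.5x multiplier comes from the premium account described above.

```python
# Rough arithmetic on the 50% premium bonus. XP_NEEDED and XP_PER_HOUR
# are made-up placeholder numbers for illustration; only the 1.5x
# multiplier is the actual premium bonus discussed in the text.

XP_NEEDED = 1_000_000      # hypothetical XP to reach a top-tier ship
XP_PER_HOUR = 2_000        # hypothetical free-account earn rate
PREMIUM_MULTIPLIER = 1.5   # premium accounts earn 50% extra

hours_free = XP_NEEDED / XP_PER_HOUR                           # 500 hours
hours_premium = XP_NEEDED / (XP_PER_HOUR * PREMIUM_MULTIPLIER)

# Whatever the real numbers are, a 50% boost always cuts the grind
# to two-thirds of its free-account length.
```

That constant two-thirds ratio is the point: the boost shortens the grind noticeably, but never eliminates it, which keeps the incentive to keep paying.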


World of Warships Premium shop. Here, players can choose from specific packages for different Premium Account lengths

For premium time, the cheapest option is 93p for one day, going all the way up to £68.78 for an entire year. Alternatively, players can spend money on doubloons, which act as a ‘gold’ currency for buying other in-game items. A bundle of 30,500 doubloons costs £85.55. It is hard to justify spending that much money, but there must be some takers. From a business point of view, it is entirely understandable to implement a form of micro-transaction as a means of profit and sustainability. It is when it is overdone that it becomes problematic. Asking someone to part with their hard-earned cash is never easy, especially when they have already paid for the game itself.
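Running the two bundle prices quoted above through a quick comparison shows how the pricing nudges players toward the longer commitment:

```python
# Per-day cost of the premium-time bundles quoted in the text.

ONE_DAY_PRICE = 0.93     # £0.93 buys 1 day of premium time
ONE_YEAR_PRICE = 68.78   # £68.78 buys 365 days

per_day_single = ONE_DAY_PRICE
per_day_yearly = ONE_YEAR_PRICE / 365   # roughly £0.19 per day

# Buying days one at a time costs close to five times as much per day
# as committing to a full year up front.
savings_ratio = per_day_single / per_day_yearly
```

The steep per-day discount is a classic commitment incentive: the headline £68.78 looks expensive, but it makes every smaller bundle look poor value by comparison.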

Stash that Loot

A by-product of micro-transactions is a system known as the ‘loot box’. Loot boxes are used to hand out randomised in-game rewards to players, most often in the MMORPG genre. Essentially, a loot box is an in-game crate that contains items from a specific set. They are generally given out during general play or for completing specific tasks, but players can also buy them using real money.

Items contained within these loot boxes vary but are often graded by ‘rarity’; an incredibly rare and powerful item could be given, but the chance of that is usually minimal. While the set of items given is randomly selected, it can come with certain guarantees, for instance that it will contain at least one item of a certain rarity or above. Rarities often correspond to a colour scheme, heightening the excitement of revealing the items.
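To make those mechanics concrete, here is a hypothetical loot-box draw. The rarity tiers and weights are invented for illustration (they are not any real game’s drop rates), but the structure mirrors what was just described: weighted random picks plus an ‘at least one item of a certain rarity or above’ guarantee.

```python
import random

# Hypothetical drop table -- the tiers and weights are invented for
# illustration, not taken from any real game.
DROP_TABLE = {
    "common": 70,
    "uncommon": 20,
    "rare": 8,
    "legendary": 2,
}
RARITY_ORDER = ["common", "uncommon", "rare", "legendary"]

def open_loot_box(n_items=3, guarantee="rare", rng=random):
    """Draw n_items by weight; if none meet the guarantee, upgrade one.

    The upgrade step models the 'at least one item of a certain
    rarity or above' rule described in the text.
    """
    rarities = list(DROP_TABLE)
    weights = list(DROP_TABLE.values())
    items = rng.choices(rarities, weights=weights, k=n_items)
    floor = RARITY_ORDER.index(guarantee)
    if all(RARITY_ORDER.index(item) < floor for item in items):
        items[rng.randrange(n_items)] = guarantee  # pity upgrade
    return items
```

Note how the guarantee works against the player psychologically: every box is certain to contain something that *looks* special, even though the genuinely valuable ‘legendary’ tier stays rare.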

The multiplayer game Team Fortress 2 has an ‘item drop’ system, implemented around 2010, where players are randomly given items through play. At first these items were weapons or cosmetic hats, but they have grown to include supply crates. I played the game when this was first introduced and have seen more and different crates added over time. For example, during December, special Christmas-themed crates were introduced containing festive items.

The feature is always the same: each crate is pre-loaded with a specific item set, with rare items listed in a different colour. Upon opening the crate, a random item is given out based on a probability system, so a common item is far more likely to be given than a rare one. Through normal play, one would accumulate several crates in their inventory to open.


Team Fortress 2 crate. Each crate contains a specific list of possible items. A ‘key’ is needed to open these

But the catch is this: in order to open these crates, one needs a ‘key’. Keys are a further consumable item and the only way to open crates. You can of course guess that the keys themselves are not free. Currently, a supply crate key sells for £1.89. That may not sound like much for a one-off, but a one-off is exactly what a key is: keys are single-use items, and once a key is used, another must be purchased. If a player wanted to open 5 crates, the total cost of buying 5 keys is £9.45. You could get a meal deal for less. Remember, too, that whatever item you get is randomised, so a player could spend money on a key only to receive something they already owned.
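The economics can be sketched with a little arithmetic. Only the £1.89 key price comes from the text; the 1% drop chance below is purely an illustrative assumption.

```python
KEY_PRICE_GBP = 1.89  # price of one single-use key (from the text)

def cost_to_open(n_crates):
    """Total key cost for opening n crates -- one key per crate."""
    return round(n_crates * KEY_PRICE_GBP, 2)

def expected_keys_for_item(p):
    """Expected number of crates opened to pull a specific item that drops
    with probability p per crate (geometric distribution, mean 1/p)."""
    return 1 / p

print(cost_to_open(5))  # 9.45 -- the five-crate figure above
# For a hypothetical 1%-drop item, the average spend is ~£189:
print(expected_keys_for_item(0.01) * KEY_PRICE_GBP)
```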

The player’s inventory is managed in server databases run by the game’s developers or publishers. This allows players to view the inventories of other players and arrange trades, or simply to showcase what they own. Valve, the game’s developer, reported a 12x increase in player count after Team Fortress 2 became free-to-play. The in-game store supplies a whole range of items to buy, and with the game’s huge player-base, this alone is probably enough to sustain the game.

A Place of Scum and Villainy

Loot boxes (and micro-transactions, for that matter) may offer replay value, but they have come under criticism and controversy. Loot boxes have been said to contribute towards videogame addiction as part of a compulsion loop, which in turn could lead to gambling problems. The pull of getting that rare item is so strong that people are willing (and have proven willing) to fork out lots of money to get it.

This forking out of money has given rise to the term ‘pay-to-win’. Pay-to-win, generally a pejorative, describes players paying for upgrades or items to gain an advantage over those who have not. Games that implement micro-transactions are often criticised for promoting this mechanic. ‘The allegation that a game is pay to win can still carry some weight even now, when games are absolutely riddled with it. It is an old hatred, an established fear in gaming, the sense that your opponent has an advantage because he or she got their credit card out. For all the negative connotations of gamer behaviour that have come up over the years, video games do try to remain a refuge of fair play, albeit increasingly unsuccessfully. The concept of pay to win undermines that fairness to the core’ (Hartup, 2015).

Some players pay to ease the grind of the game, which is not so much of an issue. Given enough time and effort, players who did not pay could complete the same grind, just not nearly as fast, and those willing to pay are generally entitled to what they get, having shelled out more money. For cosmetic items, like the hats in Team Fortress 2, this poses no problem. The problem arises when people pay for items that tilt the balance of the game in their favour, such as better weapons or abilities that become a clear advantage over others; that is where pay-to-win gets its name. It is a fine balancing act for the games company to keep everyone on side, because annoying one set of players is likely to be bad PR.

Before we continue, let’s preface this by saying that Star Wars: Battlefront II’s publisher is none other than EA. If you didn’t know already, EA has had plenty of bad press over the way it monetises its games with in-game purchases. The previous game had a ‘season pass’: paying for it granted access to all the downloadable content developed and released afterwards, creating a divide between those who bought it and those who didn’t. This time round, EA decided to implement micro-transactions to which everyone has access, including loot boxes. Within these loot boxes is a feature called ‘Star Cards’, which act as booster upgrades for specific game classes; the rarer the card, the better its bonus. The higher-tiered Star Cards give direct advantages to players who pay, a classic example of pay-to-win. The alternative is to spend many, many hours playing to accumulate enough credits to unlock special hero characters like Luke Skywalker.


Battlefront II Star Card, used to grant special abilities to players

Not surprisingly, this was met with an overwhelmingly negative response. A Reddit poster complained that they had spent $80 on the deluxe edition of the game and still found Darth Vader locked. Someone calculated that unlocking Darth Vader or Luke Skywalker would take roughly 40 hours of play; for scale, a full-time working week is about 37.5 hours. Alex Newhouse (2017) explains, ‘some fans began expressing frustration and anger toward the game when it came to light that you could only gain access to certain heroes through loot boxes. Further, rough calculations estimated that it would take upwards of 40 hours to earn enough in-game currency to purchase the boxes – meaning that you were either in for a long grind, or you could pay real money to get more loot boxes to get in-game currency more quickly.’

In response to the situation, EA took to Reddit, a social news platform, to address the backlash with this statement:

‘The intent is to provide players with a sense of pride and accomplishment for unlocking different heroes. As for cost, we selected initial values based upon data from the Open Beta and other adjustments made to milestone rewards before launch. Among other things, we’re looking at average per-player credit earn rates on a daily basis, and we’ll be making constant adjustments to ensure that players have challenges that are compelling, rewarding, and of course attainable via gameplay.’ – EA, November 2017.

This statement has gone on to become the most downvoted post in the history of Reddit, with somewhere in the region of 674,000 downvotes. Once the issue became known to the videogame community, the wave of negativity snowballed. To many, the statement came across as a feeble excuse by EA, invoking ‘hard work’ and ‘reward’ as motivating factors for unlocking special characters that would take an obscene amount of time to earn. There is nothing wrong with earning rewards through general play, but it felt as though players were being quietly encouraged to pay to obtain them. Part of the appeal of Battlefront II is the ability to play as the film series’ iconic characters, and if those take 40 hours of play to unlock, that would put many off.

The upgrade system in the game has been described as notoriously complex.  ‘Battlefront II’s biggest problem might be the sheer disconnect between what you do in the game and how you progress. It’s a structure that ultimately robs players of feeling like they’ve accomplished something. In Battlefront II, every class and hero character has a variety of equippable “star cards,” which are modifiers and upgrades that players can choose to equip when they run into battle. Players can equip up to three star cards at a time, which grant abilities like alternate weapons, increased damage, or more durable starfighters. The problem is, Battlefront II buries the ability to unlock star cards behind loot boxes, crates of in-game items with randomized contents that players typically can earn through playing the game or with actual money. It’s still a pitifully slow rate. I can use crafting parts to manually unlock cards, but those are also only dispensed through loot boxes, purely at random, and are the only way to unlock the fourth and final level of a particular card’ (Gartenberg, 2017).

That’s No Moon

EA is not exactly the most-loved name in the videogame industry. They are no strangers to negative feedback, but I’d wager even they were taken aback by just how strong the response was. When a post written by EA becomes the most downvoted in Reddit’s history, you know the gravity of the situation. By the end of November, EA had lost $3 billion in stock value. It was no surprise that EA later changed its stance, lest it lose even more face. No company could receive that much backlash and do nothing about it.

In the wake of such negative feedback and criticism, the game’s developer, DICE, released a statement from its executive producer, John Wasilczyk: ‘unlocking a hero is a great accomplishment in the game, something we want players to have fun earning. We used data from the beta to help set those levels, but it’s clear that more changes were needed. Based on what we’ve seen in the trial, this amount will make earning these heroes an achievement, but one that will be accessible for all players. It’s a big change, and it’s one we can make quickly. It will be live today, with an update that is getting loaded into the game.’

‘In response to this, DICE is dropping the number of credits you need to unlock “top heroes” by 75 percent; Luke Skywalker and Darth Vader can now be unlocked at 15,000 credits, while Emperor Palpatine, Chewbacca, and Leia Organa can be unlocked at 10,000 credits. Iden Versio can be unlocked with 5,000 credits under the new system.’ In other words, they reduced the credits needed to unlock the game’s iconic characters. Additionally, one day before the initial release, EA temporarily disabled micro-transactions.
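The arithmetic behind the change is easy to verify. The pre-change costs below are inferred by reversing the quoted 75% cut from the new figures, so treat them as reconstructed values rather than official ones.

```python
# Pre-change hero costs in credits, inferred from the quoted 75% reduction.
OLD_COSTS = {
    "Luke Skywalker": 60_000, "Darth Vader": 60_000,
    "Emperor Palpatine": 40_000, "Chewbacca": 40_000,
    "Leia Organa": 40_000, "Iden Versio": 20_000,
}

def reduced_cost(old, reduction=0.75):
    """Apply a fractional price cut to a credit cost."""
    return int(old * (1 - reduction))

new_costs = {hero: reduced_cost(c) for hero, c in OLD_COSTS.items()}
print(new_costs["Darth Vader"])  # 15000, matching the quoted figure
```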

If people weren’t bothered about playing as iconic Star Wars characters, this wouldn’t be much of an issue. But cutting down other players with a lightsaber as Darth Vader or Luke Skywalker is certainly one of the game’s selling points, and having that aspect ‘locked’ away was probably enough for many to vent their frustration and annoyance at it all. Charging people in the region of £40-50 for the game itself and then having them grind their way to these characters isn’t appealing to many.

As we’ve said before, grinding in games is not a new concept. I wanted to revisit this topic because, as far as Battlefront II is concerned, it highlights the ugly side of micro-transactions: it almost feels ‘encouraged’ to pay extra in order to progress within the game. I am not suggesting that EA is forcing people to use micro-transactions; that choice is for the players themselves. I think what frustrates people most of all is how micro-transactions are increasingly being assimilated into standard business practice (which is, of course, to make more money).

People are not stupid, and they can see when enough is enough. The massive response on Reddit was enough to make a global company like EA change its stance. As mentioned previously, micro-transactions have existed for a while, yet when a triple-A title implements them in a way that feels almost forced, there is a disconnect between developer and community. On this occasion, player response was enough to get things changed, but we can be sure it will not be the last time we hear of something like this.


Chaim Gartenberg’s article, ‘EA’S Battlefront II changes highlight the disconnect between gameplay and progress’

Phil Hartup’s article, ‘Should videogames let you pay to win?’

Alex Newhouse’s article, ‘Star Wars: Battlefront II Reddit post receives over 680,000 downvotes’

The (Very, Very Late) 4th Season

Hello everyone,

It’s always bad when you are late to a party. It is especially bad when it’s your own. So here’s to the site, which turned 4 back in May. Hooray! Here comes the obligatory ‘where has the time gone?’ question. Although, to be fair, we are nearly done with 2017. Time does fly.

Posts this year have been non-existent, for which I apologise. The main excuse is real life getting in the way, but I am alive and the site is here to stay. I do have ideas to talk about (although I said this last time round), so it’s not like I will never return. I just took a hiatus which turned into a very long one. The world of videogames and technology never stops.

Anyways, here’s to us for the 4th time!!!!


Virtually Here

It’s been a while. I hope you’ve been well since we last spoke. Two years ago, we talked about virtual reality: the concept of what it means to experience virtual reality through technology, focusing on a particular project called the Oculus Rift. At the time, it was in a developmental stage. Fast forward two years and virtual reality is now a real reality.

Make Believe

If you remember the last post about Pokémon GO, one of its lasting impressions was promoting the idea of augmented reality to a wider audience through people playing the game. The augmented reality aspect of the game was that it ‘placed’ pokémon into real-world locations when using a smartphone with GPS: if you looked at a building through your smartphone, the game would situate a pokémon at that location. Think of it this way: computer-generated images (CGI) are clearly not real but are made to look or feel as authentic as possible. Think of Star Wars: The Force Awakens and how much CGI and other special effects went into making its characters and scenes believable.

One issue I have to raise is that in the last post I used the terms augmented reality and virtual reality interchangeably, incorrectly thinking they meant the same thing. They are similar but different, so I apologise if I misled anyone. From here on, this post will refer to the technology as VR. So what is the difference?

In AR, users interact with virtual content in the real world and can tell the difference between the two. In VR, everything is virtual: the user has no connection with the location they appear to be in. Pokémon GO is a classic example of the former, where virtual pokémon are placed in real-life locations. This is not the case with VR, where everything is an illusion. Both, however, are changing the way we see things from a technological perspective.

The main point of the 2014 post was discussing the Oculus Rift as a case study. The project was born out of a desire for an affordable, better virtual reality experience. The Rift is essentially a headset (or head-mounted display) that uses highly-sensitive LEDs and external sensors to create a virtual experience. In 2013 it was still in development, but now, in 2016, the first VR kits are becoming available on the market. First, though, let’s revisit what the Rift was.

Revisiting the Oculus Rift

Californian Palmer Luckey had a keen interest in virtual reality, stemming from his hobby of fiddling around with electronic projects such as lasers and coil guns in his garage. His interest led him to steadily build a collection of head-mounted displays. The first concept of the Oculus Rift came about because Luckey became frustrated with the head-mounted displays in his personal collection; he felt they were generally of poor quality and that he could do better. ‘Virtual-world sci-fi like The Matrix and the anime show Yu-Gi-Oh! intensified the desire. Why, he asked himself, can’t we do that yet? His modding and iPhone repair work had left him with a lot of money, so he bought a $400 Vuzix iWear VR920, then the most cutting-edge consumer VR headset – enthusiasts call them HMDs, for head-mounted displays – on the market. Then he moved on to the more expensive eMagin Z800 3DVisor. And he kept looking’ (Rubin, 2014).

In order to realise his project, Luckey started a Kickstarter campaign, while supporting himself with various jobs such as iPhone repair work. He regularly posted progress reports on MTBS3D (MTBS standing for ‘meant to be seen’), a forum used by fellow VR enthusiasts. One of those regulars happened to be John Carmack of id Software, creators of the popular Doom games. Carmack read about Luckey’s project with interest, ordered one of the prototypes, and showcased it at the Electronic Entertainment Expo (E3) 2012 using a modified version of Doom 3 BFG Edition.


John Carmack demonstrates the early Oculus Rift at E3

The demonstration to a larger, enthusiastic audience was a pivotal moment in Luckey’s project. The Oculus Rift suddenly found a large audience as keen as he was, and he dropped out of university. The Kickstarter campaign eventually raised $2.5 million, a huge sum compared to its initial $250,000 goal. This allowed him to start a new company, Oculus VR. Joining him were Brendan Iribe (as CEO), fellow VR enthusiast and former executive of Gaikai and Scaleform; Michael Antonov as Chief Software Architect; and Michael Abrash as Chief Scientist. John Carmack also joined in some capacity.

The Rift itself has gone through numerous stages of research and development. Initially, these revolved around DIY kits for interested developers from the Kickstarter campaign. Development Kit 1 (DK1) was given as a ‘thank you’ to backers who invested $300+ in the project’s early days, the idea being to give developers a chance to integrate their content in time for the Rift’s release. The current stage is called Crescent Bay.

How did the Rift work?

As the term implies, virtual reality aims to create a truly virtual but immersive experience, designed to trick the brain into feeling present and forgetting about the technology that creates it. Several key components are needed to achieve this. As Sophie Charara (2016) explains, these typically include ‘a PC, console or smartphone to run the app or game, a headset which secures a display in front of your eyes (which could be the phone’s display) and some kind of input – head tracking, controllers, hand tracking, voice, on-device buttons or trackpads.’

1. The Headset

The Rift resembles a pair of oversized goggles, referred to as a head-mounted display (HMD). Charara (2016) explains, ‘VR headsets use either two feeds sent to one display or two LCD displays, one per eye. There are also lenses which are placed between your eyes and the pixels. These lenses focus and reshape the picture for each eye and create a stereoscopic 3D image by angling the two 2D images to mimic how each of our two eyes views the world ever-so-slightly differently. One important way VR headsets can increase immersion is to increase the field of view, i.e. how wide the picture is. A 360 degree display would be too expensive and unnecessary. Most high-end headsets make do with 100 or 110 degree field of view which is wide enough to do the trick. And for the resulting picture to be at all convincing, a minimum frame rate of around 60 frames per second is needed to avoid stuttering or users feeling sick. The current crop of VR headsets go way beyond this – Oculus is capable of 90fps, for instance, Sony’s PlayStation VR manages 120fps.’

Let us take a moment to understand what she means by frame rate, because it is quite important. On imaging devices like TVs and cameras, the frame rate is the frequency at which consecutive frames (images) are displayed, measured in frames per second (fps). When you watch a TV, how ‘smooth’ the picture appears is a direct consequence of the frame rate: a low frame rate gives you choppy images.

Tarantola (2014) explains, ‘the human eye is capable of differentiating between 10 and 12 still images per second before it starts just seeing it as motion. That is, at an fps of 12 or less, your brain can tell that its just a bunch of still images in rapid succession, not a seamless animation. Once the frame rate gets up to around 18 to 26fps, the motion effect actually takes effect and your brain is fooled into thinking that these individual images are actually a moving scene.’ He further states, ‘so if a frame rate is too slow, motion looks jagged, but if it’s too fast you can have problems too. Live-action movies filmed at 48fps tend to have that certain soap-opera effect people hated in The Hobbit.’ The Hobbit was criticised for being overly ‘hyper-real’, to the point that some critics said they could see production details such as the make-up on actors and the painted sets.

The human visual system is thought to be able to process as many as 1,000 separate images per second, but that would be an extreme to implement. The trick is to find a balance that makes the visuals smooth without making people uncomfortable. In current videogames, 60fps seems to be the standard. I have seen side-by-side videos of games at 30fps and 60fps; some people say they can’t see any difference, but I noticed that the 60fps version transitions much more smoothly, making the visual experience more enjoyable. The Rift goes beyond 60fps, so its visuals should be smoother still.
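Frame rate translates directly into a per-frame time budget for the hardware, which is why higher targets like 90fps are so demanding: the whole scene must be rendered in a much smaller slice of time. A quick illustration:

```python
def frame_time_ms(fps):
    """Time budget per frame, in milliseconds, at a given frame rate."""
    return 1000 / fps

for fps in (30, 60, 90, 120):
    print(f"{fps:>3} fps -> {frame_time_ms(fps):5.2f} ms per frame")
```

At 90fps (the Rift’s target) each frame must be finished in roughly 11ms, versus around 33ms at 30fps.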

2. Motion Tracking

So you wear the headset, but how do you turn your movements into quantifiable input? Input matters as much as output. If you use a computer, for example, the keyboard is an input device that translates your actions into quantifiable data (keystrokes). For the Rift, there is a compatible piece of technology called Oculus Touch: essentially a wireless controller resembling a gamepad attached to a strap. Because you physically hold something, you can make hand gestures that are translated into the virtual experience, giving you an input response.

Importantly, the Rift needs additional input devices to track a user’s position (motion tracking). Sensors are the best solution, as they needn’t be large. On CGI-heavy films like Avatar, actors wear special suits fitted with multiple sensors; sensitive cameras placed around the set capture their positions (including facial expressions) through the sensors, and the data is mapped into software, where the film-makers apply their special effects to the scene.

The Rift works in a similar way. The headset has a series of infrared LEDs built in, which communicate with another input device: a wireless sensor that looks like a small microphone-shaped pole. Oculus calls this the Constellation tracking system. The Rift has LEDs on most of its sides, allowing for full 360-degree rotation.


Oculus Rift and Constellation sensor

The communication between sensor and LEDs pin-points the position of the user precisely. This is done by knowing the configuration of the LEDs; the information is then transferred with sub-millimetre accuracy and almost zero lag (latency). Constellation can use a single sensor or multiple ones placed around the area; by employing multiple sensors at different angles, the system can track the entire room.

The VR Future?

In the last post, I talked briefly about what the future holds for this technology and said only time would tell. Well, time certainly did tell: the industry is taking off. We not only have the Oculus Rift, but other technology heavy-hitters on the market, with Samsung (Gear VR), Sony (PlayStation VR, formerly Morpheus), and HTC (Vive) getting in on the act. The enthusiastic response (at least from a technological view) when the Rift was demonstrated at E3 should be an indication of its potential.

Even though we are seeing a lot of industry activity, that does not mean the market is ready. For VR technology to be truly successful, it needs to transition from niche to mainstream. It has certainly grown in prominence; as I said earlier, there are VR kits to buy now. But I wouldn’t say it has peaked yet. Several issues are holding it back at the moment.

One of the immediate issues is the high cost of such devices. The Rift costs several hundred pounds, so many people are already priced out. That doesn’t mean they can’t experience VR at all: there exist several cheaper versions, ranging from £10-20, most of which use a smartphone to create the experience. Many consist of nothing more than a cardboard base and some lenses; you place your smartphone into the holding slots and look through the lenses in the comfort of your own home. I happened to try one of these cheaper models and was surprised at how immersive it was. Looking through nothing more than lenses, I found myself turning my head and body around while playing a game. You forget somewhat that you’re in a room and lose yourself in the VR experience, and that’s what this is all about.

Example of cheaper VR kits

Another issue is the high cost of processing power. Not only do you need the necessary input devices, you also need a beefy computer system to run the whole thing. ‘Chipset maker Nvidia predicted last December that in 2016 only 13 million PCs will be powerful enough to run VR, meaning that less than 1% of the 1.43 billion PCs in use globally this year are up to scratch. This explains the slow start’ (Rossi, 2016). Recently, Oculus VR has issued statements about ‘Asynchronous Spacewarp’ (ASW). ASW is a technique that allows VR titles to run at around half the processing power by ‘extrapolating frames’; in other words, it is a frame-rate smoothing technique that halves the CPU/GPU time required to produce nearly the same output from the same content. The idea is to lower the Rift’s minimum system specifications, allowing people with less powerful computers to use it.
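As a very rough illustration of the frame-extrapolation idea: the application renders at half rate, and every other displayed frame is predicted from the motion between the last two real ones. This is a toy model, with short lists of numbers standing in for images; Oculus’s actual ASW operates on rendered frames and motion vectors, not anything this simple.

```python
def extrapolate(prev_frame, curr_frame):
    """Predict the next frame by carrying forward per-'pixel' motion."""
    return [c + (c - p) for p, c in zip(prev_frame, curr_frame)]

rendered = [[0, 0], [2, 1], [4, 2]]  # three 'real' frames from the GPU
displayed = []
for i, frame in enumerate(rendered):
    displayed.append(frame)
    if i >= 1:
        # Synthesise an extra frame, nearly doubling the displayed rate.
        displayed.append(extrapolate(rendered[i - 1], frame))

print(len(rendered), "rendered ->", len(displayed), "displayed")
```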

There’s no doubt that some of these issues will be corrected in the future, though that will most likely take several years at the very least. Remember that the project only really started three years ago, yet rapid growth is projected. ‘When Oculus Rift launched its $2.4 million Kickstarter crowdfunding campaign in 2013, it was billed as the first truly immersive virtual reality headset for video games. Expectations such as this have allowed potential virtual reality to grow quickly, with revenues from both VR hardware and software products projected to be a formidable $5.2 billion USD in 2018. At the same time, the number of users adopting the new tech, predominantly gamers, is expected to reach 171 million. At present, 43 million people worldwide own a VR headset, so ownership could rise more than three fold in just two years, even before the predicted peak in 2021’ (Rossi, 2016).

It is certainly a clearer picture than in 2014. There is a market, there is potential, and there is money to be made. I wrote this post because VR is starting to become more widely available. 2016 was supposed to be the year VR really took off; while several companies are developing and releasing their own models, I wouldn’t say it has peaked yet. The technology certainly needs more refining, but I think it’s safe to say it’s a question of when, not if, it becomes a major thing.


Sophie Charara’s article, ‘Explained: How does VR actually work?’

Ben Rossi’s article, ‘The Future of Virtual Reality’

Peter Rubin’s article ‘The Inside Story of Oculus Rift and How Virtual Reality Became Reality’

Andrew Tarantola’s article, ‘Why Frame Rate matters’