David R. Howard
Jun 3, 2019

Edge Case Scenario: On Possibility Space, Speedrunning and Parallel Universes

Note: This is an essay which I originally wrote as a term paper for a game studies class whose core text was Critical Play by Mary Flanagan. Citations are in MLA except where hyperlinks suffice.

Super Meat Boy (2010)

“Play is grounded in the concept of possibility[…]Of all the possibilities for action that we perceive, only a few become ongoing projects: we can only do ‘one thing at a time’.”
-Mihaly Csikszentmihalyi and Stith Bennett, “An Exploratory Model of Play”

Derived from probability theory, which has its roots in mathematical analyses of games of chance, “possibility space” is a popular tool for understanding videogames among scholars and designers alike. Despite the name, possibility space does not manifest in games as literal volumetric space, but rather as “an abstract decision space or conceptual space of possible meaning” (Salen and Zimmerman 390). For this reason, certain historical applications of the concept to level design and story structure have been somewhat self-limiting — not truly representative of the bigger picture. Exploring possibility space is not merely the binary act of choosing “the door on the right or the left”, but the moment-to-moment breakdown of the entire spectrum of methods by which situations may be approached, including active avoidance or no action at all. In this essay I aim to define what possibility spaces are, how they are made, how they are explored and what that exploration implies about the nature of videogames. My approach borrows from Manovich’s “database-narrative hybrid”, as well as Boluk and LeMieux’s “metagame”, and uses speedrunning (the practice of trying to complete a game as fast as one possibly can) and the internet video phenomena surrounding it to interrogate the true extent to which explorations of possibility space are “player-driven”.

Possibility Database

Possibility space is in many ways akin to the “magic circle”, as it defines a self-enclosed and quasi-spatialized play area in which a game takes place. Specifically for videogames, possibility space is not just something embedded within play; rather, it comprises the entirety of the game artifact itself. Possibility space could potentially be expressed as the number of unique frames of output that a videogame can produce, which even for a simple game would be rather large. If we were to calculate this for Pong (1972), we would need to multiply together the number of possible positions for each paddle, the number of possible ball positions and the number of possible score combinations just to begin to understand the many possible permutations. Of course this approach would also be reductive for most games, for a few reasons: 1) it does not take into account nonvisual output such as audio, haptic feedback or network communications; 2) it does not consider possible corruption, defect or interference at the hardware level; 3) many systems and monitors, especially older ones, do not necessarily operate on a fixed framerate; and 4) external output is not necessarily equivalent to internal “state”: it is impossible to tell in which direction the ball in Pong is travelling from a single screenshot. So let us consider another approach.
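To make the scale concrete, here is a rough back-of-the-envelope count in Python. The parameters (a 640×480 playfield, a 48-pixel paddle and a score cap of 11) are hypothetical stand-ins rather than the real arcade hardware’s values, but the arithmetic shows how quickly the combinations multiply.

```
# Rough upper bound on Pong's "unique frame" count, under assumed
# (hypothetical) parameters: a 640x480 playfield, paddles that move
# only vertically in whole-pixel steps, and scores capped at 11.
SCREEN_W, SCREEN_H = 640, 480
PADDLE_H = 48                      # assumed paddle height in pixels
MAX_SCORE = 11                     # assumed score cap

paddle_positions = SCREEN_H - PADDLE_H + 1   # vertical positions per paddle
ball_positions = SCREEN_W * SCREEN_H         # ball may occupy any pixel
score_combinations = (MAX_SCORE + 1) ** 2    # (left score, right score) pairs

total_frames = paddle_positions ** 2 * ball_positions * score_combinations
print(f"{total_frames:,} distinct frames")   # ~8.3 trillion under these assumptions
```

Even this crude count reaches into the trillions, and as noted above it still omits everything a single frame cannot show, such as the direction the ball is travelling.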

Lev Manovich suggests that all New Media art, including videogames, trends towards one of two archetypes: the narrative and the database. In this dichotomization he interprets the “algorithm” of a given game as a form of narrative. This is an interesting harmonization of rules and narrative which runs counter to (or perhaps foreshadows) the ensuing ludology/narratology split within games studies. Yet like the “ludonarrative”, Manovich ultimately concludes that “competing to make meaning out of the world, database and narrative produce endless hybrids.” This notion of videogames as database-narrative hybrids is one way we can come to understand possibility space: an all-encompassing matrix of game states — not unlike a supermassive flow chart or decision tree or web directory or Twine story — whose interdependent connections, through which the “narrative” plays out, are contingent on player input (or lack thereof). In this way Kickmeier-Rust and Albert liken gameplay to cellular automata, where “game entities are seen as cells of a multi-dimensional grid” (213). However, like the Pong example, it should be noted that videogames are not indexical in the same way that a book or film strip is. Game states are not actually crafted, encoded or pulled from storage (except in certain contexts, like when loading saves); rather, they emerge from the collage-like algorithmization of individual elements, for as Manovich says, “data does not just exist — it has to be generated”.

So how might players explore possibility space? Well, if every state of a videogame is part of its possibility space, then all play must be considered exploratory, not just that which occurs in “open-ended” or “simulation sandbox” games as per Tornqvist (90). However, there is some disagreement over whether these explorations are player-driven or procedure-driven. Bogost argues that procedure creates the opportunity for play, that “when we play video games: we explore the possibility space its rules afford by manipulating the symbolic systems the game provides. The rules do not merely create the experience of play — they also construct the meaning of the game” (121–122). However, in the very same volume of work, Squire contradicts this procedural rhetoric, claiming that “to understand the meanings of game play[…]we can’t just look at the rules; we need to look at players’ performances and understand their understandings of them” (178). This sentiment is echoed loudly by Sicart.

I concur that without players a videogame is like a database which can only be accessed in part — a sealed-off black box blinking “insert coin” for eternity — for “play is possible only when players decide it is possible” (Flanagan 192). So while even “passive” forms of videogame observation like “zero-player” games, screen-savers and attract modes can constitute a sort of participatory play act, in them the possibility space tends to become hyper-limited and exploration becomes non-self-directed. To further illustrate how exploration of possibility space is primarily player-driven, we should investigate how it is that videogames come about in the first place. But to answer that, we need to talk about parallel universes.

The Game Gets Confused

To the layperson the development of a videogame may seem like filling a glass with water: taking some pure crystallized concept (as evidenced by the industry’s emphasis on master design documents often referred to as “bibles”) and filling it with content until the promise of that initial idea is fulfilled. In reality game-making is more like cooking a broth: an iterative process which requires constant adjustment, taste-testing and re-balancing; the key difference being that game-making can be subtractive as well as additive. Consider how this relates to Manovich’s algorithmic understanding of play, which invokes The Sims (2000) designer Will Wright’s assertion that “playing the game is a continuous loop between the user (viewing the outcomes and inputting decisions) and the computer (calculating outcomes and displaying them back to the user). The user is trying to build a mental model of the computer model.” This, then, must cut both ways. Not only do players create mental models to understand and explore possibility space, but game designers create computer models from their own mental model of what their game “ought to be”. A videogame is the net result of the negotiation between conceptualized and computerized model-making; a procedure unto itself which involves writing, compiling, testing and rewriting code based on deviation from expectation.
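Wright’s loop has an almost literal counterpart in the standard structure of a game program. The sketch below is a minimal, hypothetical rendering of it in Python; the function names are mine and stand in for whatever a given engine actually does, but it shows where the player’s half and the computer’s half of the loop meet.

```
# A minimal sketch of Wright's "continuous loop" between player and computer.
# All names here are illustrative placeholders, not from any real engine.
def game_loop(state, read_player_input, simulate, render):
    while not state.get("game_over"):
        decision = read_player_input(state)  # the player views the outcome and inputs a decision
        state = simulate(state, decision)    # the computer calculates the next state...
        render(state)                        # ...and displays it back to the player
    return state
```

Every pass through such a loop commits to exactly one branch of the possibility space; the designer’s parallel loop of writing, compiling, testing and rewriting is what decides which branches exist to be taken.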

When approaching videogame design computer programmers must implicitly cast themselves in the role of the player, as N++ (2015) developer Mare Sheppard says in a GDC talk: “our job as level designers is first to explore that huge possibility space that’s latent in the game design, so that we can discover various arrangements of elements that produce interesting experiences for the player to engage with”. However, because game-makers often have prior knowledge of their games’ inner workings, they are also psychologically preconditioned to play them a certain way. Thus a “critically designed” videogame must go through both rigorous second-party QA testing and third-party play-testing — ideally with diverse participants to get “fresh” perspectives — for only then can it be determined how successfully the core design was conveyed (Flanagan 257–259).

If we were to visualize this relationship between the theoretical/mental model and the practical/computer model as two circles superimposed upon one another, then in the designer’s ideal world possibility space would emerge from the “middle-out”, such that it becomes congruent with their original preconceived notion. However, more often than not, possibility space expands from the “edges” of the established, yielding a pseudo-Venn diagram of what the game was presupposed to be and what has been actualized within (or as) its possibility space. On one side of the diagram would be any cut content (concept art, unused assets, beta levels, placeholder text, etc.) and on the other would be any incidental phenomena such as bugs, glitches and exploits. My core thesis is that many videogames, if not most, possess a dormant subset of possibility space which is often, if not always, far greater in magnitude than presumed. Building from Parker’s term “expansive gameplay” and Gazzard’s term “appropriated play”, I will henceforth refer to this outward-extending portion of possibility space as the “expanded space”, and the inverse portion as the “presupposition space”.
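The diagram reduces to simple set arithmetic. The sketch below uses made-up example “states” purely to pin down the terminology; it is an illustration of the distinction, not a claim about any particular game.

```
# The pseudo-Venn diagram as set arithmetic, with made-up example "states".
presupposed = {"intended route", "boss fight", "beta level"}
actualized  = {"intended route", "boss fight", "clipping glitch", "sequence break"}

presupposition_space = presupposed & actualized  # designed and actually realized in the game
cut_content          = presupposed - actualized  # designed but never shipped (beta levels, unused assets)
expanded_space       = actualized - presupposed  # realized but never designed (bugs, glitches, exploits)
```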

In Metagaming, Boluk and LeMieux write that “the greatest trick the games industry ever pulled was convincing the world that videogames were games in the first place”, and through this reasoning we can come to understand that all videogames are in fact mutually-agreed-upon “metagames”. “SimCity” (1989) is just a framing device used to market and commodify a standalone computer program, one that is fundamentally no different than a browser or word processor or any other software, and this metagame only exists within the overlap of possibility space and artistic presupposition. Note that I say presupposition rather than intent, partly to sidestep the question of auteurship, but also because I believe it better articulates the distance between thought and knowledge, or expectation and execution. For instance, in a procedurally generated game like Spelunky (2012) emergent behaviours may occur that were not intended by designer Derek Yu, but which he could also presuppose might be possible and which do not otherwise break the communicated logic of his game design. For example, Yu has relayed an anecdote of a player encountering an unforeseen situation where an enemy that had been stripped of its weapon (a boomerang) stole a new one from a shop, triggering the shopkeeper to attempt to kill the player. This is in stark contrast to the “solo eggplant run”: an extremely difficult feat within the game thought to be impossible by Yu but proven achievable due to a bug stemming from simple programmer oversight.

Furthermore, despite larger budgets and greater expertise, current AAA development methods are actually less equipped to close this widening gap between expansion and presupposition, simply because the more team members there are who have an impact on a game’s final output, the less assurance there is that any one team member has a total understanding of its possibility space, or of the full ramifications of their contributions. When game-makers encounter expanded space during development, they must either claw it back into presupposition by closing off the passage to it via some form of programmer intervention, or alternatively expand their presupposed notions, i.e. “it’s not a bug, it’s a feature”. Conversely, the more players of a videogame there are, and the more readily they are able to communicate with each other, the faster they will explore its possibility space.

For these reasons, post-release patches and updates for games have become commonplace on internet-enabled platforms, whereas in the past such fixes were rare and usually reserved for localizations or recalls; and while developers do not want to ship “buggy” games, in a way this is done strategically. Given continuous access over a long enough period of time, the collective “player” will always uncover more possibility space than the collective “designer” designs, including even the most well-hidden secrets and Easter eggs. This is why speedrunning, as a “formalised and socialized” act of exploring expanded space, continues to become increasingly relevant to game design practice.

Games Done Quirkily

In tandem with (yet largely unaware of) academic discussions on the ontology of videogames, speedrunning communities in the late 1990s and early 2000s were posing philosophical questions about intentionality long before the subculture gained traction in the 2010s thanks to the charity marathon Games Done Quick (GDQ) and the live-streaming service Twitch. Such inquiries included: “What is considered in-bounds and out-of-bounds?”, “What does it mean to play in ‘the spirit of the game’?” and “What is a glitch?” — topics which YouTube user EZScape covers in several of his videos on the subject. Through these it is explained how, in the early days of speedrunning, the only officiating organizations, Twin Galaxies and Speed Demos Archive, either shunned or outright banned the use of glitches, bugs and exploits, which were viewed as the digital equivalent of performance enhancement in sports; in other words, cheating.

However, because the metagame of the speedrun is so entrenched in the methodical exploration of possibility space, over time these social mores gave way to a playculture which actively seeks out bugs and glitches as a means of accomplishing faster times. This often involves some degree of “unplaying” via sequence breaking, whereby entire sections are either skipped over or done out of order to save time (Flanagan 33). Dormans has written that “dominant strategies effectively narrow the broad possibility space set up by emergence into a single confined path”, however the history of speedrunning shows this to be only half true. For any given actively-run speed game, new tricks and strategies (strats) are constantly devised and revised as the world record (WR) time is inexorably lowered towards a hypothetical “perfect run”.

This is a recurring theme in the video documentaries of YouTube user Summoning Salt, a runner of Mike Tyson’s Punch-Out!! (1987): for example, in the span of just 4 days in 2009, the Donkey Kong 64 (1999) Any% completion WR was lowered by nearly 3 hours. In two of the most popular speed games, Super Mario 64 (1996) and The Legend of Zelda: Ocarina of Time (1998), the Any% categories can be successfully run without collecting a single Power Star or clearing a single dungeon respectively. As Callum Angus notes, Ocarina of Time has been effectively “deemed ‘broken’” by its speedrunning community, which is in part what precipitated the perceived need for a “glitchless” category in order to re-establish a sense of skillful play. Scully-Blaker refers to these differing approaches as “deconstructive runs” and “finesse runs”.

At its best, deconstructive speedrunning is a celebration of videogames and the exploitability of their expanded spaces, what writer Ryan Cooper calls “the joy of jank”. Through their practice, runners make a mockery of the procedural rhetoric which presupposes that the rigor of rules and complexity of systems are the source of a game’s meaning, which Angus likens to the Dadaist tradition, in that “a speedrun makes it clear that a video game is[…]a fluid medium that can tell different stories and mean different things depending on whose hands are holding the controller.” But at its absolute nadir, the speedrunner perception of brokenness can come across as diminishing and mean-spirited, usually in the form of the “lazy game developer” trope, which devalues the labour that allows for the possibility of speedrunning in the first place. To an extent this is understandable: in normal play situations (often problematically called “casual play” by runners) a game-breaking bug can ruin the experience and/or result in loss of progress. Many in the speedrunning community are also of the demographic which remembers when a buildup of dirt inside a cartridge or a scratch on a CD-ROM could outright prevent gameplay, and so any “glitchiness” has an intrinsic association with faulty machinery.

However, despite being used interchangeably, “error”, “bug” and “glitch” are not necessarily one and the same. For instance, a “softlock” is a discrete set of states in which an otherwise game-breaking bug does not result in a crash or fatal error, essentially trapping the player within a specific pocket of possibility space like an inescapable black hole. And the crucial difference between a bug and a glitch is embodied in an SGDQ SM64 run in which an accidental crooked cartridge tilt occurs, causing the game to almost immediately crash, forcing a manual reset and effectively “killing” the run. Another common refrain used among commentators at GDQ events to describe hard-to-follow technical play is that “the game gets confused”. This is inaccurate of course, because videogames do not have the capacity for misinterpretation of instruction. Much like how a computer does not discriminate between “game” and “software” artifacts, a videogame does not differentiate between “bugged” and “regular” code; it just executes the program as written and produces an output as it would for any other possible state. When it comes to bugs there are no broken videogames, only broken expectations.

To perform a deconstructive speedrun is to thread the needle through the eye of expanded possibility space, even if only momentarily, à la the “frame perfect” trick. For runners, discovering this space has become like mining a precious resource, and because they play so frequently within the expanded domain many new strats are also found by sheer happenstance. The true enormity of possibility space beneath the exterior “façade” (Ismail’s wording) or “scaffolding” (Scully-Blaker’s wording) of most gameworlds makes itself known in speedrunning practice. As Cameron Kunzelman writes in response to the most recent AGDQ, “when you contemplate [this], you’re on the edge of the sublime.”

The Impossible Choreographer

Other than deconstructive vs. finesse, there are two distinct types of speedrunning: live runs (whether real-time attack (RTA), in-game time (IGT) or individual level segments (ILs)) and tool-assisted speedruns (TAS) — though as the dominant form, RTA is usually implied unless otherwise stated. If an expertly-performed live speedrun is like watching a virtuoso pianist, then a TAS is like a player piano, both in the sense of executing on a predetermined pattern of inputs as well as in the potential for displays of inhuman ability. When these emulator-enabled tools were initially released, there was some degree of tension and animosity between tool-assisted and conventional speedrunners due to controversy around the faking of WR runs. However, as speedrunning has matured, TAS and live runs have formed a more symbiotic relationship, with TAS serving as a sort of R&D test bed for the real-time viability of certain strats. TAS affords players the ability to explore possibility space at the finest level of “granularity” available; to squeeze through the cracks in possibility space too tight for human runners. To the TASer, every videogame is a turn-based strategy game.

Probably the most well-known TAS to date is Scott Buchanan’s “Watch for Rolling Rocks — 0.5x A Presses” individual level segment of Super Mario 64, which went viral after he uploaded the run with a commentary track to his YouTube channel pannenkoek2012 in January of 2016. The video spawned several memes due to the humorously complicated jargon used to describe gameplay manoeuvers (e.g. “scuttlebug raising”), as well as a screencap of a comment by TJ “Henry” Yoshi claiming that “an a press is an a press. You can’t say it’s only a half”. In fact, Buchanan’s run was not only an attempt to complete the level as fast as possible, but also part of the A Button Challenge (ABC), a SM64 community initiative to complete the 120-star route of the game with as few A-button inputs as possible. So not only is the ABC a metagame within a metagame, but because the A-button of the Nintendo 64 controller is mapped to Mario’s jump, i.e. the core of the game’s design, it is also a radical example of unplaying (and, while not as compelling, he has also explored such runs for every other possible input).

Moreover, the actual methodology of Buchanan’s run becomes fascinating when viewed through the lens of possibility space, especially his use of “hyper-speed walking” (HSW), a technique executed by moving Mario backwards at an extremely specific part of the Hazy Maze Cave level where there is a slope submerged in water perpendicular to both a wall and a ceiling. Due to a confluence of bugs, this allows Buchanan to build up Mario’s horizontal speed limitlessly for over 12 hours, and this tremendous speed is then used to travel across a near-infinite grid of parallel universes (PUs). This is all in order to circumvent parts of the level which would otherwise require multiple jumps. HSW calls to mind Scully-Blaker’s idea of speedrunning as a “heightened presence in the gamespace, a hyperexistence” whose velocity “‘does violence’ to the narrative contained within”.

PUs are also the perfect analogy for the vastness of possibility space, and specifically expanded space, as they exhibit the same quasi-spatialized property. Reaching a PU in SM64 is not like entering a level normally: due to a lack of textures they are completely invisible, cannot be observed by the game camera, do not contain any instantiated objects like coins or enemies, and really only exist due to a mathematical quirk by which the coordinate values for Mario’s position are truncated when checking for collision against the base level geometry. Thus “the game gets confused” and Mario is able to collide with in-bounds surfaces even though the player has actually navigated out-of-bounds. This is what makes the minimum number of A-button presses for the level possible, and through this TAS playing SM64 becomes a deconstructive act of “appropriating the spaces of play and taking them somewhere else, where not even the designer can reach”.
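The quirk itself is easy to reproduce outside the game. As pannenkoek2012 explains it, Mario’s position is stored as floating-point values, but the collision check reads the coordinates as signed 16-bit integers, which wrap around every 65,536 units, so a point several “universes” away lands back on in-bounds geometry. The sketch below mimics that truncation in Python; the specific values are illustrative only.

```
# A sketch of the coordinate truncation behind SM64's "parallel universes",
# as described in pannenkoek2012's videos. Mario's position is a float, but
# the collision check reads it as a signed 16-bit integer, which wraps
# around every 65536 units. The values below are illustrative.
def to_s16(value):
    """Truncate to a signed 16-bit integer, the way the collision code does."""
    return (int(value) + 0x8000) % 0x10000 - 0x8000

mario_x = 2000.0 + 65536 * 3     # a position three "parallel universes" out
print(to_s16(mario_x))           # 2000: collision still sees an in-bounds point
```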

In fact, Buchanan has often unconsciously contextualized his work in terms of possibility space. Before his ABC runs he was best known for videos chronicling “The Impossible Coin” and “The Mystery Goomba” — two instances in which an object appeared to be unreachable, with the former proven possible to reach and the latter impossible. Buchanan’s explorations also take on a quality of excavation in a video entitled “A New Impossible Coin”, where he not only uncovers yet another coin hidden within the level geometry of Tiny-Huge Island but also speculates how iterative changes by its designers may have led to its “burying”. In another video, which shows SM64 TASers’ predictions for the lowest possible number of A-presses in the 120-star route, Buchanan demonstrates how even the closest guess, 54, was significantly higher than the cut-off date result, 45 (which has since been lowered to 27) [EDIT: since the time of writing this has been lowered again to 23, and another glitch has been discovered which makes a 0x Any% run possible on the Wii Virtual Console], and he eloquently observes that “indeed, it’s impossible to know how much we don’t know about the game”.

Among YouTubers, pannenkoek2012 is fairly unique in that he tends to shy away from personal branding and focuses solely on his TAS work for its own sake, with his sporadic uploads serving more as documentation than entertainment. In spite of this, his impact can be felt across YouTube playculture: from the videos of redfuzzydice (fka MagicScrumpy), who attempts to reach out-of-bounds items in Super Mario Sunshine (2002) like “The Mystery Banana”, “The Impossible Rocket Nozzle” and “The Secret Tree”, to the recently-formed community revolving around Super Mario Odyssey (2017) challenges which includes HelixSnake, DGR, Fearsome Fire, Gamechamp3000, looygi et al. Specifically, Gamechamp3000’s “jumpless” run, HelixSnake’s “minimum captures” run and the Sunshine “hoverless” run can all be seen as direct descendants of the ABC challenge. The relationship to possibility space is also made explicit once again in DGR’s Is it Possible? series.

Conclusion — 0.5x Real: Metagames between Real Worlds and Fictional Rules

In this essay I have explored how the possibility space of a videogame can be divided into presupposed and expanded play areas, and how those expanded spaces are often much broader than initially thought. However, one dimension which I have not considered is how players themselves may further extend possibility space, whether through Action Replay/Game Genie, ROM hacks, mods, Machinima, glitch art practice, file-ripping or other user-generated tools and content. These expansions can be seen across internet video and let’s plays, as in PeanutButterGamer’s “HACKING!” videos, Shesez’s Boundary Break series, Polygon’s Car Boys, Touch the Skyrim and Let’s Go to Hell, and the countless presupposition-defying creations of Minecraft (2011) and Mario Maker (2015). If TAS is a player piano, then these explorations of possibility space might be something more like Black MIDI: a hijacking of digital form for the sake of aesthetic and comedic value above and beyond playability.

But perhaps the most apropos example of player-driven expansion comes from the TAS block of AGDQ 2017, in which a TASBot (a microcontroller wired to a physical game console) is used to execute a technique known as “arbitrary code injection”. By performing hyper-specific button inputs, TASers are able to manipulate memory addresses in such a way that they can insert their own code, which most famously has been leveraged to beat Super Mario World in under a minute RTA. Even more startlingly, by using a specially-designed setup that networks together multiple 8- and 16-bit consoles, the runners turned the Super Nintendo cartridge into a conduit through which an entire fully functional operating system was processed; one capable of advanced features such as video playback, connecting to the internet and even, hypothetically, emulating other consoles. Thus it could be argued that the possibility space of The Legend of Zelda: A Link to the Past (1991) is comparable in scope to that of the computer on which I wrote this essay, which is to say near-infinite.
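To give a flavour of what “inserting code through input” means in principle, here is a deliberately tiny, fictional sketch: a toy machine whose input buffer sits directly before its instruction region in one flat memory array, so an overlong stream of “controller input” spills over and becomes the code that gets executed. None of this reflects the actual memory layout or exploit chain of any real console; it is an illustration of the general idea only.

```
# A toy illustration of arbitrary code injection: input data written past the
# end of its buffer lands in memory that the machine later treats as
# instructions. This is a simplified fiction, not a model of real hardware.
memory = [0] * 16                      # slots 0-7: input buffer, slots 8-15: "code"
memory[8:11] = ["NOP", "NOP", "HALT"]  # the program as originally written

def write_input(inputs):
    """Copy 'controller input' into the buffer with no bounds check."""
    for i, value in enumerate(inputs):
        memory[i] = value              # inputs longer than 8 spill into the code region

def run():
    pc = 8                             # execution always begins at slot 8
    while memory[pc] != "HALT":
        print("executing:", memory[pc])
        pc += 1

# Ten inputs: the last two land in slots 8 and 9, replacing the original code.
write_input([0] * 8 + ["SPAWN_CREDITS", "WARP_TO_ENDING"])
run()
```

Real console exploits are vastly more involved, but the underlying move is the same: input that the game treats as data ends up somewhere the machine treats as instructions.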

Citations

Bogost, Ian. “The Rhetoric of Video Games.” In The Ecology of Games: Connecting Youth, Games, and Learning, ed. Katie Salen. MIT Press, Cambridge, MA, 2008, pp. 117–140.

Boluk, Stephanie, and Patrick LeMieux. Metagaming: Playing, Competing, Spectating, Cheating, Trading, Making, and Breaking Videogames. University of Minnesota Press, Minneapolis, London, 2017.

Csikszentmihalyi, Mihaly, and Stith Bennett. “An Exploratory Model of Play.” American Anthropologist, vol. 73, no. 1, 1971, pp. 45–58.

Flanagan, Mary. Critical Play: Radical Game Design. MIT Press, Cambridge, MA, 2009.

Gazzard, Alison. “Grand Theft Algorithm: Purposeful Play, Appropriated Play and Aberrant Players.” ACM, 2008.

Kickmeier-Rust, M., and D. Albert. “Emergent Design: Serendipity in Digital Educational Games.” In Proceedings of the 3rd International Conference on Virtual and Mixed Reality: Held as Part of HCI International 2009, Springer-Verlag, San Diego, CA, 2009, pp. 206–215.

Salen, Katie, and Eric Zimmerman. Rules of Play: Game Design Fundamentals. MIT Press, Cambridge, MA, 2003.

Squire, Kurt. “Open-Ended Video Games: A Model for Developing Learning for the Interactive Age.” In The Ecology of Games: Connecting Youth, Games, and Learning, ed. Katie Salen. MIT Press, Cambridge, MA, 2008, pp. 167–198.

Tornqvist, Dominicus. “Exploratory Play in Simulation Sandbox Games: A Review of What We Know About Why Players Act Crazy.” International Journal of Game-Based Learning (IJGBL), vol. 4, no. 2, 2014, pp. 78–95.