
Medical Technics
From Embodiment Skills in Computer Games to Nintendo Surgery

From Embodiment Skills in Computer Games to Nintendo Surgery

AS NOTED IN MY EARLIER LLLB (late life little book), Ironic Technics, much of today’s surgery is often dubbed “Nintendo surgery,” with the idea that the new skills of this microsurgery relate to the eye–hand skills common to playing computer games. I have often pointed out that when the screen games that came to dominate computer gaming first appeared and addicted so many young players, much publicity worried about a downgrading of more active, particularly outdoor, play by adolescents; in retrospect, we can see that this practice ended up being a sort of pre-skilling for what is commonly called Nintendo surgery. (What in critical theory is often decried as de-skilling can, in this context, also be pre-skilling.) I could not but experience this similarity when I had my first angioplasty in 2006. My surgeon operated a thin tube device that entered my artery from the groin. This laparoscopic probe was outfitted with a light, a balloon device, and eventually a drug-saturated stent. There was a multiscreen display that he could view directly and that I could also view from a side angle. I had previously had a similar observational experience with older-fashioned colonoscopies. In the colonoscopy case, while awake I could view the multicolor screen that showed the lit-up interior of my intestine; but how could I tell it was mine? This was a first experience of hard-to-identify imaging, a sort of simultaneous first- and third-person feeling of sighting. For a philosopher, the later style of colonoscopy, in which the patient is anesthetized, is a loss. And once one turns eighty, physical colonoscopies are considered too risky to undergo at all.

My highly skilled heart surgeon performed the angioplasty while I, under only local anesthetic, could watch the process. Phenomenologically this is an odd experience, since it doubles the seeing of oneself both as an active sighting and as a sort of “body-object.” Years later, when I gave a lecture at Oxford University on “My Case,” drawing on the various DVDs I have from such procedures, my audience took note of this oddity of doubled self-observation. Many of the audience members were queasy, while others intuited that this simultaneous subjective/objective vision approximates the skilled vision of the surgeon.

Much later, my audiologist, who tests for hearing changes, referred me to a soon-to-retire acoustic surgeon in the practice. He was retiring early, he told me, because the “young surgeons who use Nintendo surgical skills,” trained on different technologies, were much better than he was. A similar pre-skilling in practice occurs with simulation training for pilots, and even more so across the long-distance spectrum of remote sensing and control for Mars explorers and military drones.

Although this may seem somewhat of a detour, I turn to a similar analysis of computer game skills that appeared in Debugging Game History (2016). The reader can note the relation between these preparatory skills and their new embodiment in surgery.

Embodiment

This will be a postphenomenological analysis of variations on embodying screen games—video, computer, arcade, and Wii-style bodily play skills. Although gaming with screens, in contrast to pinball and other earlier mechanical arcade games, began in the mid-twentieth century under the dark designs of AI combined with war gaming, this approach will concentrate upon entertainment games and look at typical variants from the mid-twentieth into the twenty-first century. In each variant the player must develop bodily skills, some of which have far-reaching implications with respect to contemporary life practices.

The analysis will be interrelational and parallel to the many styles of such analyses found in science and technology studies, but it will focus upon the postphenomenological interest in bodily perception and action, and thus the player remains one focal interest. But a game presents a “world” in the sense that some scene of action is presented or imaged—for purposes here, usually on a screen or visual display device. The interaction involves bodily perception, movement, and skill development that varies with the multiple “screen worlds.” Embodiment in relation to technologies, however, takes different shapes. Indeed, each type of technology calls for different actional skills. Varying degrees of bodily engagement and skill levels, ranging from amateur to virtuoso, also come into play. In my own case I have done studies in relation to musical instruments (“Postphenomenology: Sounds beyond Sound,” forthcoming in Cobussen, Meelberg, and Truax, The Routledge Companion to Sounding Art) and a range of technologies in my Listening and Voice: Phenomenologies of Sound (2nd ed., SUNY Press, 2007), chapters 12 to 13, and in my Experimental Phenomenology: Multistabilities (2nd ed., SUNY Press, 2012), chapters 12 to 14.

Embodiment is a dynamic and complex phenomenon. It entails learning bodily skills and thus is developmental, and different technologies call for different degrees of engagement—take two keyboard examples. Typing or word processing calls for speed and eye–hand coordination but does not ordinarily call for more complete bodily engagement. In contrast, a virtuoso piano player engages the whole body, as any prime performance shows. The same spectrum applies to game skills, although the simpler and earlier games remained closer to the word-processing example, except for speed.

Two-Dimensional Games

The simplest early games were often two-dimensional: Pong, Tic-tac-toe, Pac-Man, and Raster were paradigmatic. Here the image worlds were abstract figures that moved upon a flat background screen. The figure, a ping-pong ball, an X or an O, or an abstract gobbler, moved on the screen. The player, using keys or a joystick, controlled the motions according to the scheme of the game. Note that this game world is an analog of a long history of writing technologies. Cuneiform was an early hard-technology writing process, employing a hard stylus, clay or pottery as the proto-screen for inscription, and a skilled scribe who made the inscriptions. Scribes had to follow the rules of the writing game to be intelligible and thus learned forms of embodiment. Almost as old are soft writing technologies, brushes or quills (add ink or paint), which produce images on usually soft screen analogues—papyrus, parchment, etc. By industrial times a two-handed approach appeared: the typewriter, with a keyboard making images upon paper. But note that in each case the writer takes action through a device to produce an image world on a screen-type tablet. The two-dimensional game worlds remain analogous to this history; their textual counterpart is of course word processing, which produces its text world on the screen, later transferred to printed form. On screen (or tablet or inscription surface), figures are typically located on stable, opaque backgrounds. Here the bodily motions are minimal. Eye–hand motion is focal. The player remains seated in a stationary relation to the game screen, and thus whole-body motion is minimized. Indeed, the earliest forms often involved monochromatic figures against a contrasting monochromatic and opaque background.

Before leaving this style of player-technology-game world, two features are worth noting. First, game capacities, for example the speeds of balls, gobblers, and the like, could be accelerated beyond human reaction times; thus skilled players could improve their relative eye–hand coordination times to a personal maximum. Second, through these reduced but specific eye–hand movements, expert players could develop new skills quite different from those of the general populace. The unpredicted outcome was a pre-skilling of rapid eye–hand motility that eventually became useful in non-game contexts such as laparoscopic surgery. Unintended effects accompany virtually any technology, and they may be destructive, simply unexpected, or, as in this case, surprisingly useful. Eye–hand coordination skills are, from the perspective of partial bodily movement, seemingly reduced bodily actions—but they are also more useful for the meticulous movements called for in microsurgery. Anecdotally, my auditory surgeon tells me he gave up actual surgery some years ago since younger—game-skilled—surgeons now perform better.

Two-dimensional games such as those described remained abstract in image and highly reduced from more full human–game interaction. Hybrid attempts to “jazz up” such games sometimes took the shape of adding colors to the earlier monochromatic game worlds and three-dimensional backgrounds (such as clouds or flat buildings). But a larger shift occurred with games taking on a new dimensionality through a different player-screen variant, that of through-the-screen imaging spatiality.

Shooters and Simulations

A marked shift in game worlds began once the figures and characters became more than cartoon-like and took on three-dimensional image characteristics. The games selected here are shooter and simulation games, of which there are hundreds. Again, the focus is upon the interrelational player–game world relationship, with special attention to embodiment skills. The first and most dramatic difference is that the game world appears through the screen rather than upon it. The screen becomes mostly transparent (back glare or smudging can occur as an irritant, of course), and the images move in a spatiality that, through the screen, is usually called cyberspace. The two variations now noted, screen opacity (2-D) and screen transparency (3-D), are multistable spatialities. Multistability occurs with most human–technology interrelations in that there are multiple potential uses and outcomes with corresponding shifts in spatialities. Such multistabilities are quite dramatic in that what counts as figure and ground changes. In on-screen spatiality the opaque screen is ground, stable and flat; but with the introduction of even partial three-dimensionality, the ground can become dynamic, with figures that are clearly mobile. With Wii-type games, both player and field become dynamic.

Beginning with shooter games, for example Duke Nukem, one of the longest-running such games, the player “shoots” whatever counts as an enemy. These games also introduce a multistable set of shooter POVs, or points of view. Close up, or embodied, the player simply sees a weapon immediately in front of him or her. Different weapons may be used, but all are “as if” held in the player’s hand. Or an avatar variant may be introduced, where some figure stands in for the player, holds the weapon, and, either walking or riding some vehicle, shoots the enemies. “Piggyback” is the medium-close variant: by this I mean that the avatar is immediately in front of the implied player, not “out there” in the field. Or the avatar may be farther away, in the field itself, though again the controller fires the weapon of choice. Thus, although these screen worlds introduce many more multistabilities than the simpler games, one’s bodily actions simultaneously become more complicated. However, the player remains relatively stationary, seated in front of the screen, and bodily action remains mostly eye–hand motility. Note that here attention remains upon embodiment and the development of player-world skills, not upon plots or narratives. In the case of Duke Nukem, both fairy tale–like and science fiction–like plots prevail. These games also frequently entail multiple deaths from which the player may restart, as well as the previously noted speedup, such that no player can keep up with the attackers. One wonders—no answer attempted here—how this re-dying and losing to the game affects the player’s psyche.

Turning to simulations, Flight Simulator is the game of choice for this second related category. In these games (which could be simulations of cars, motorcycles, or any number of vehicular machines), a variety of planes are offered, for example a Spad, Cessna, F-14, etc., each with programs determining capacities of flight. Note that this set of choices parallels that of weapons in the shooter games. Then, multiple POVs are also variants. One may be seated in the cockpit, the instrument panel and controls immediately before one and the scene outside viewed through the virtual cockpit window. Or, one may simply have the airplane fly at a distance while projecting an avatar pilot inside. In playing the game, it is possible to get lost and to exceed the capacity of the airplane—and thus crash into trees, buildings, and the like.

These games, in a parallel to the Nintendo surgery example above (the name given to game-skilled eye–hand surgery), have had impacts upon new technology uses. For example, highly complex and realistic simulators are used to test and train actual airline pilots. Programs are introduced for emergency situations unlikely ever to occur in actual flights, teaching pilots how to react to and deal with such emergencies. Actual pilots report genuine sweating and stress during such training. Or, in the contemporary world of drone warfare, the pilots are again often skilled gamesters, now become remote and long-distance; flight-control pilots work a continent away from the actual battlefield (or, for that matter, at even longer distances when controlling Martian space vehicles!). In short, the spatiality transformations possible in games and remote sensing expand the capacities of human–technology relations to mega-space.

Such virtual and remote control again shows phenomenologically how geometrical spatiality differs from experienced spatiality. The near-distance of remote control is a type of bodily, or bodily extended, embodiment. Far from being disembodiment, this is a distinctively embodied experience.

Learning Lab Denmark

LEGO has long been a major toy manufacturer in Denmark. One of its spin-offs was a gaming think tank called Learning Lab Denmark. Participants in the lab came from many disciplines and were frequently assigned experimental tasks that involved imagining possible games. During the 1990s and early 2000s, a major concern was that children were spending very large stretches of time stationary before screens. Indeed, recent surveys in the United States show that college students average some thirty-seven hours per week with either screens or music devices. Danes became concerned that the resultant lack of whole-body activity would lead to obesity, and thus LEGO launched a study to imagine modes of engaging more complete bodily motility in electronic games. (I was a frequent participant for over a decade.) Workshops included analyses by neurologists, gym teachers, physiologists, and philosophers. Simple ideas such as stationary bicycle races tied to simulation games, 3-D ball games, and the like were imagined. What became known as Wii games were invented independently.

In later meetings of the lab some of these games were demonstrated and, indeed, they called for much greater degrees of bodily motion. For example, players had to stand inside a designated space, and various imaged 3-D balls were fired at the players, who, in turn, had to hit them back. A certain irony emerged among the players: the adults seemed to enter quite fully into the action and used their arms and legs, soccer-like, to return the balls. But the younger children—probably more accustomed to the mini-movements of screen games—quickly realized that a small, bare-handed swat was sufficient to return the ball, so they tended toward much more minimal motions. Embodiment here does increase beyond the eye–hand, but only incrementally. Also, if a player steps out of the designated space, he or she effectively leaves the game. What we are seeing here is both a change in limited, partial bodily motions and a move toward greater whole-body motion.

During this same time period, virtual reality caves were also popular. Here, either with goggles or with multi-beamed location sensors carefully tracking the player, and again within a delimited space, bodily action was required to discover keys to hidden chests or to perform other in-game actions. These were, in effect, in-game variants of Dungeons and Dragons and yet another way of enticing whole-body motion.

Pterosaurs

Opened in 2014, an elaborate exhibit at the American Museum of Natural History presents pterosaurs, the many species of extinct flying reptiles of the age of the dinosaurs. There were roughly 186 genera of pterosaurs, the largest with a thirty-foot wingspan and the smallest the size of a small bird. The exhibit includes two pterosaur games in which one can “fly” two different kinds of pterosaurs. The first is set in a sea area on a fairly large animal. Similar to Wii gameplay, the player must stand in a specified area and then, with his or her body, mimic flight and pterosaur action. Flapping one’s arms makes the pterosaur take flight, tilting makes it turn, gliding down to catch a fish splashes it into the sea, and flapping again takes it off once more. (As a player, this is the closest I have ever come to an experience of embodied flight!) The second pterosaur is a forest dweller, much smaller, and the game’s task is to catch flying insects. Again standing in the prescribed position and mimicking flight motions, one must avoid smashing into trees and the like. This game is much harder to embody, and failures are much more likely than in the first. It becomes obvious to the player that considerable learning time would be needed before a high success rate could be attained. It should be noted that the projected 3-D image is not screen-bound but is much closer to a virtual reality projection. Such game worlds remain limited, and if the player either tries to escape the game world or steps out of the designated space, the game crashes. These constraints relate to embodiment, since game embodiment is distinctly different from ordinary embodiment. Similarly, game embodiment entails what all aesthetic experience calls for: a suspension of disbelief.

Embodiment Trajectories

What this progression shows—from the simplest 2-D games to the most complex, more fully embodied Wii-style games—is that each game world is related to player embodiment skills. And, as noted, each skill context can reenter an ordinary lifeworld as a newly practicable skill, from eye–hand or Nintendo surgery to distant and remote sensing and robotics manipulations. This is a pattern that could be described as game-to-new-mode-of-life. Games in the contemporary world have tended to take up more and more “real” concerns. For example, the Information Technology University in Denmark has some twenty faculty members whose academic roles relate to gaming. Of course, this is a reflection upon what is today a multibillion-dollar industry. Similarly, a robotics company in Nara, Japan, hired two hundred “roboticist PhDs” in 2008, another multibillion-dollar endeavor. Game embodiment, however, clearly retains a human–technology interrelation. From this interrelation there can be a flow into new lifeworlds that are limited only by the constraints of imagination.

