CyberEdge Journal

Thursday, March 17, 2005

On the Game Developers Conference

GDC 2005: An Educator’s View
By Jeffrey R. Abouaf

March 2005, San Francisco --- Last week I attended my tenth Game Developers Conference, this year held at San Francisco’s Moscone Center West. Still a conference by and for game developers, it has become strikingly sophisticated in a short time. The Visual Arts Track (where I spend most of my time) has advanced from generic and specialized techniques for making lightweight models, textures, and animation to translating high-end film assets and effects to a next-generation interactive medium with negligible change in quality. This raises fundamental issues across the design and production spectrum.

The Faculty Summit

Electronic Arts conducted a Faculty Summit on the first day of the Conference at its Redwood City facility. Attending were faculty from many colleges and universities, some of whom were partners with EA in their game development curricula; others had created such programs or were in the process of doing so. EA’s candid presentation about what it is doing to prepare for the next seven or so years, together with faculty descriptions of their programs, goals, and challenges, offered a surprisingly coherent picture of what to expect. Both sides likened the game industry to the film industry of the 1930s. The “Next Gen” consoles will appear within a year, offering a 5-10x increase in computing power; the generation after that is expected five years later, and by 2012 the following hardware generation will be out. Developers will be producing titles inhabited by photoreal interactive characters, at HD resolution, using movie special effects on consoles with no noticeable power limits (at least in today’s terms). EA co-founder Bing Gordon remarked that what EA seeks most from aspiring designers and developers are proposals for new features for existing games, not new game concepts. With revenues of about a quarter of the industry’s gross, EA wants to build the next blockbuster title (often inspired by feature films) and sequels for its current lineup. With average budgets of $10 million and in-house development teams of 200, EA resembles the film studio model of Hollywood’s Golden Age.

EA is concerned with growing talent for Next Gen and Next Gen2 games. By 2012 it anticipates all of its current assets and technologies will be replaced. To that end it has partnered with many universities and colleges, and is actively recruiting graduates. For example, EA is involved with Carnegie Mellon’s recent MET program (Master of Entertainment Technology, presented by faculty head Randy Pausch), USC’s schools of film and television (where EA hopes to recruit new writers to interactive entertainment), and the University of Central Florida (where EA helped convince the state to fund $5M for a new game curriculum and facility, and has pledged ongoing funding). The more than 100 representatives and faculty present either have or are implementing their own game curricula. Their main concerns were (1) how to leverage their current resources from computer science and entertainment disciplines into a new program, and (2) how to guarantee that new courses or degree programs in game development will match the quality of their offerings in other fields.

Bandwidth, Budgets, and Believability

Not surprisingly, this large developer-institution collaboration dovetails nicely with the Microsoft keynote, which described a next-generation Xbox aimed at HD interactivity. Likewise consistent was the session titled “The Negotiation”, a quick presentation of a hypothetical deal between a large developer and a major publisher/licensing firm. Again, the example was a $10M budget and a 10-month development period. The surprise is always how little remains for unfunded, however creative, talent, and how this result is explained by fixed expenses and general risk-reward analysis. What emerges from all this is that Next Gen titles, while bigger, badder, more photoreal and complex, require enormous budgets, an army of talent, and, consistent with the movie industry they are emulating, funding from deep pockets. Willingness to take risks shrinks as budgets grow -- only a handful of titles succeed.

Of course the art-related presentations focused on advanced techniques and production issues facing those working on “Next-Gen” titles. As the computing power of platforms increases and graphical limitations fall away, new issues arise: how do you portray real-time, photoreal, interactive characters at HD resolution? How do you populate these spaces with enough hi-res content? One obvious answer is to generate characters procedurally. How does this affect the artist? PS1 titles restricted polygon budgets to 800-1,200 per main character, and the PS2 raised the bar to 5K-8K polygons. I heard one artist on a Next Gen title comment that his character budget hadn’t increased that much, but now he has eight characters in the scene at all times. It’s all speculation, as the hardware isn’t even out yet.

The technique classes followed what you’d expect from an advanced 3D seminar in film production: edge-loop modeling to ensure realistic skin deformation and believable animated nuance (Derek Elliot); advanced rigging to portray accurate inter-bone influences in skeletal movement (Paul Neal); scripting techniques that facilitate animation or even generate entire rigs and characters; making high-definition digital sculpture and reducing it to normal maps for added detail; authoring and deploying DirectX 9 shader technology within your 3D creation tool; and a favorite, Tips and Tricks by Kelsey Previtt, which always pops my eyes. In other words, bringing film techniques to real-time, but with all those added problems -- oops, “challenges”. For the working artist: learn human anatomy like you never learned it, and draw, draw, draw.
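To illustrate the normal-map idea mentioned above: detail from a high-resolution sculpt is baked into a texture of per-texel surface normals, which a shader then applies to a low-poly model. The real pipeline bakes against the high-poly mesh itself; as a minimal sketch (pure Python, function name and parameters are my own), here is the simpler variant that derives a tangent-space normal map from a height field:

```python
import math

def height_to_normal_map(height, strength=1.0):
    """Convert a 2D height field (list of lists of floats in [0, 1])
    into per-texel tangent-space normals encoded as RGB in [0, 255]."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Finite differences approximate the surface slope;
            # indices are clamped at the texture borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # The normal of the surface z = f(x, y) is (-df/dx, -df/dy, 1), normalized.
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            nx, ny, nz = -dx / length, -dy / length, 1.0 / length
            # Remap [-1, 1] -> [0, 255] for storage in an RGB texture.
            row.append(tuple(int(round((n * 0.5 + 0.5) * 255)) for n in (nx, ny, nz)))
        normals.append(row)
    return normals

# A flat height field yields the "neutral" normal-map blue (128, 128, 255).
flat = height_to_normal_map([[0.0, 0.0], [0.0, 0.0]])
```

The shader perturbs the low-poly model’s lighting with these stored normals, which is why an 800-polygon character can read as if it were sculpted from millions.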

Ed Hooks put his finger on the problem in one hour on “You can’t MoCap the Soul – the problem with Eyes”. This subject, which easily could have spanned the entire day, raised the question of what happens as games include photoreal human characters. (I’d heard the EA art and technical directors describe this as moving “beyond photorealism” to “believability”.) Hooks showed me that what sounds so obvious is much more complicated. He is the author of “Acting for Animators” (a must for the library of anyone serious in this area), an actor and teacher for 30 years, and in recent years a consultant to major game companies. Personally unfamiliar with basic acting theory, I was struck by his analysis. For a character to connect with the audience, it must evoke empathy, meaning we understand the feelings -- as opposed to sympathy, where we may feel badly for the plight but not feel the character’s experience. He then stated that in acting, “thinking leads to conclusions but feelings lead to action.” That is, we experience the character’s feelings through how he or she acts. Not so with games: the character looks, then acts -- thinking leads to action. We can feel sympathy, not empathy. Without empathy, suspension of disbelief ceases. He also suggested player control is a factor: if our thoughts control the action (and not character feelings), we don’t empathize. This is exacerbated as characters become photoreal, because our “primate brain” shifts from fantasy-based to reality-based interpretation and reaction. The greater the realism, the more precise and less forgiving our scrutiny.

Solutions? Where independent feelings can’t drive the character, consider evoking empathy through other characters or environmental cues. Create MoCap files using actors working from a script; after all, they are trained to portray feelings through action. For example, consider the difference between telling someone in a MoCap suit to get out of a chair quickly versus telling an actor that the chair they are sitting in has just been set on fire.

Valve’s presentation on Half-Life 2 (which won this year’s IGDA award) fit nicely with this thread. HL2 characters respond to you by looking at you, no matter how you enter a room or what you do. Eye contact, head, body, and limb gestures, and interactions with other characters are very sophisticated, based on statistical data on human gestures, gestures unique to each character, and unique relationships between characters. HL2 animation breaks down into three components: generic animation implemented through AI, such as phonemes and general facial expressions; scripted gestures and body-language movements unique to a particular character, but reusable; and artist-keyframed animation, used for example to portray unique relationships between characters. The HL2 engine can blend this AI, scripted, and keyframed animation non-linearly at runtime in a truly seamless way. Watching the same scene played three different ways, we experienced the characters looking and talking to us, moving in their unique ways, and embracing each other -- each time holding together as if uniquely keyframed, yet equally spontaneous in each instance. I found their multi-layered technological answer to Ed Hooks’s intuitive, empirical observations inspiring, to say the least.
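The layered blending described above can be pictured with a toy sketch. Valve’s actual system is far more elaborate (real engines blend quaternion rotations, not single angles, and drive weights from AI state), but under those simplifying assumptions -- all names here are my own, not Valve’s API -- layering AI, scripted, and keyframed channels looks roughly like this:

```python
def blend_layers(base_pose, layers):
    """Blend animation layers over a base pose.

    base_pose: dict mapping joint name -> rotation angle (degrees).
    layers: list of (pose, weight) pairs applied in order, e.g. an
            AI-driven gaze layer, a scripted gesture layer, and a
            hand-keyed override. Each layer specifies only the joints
            it touches; weight 1.0 replaces, fractions blend.
    """
    result = dict(base_pose)
    for pose, weight in layers:
        for joint, angle in pose.items():
            current = result.get(joint, 0.0)
            # Linear interpolation from the current value toward the layer's target.
            result[joint] = current + (angle - current) * weight
    return result

# An AI "look-at" layer turns the head at half weight, while a scripted
# gesture fully takes over the arm -- both composited over the same base.
pose = blend_layers(
    {"head_yaw": 0.0, "arm_pitch": 0.0},
    [({"head_yaw": 30.0}, 0.5),    # AI gaze layer, half weight
     ({"arm_pitch": 80.0}, 1.0)],  # scripted gesture, full weight
)
# pose["head_yaw"] == 15.0, pose["arm_pitch"] == 80.0
```

Because each layer only claims the joints it needs, the same scene can play out differently every time -- the gaze layer tracks wherever the player stands while the scripted and keyframed layers carry on unchanged, which matches the spontaneity Valve demonstrated.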


The “Burning Down the House: Game Developers Rant” session offered acid counterpoint to the EA summit and other sessions about mega-title production. As articulate and passionate as they are well known in the community, Greg Costikyan, Chris Hecker, Brenda Laurel, and Warren Spector entertained by ranting on the current state of the industry. Like all blockbuster media, mega-games take an army of great talent to produce and distribute, but unlike other media, they are distributed through a single channel -- retail (derisively referred to as Wal-Mart). Greg Costikyan ranted that such titles can only be produced by or through large corporations and distributed by like heavyweights, with the prime beneficiaries including the hardware manufacturers Microsoft and Sony. This leaves little creative freedom and opportunity for the independent developer, and perhaps less for the artist in their employ. Then Brenda Laurel raised the social issue (the only time I heard it raised during the conference): what role models are set out for the primary audience, young men? Professional athlete. Soldier. Gangster/street thug. Wizard -- maybe that one’s positive. I’m torn -- and I wish more were -- over the implications here. First and foremost I believe in protecting free expression as the bedrock of a free society, and am suspicious of any forces, past or present, at work to skew, limit, or otherwise chill it. Yet we see how powerful forces manipulate media, and how media complies, to move this society toward a scripted agenda. This industry is no more or less complicit. Calling it “entertainment” is not a complete defense to this responsibility, and putting a cute rating tag on the box evokes the same chuckle as it does when I see it on a movie. One answer, of course, is more content diversity, which at least in the near term is not part of the current commercial direction.

But then nothing stands still. San Francisco has a long tradition of bohemian artistic energy, in my lifetime from the Beats through Multimedia Gulch to today’s independent game developer. The IGDA and the Independent Games Festival stood in for San Francisco’s customary independent (anarchic) creative energy, and I saw a lot of work unique in artistry and gameplay. As an artist, I enjoy the blockbuster and the independent, and marvel at both the grand and the intimate. But as an educator I feel a responsibility to open doors, with a protective conscience to remain mindful of the costs, even when my student is not. This differs from my thoughts only a few years ago, when artist-developers were more empowered in a less mature industry. The maverick is being replaced by a professional -- more technically skilled, better suited to functioning in a big organization, and better equipped to invent new features. To one concerned with readying today’s students for a coveted career in game development, encouraging a young person to incur $40-100K in loans to land a $40K job (if they’re lucky) has a sobering ring.

I hope next year to attend the Serious Games Summit, an aspect of the conference dedicated to game technologies in the training space. Unfortunately those sessions were held the first two days of the conference and conflicted with others I needed to attend -- which makes the case for organizers to record sessions and make them available. The obvious and immediate benefits of serious games, together with the industrial funding that assures their deployment, make expansion of this portion of the conference a “no-brainer”. That said, my sincerest thanks to all connected with organizing this conference and bringing it to San Francisco, and to my contact, Sibel Sunar, for her role over the years in presiding over more than anyone can take in, yet keeping it running so smoothly.