The Game Developer's Conference.
About ten days ago, I attended the 2002 Game Developer's
Conference, the largest gathering of computer game creators
in the world. Held in the San Jose Convention Center, a
sprawling Mecca of hotels and conference facilities in the
heart of Silicon Valley, the GDC hosted thousands of producers,
designers, artists and programmers from hundreds of game,
software and hardware companies, all swept up into a lively
hurricane of networking, promotion, research and learning.
The Allure of Games.
The conference draws these crowds each year because
the allure of games runs deep. Before computer games
became commercial, they were a secret vice: programmers
wrote games like Adventure and Space War for each other
in their spare time (and had to steal cycles on machines
dedicated for other purposes to do it). But then Syzygy's
Computer Space hit the market in 1971, later spawning
Pong and Atari; and since then, games have matured into
a full-fledged industry, with tens of thousands of dedicated
game developers producing games for millions of gamers
(most of whom own hardware dedicated for nothing but games).
Games are a multibillion dollar industry - over $6 billion in
2001 and still growing. Ignore for a moment the
legions of distributors unloading battalions of CDs
and cartridges on the shores of CompUSA and WalMart.
Behind the front lines, a huge infrastructure has sprung up
to support the builders of games. Industry giants like Intel
and Microsoft, desperate to ensure that the next generation
of games are destined for their chips and operating systems,
courted game developers vigorously with show booths the
size of houses filled to the brim with tools and incentives.
Surrounding these landmarks was a sea of smaller companies,
trying to make their mark with the latest solutions for
building virtual worlds
(Lithtech's Jupiter game engine)
or populating virtual worlds
(Biovirtual's 3DMeNow character animation)
or making the virtual populace dance
(Vicon's motion capture solutions)
or painting virtual tattoos on the virtual populace's faces
(Right Hemisphere's Deep Paint)
or, toughest of all, keeping track of the tens of
thousands of pieces of data that define your
tattooed populace dancing over your virtual landscape
(NxN's AlienBrain content management system).
(Note that simply being mentioned above does
not constitute an endorsement; a statement like
"Fractal Design Painter 3.1 Rules!" would constitute
an endorsement. On that note,
procreate Painter 7 Rules!)
But it is more than just big business: games are a
fundamentally creative industry. While movies
are still a bigger industry, worldwide game sales
have surpassed American box office receipts -
a large enough chunk to notice, and still growing.
Media giants like Sony get it: one of the conference's
chief sponsors, it inundated attendees with t-shirts and
talks and beer mugs promoting its plans for its game
consoles, present and future. But the game phenomenon
has strong grass roots: like Hollywood, where you can't
catch a cab without meeting a driver working on his
script, the GDC was filled with young aspiring developers
either hatching their own game idea or working with
a team out of a friend's basement, scouring the floors
for inexpensive development tools and new ways to
deliver their games to a wide audience.
The game industry courts these new teams and new
developers vigorously. Premiere visual effects
house Alias|Wavefront released a free personal
version of its flagship Maya animation package,
desperate to crack the market held largely by
Discreet's 3D Studio Max; meanwhile, one
game engine creator
is willing to discuss bargain pricing for new studios
using its engine, whereas Epic offers a
free version of its Unreal
engine and level editor with every copy of the game.
The conference in many ways
was geared to young professionals trying to break
into the field: several talks and roundtables
focused on becoming a game professional, and
numerous booths and private rooms were set up
to help prospective employees and employers
mix and mingle.
Creativity, AI, and Software Development.
The conference also drew a crowd that you
might not expect - authors, artists and academics
of all descriptions. Bruce Sterling
of Mirrorshades fame
canceled his talk on the underground of games, but
Scott McCloud engaged "The Sims" designer Will Wright
in a battle of dueling laptops. The creator of a
celebrated mathematical "game," a longtime favorite
of the field, gave a keynote talk on
mathematical approaches to games.
In that vein, the
International Game Developers Association has
launched an effort designed to improve the relationship
between academia and games, and one of the conference's
chief sponsors was the entertainment-focused
DigiPen Institute of Technology.
It was the conference's
appeal to all creative
disciplines that was of greatest interest to me. Readers
of the Library probably know that I like to create,
like to build software,
and like to design minds.
Computer games combine all three: a happy nexus of creativity,
software engineering and artificial intelligence - and
I found the Game Developer's Conference an ideal
place to expand my mind on all of these topics.
The conference hosted a dozen sessions on games
and artificial intelligence in several different
"tracks" - AI roundtables, AI in strategy games,
tutorials on AI techniques. Most of the sessions
were repeated, but unlike similar sessions at the
conference, which were held more than once
simply for scheduling convenience, the AI
sessions seemed to burst their bounds, with
topics spilling over from one session to the next
and finally culminating in an AI Programmers' Dinner.
During this dinner, in which designers of games like
Starcraft bantered with Ph.D.s in artificial intelligence,
a few key concepts emerged. The most important
idea was that the best solution is the most effective
one. While that sounds obvious, the artificial
intelligence community is engaged in a very
long range project, and researchers naturally
organize into schools of thought each pursuing
their own path to the prize. This competition leads
to vigorous arguments between schools, usually
between champions of "good old fashioned AI"
(such as yours truly),
usually seen as symbolic, serial, and programmatic,
and champions of competing schools, who
distinguish themselves by being neural, parallel,
dynamic - pick your false dichotomy of choice.
At the GDC, these kinds of debates were refreshingly absent;
the debate focused on what techniques work and
how to effectively use them. Finite state automata
and fuzzy state machines ruled the day, though
decision trees and other more complex control
structures were beginning to be discussed.
Technology for pathfinding based on the
classic A* algorithm has become so refined that many
attendees felt it needed no further discussion; one key
advance which made this technology even
more powerful was terrain analysis to break
a large and complex map into a simpler
network of locations.
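The idea is easy to sketch in a few lines. Below is a minimal A* search over a hypothetical location network of the sort terrain analysis might produce; the node names, coordinates and edge costs are all invented for illustration.

```python
import heapq
import math

# A hypothetical location network of the kind terrain analysis might
# produce: nodes are named map regions, edges carry traversal costs,
# and each node has map coordinates used by the heuristic.
COORDS = {"base": (0, 0), "bridge": (2, 1), "hill": (2, -1),
          "woods": (4, 1), "depot": (5, 0)}
EDGES = {"base": [("bridge", 2.5), ("hill", 2.5)],
         "bridge": [("woods", 2.0)],
         "hill": [("depot", 4.0)],
         "woods": [("depot", 1.5)],
         "depot": []}

def heuristic(a, b):
    """Straight-line distance: admissible since no edge costs less."""
    (ax, ay), (bx, by) = COORDS[a], COORDS[b]
    return math.hypot(ax - bx, ay - by)

def astar(start, goal):
    """Classic A*: expand the node with lowest cost-so-far + heuristic."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {}  # cheapest known cost to each node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if best.get(node, float("inf")) <= g:
            continue
        best[node] = g
        for neighbor, cost in EDGES[node]:
            heapq.heappush(frontier,
                           (g + cost + heuristic(neighbor, goal),
                            g + cost, neighbor, path + [neighbor]))
    return None

print(astar("base", "depot"))  # -> ['base', 'bridge', 'woods', 'depot']
```

Searching a handful of labeled regions instead of thousands of raw tiles is exactly what makes the terrain-analysis refinement pay off.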
Game AI programmers made these basic
control techniques powerful using several simple
methods. Hierarchical controllers enable simple
agents to organize into squads and groups.
"Idle" animation sequences create the appearance
that agents have an inner mental life; combining
the idle animation with simulated perception can
create a powerful illusion of intelligence if,
for example, a guard doesn't see you until he
idly glances in your direction.
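That guard can be modeled with a tiny state machine. The sketch below is an invented illustration, not any shipping engine's code: perception is only checked along the direction the idle-glance animation currently faces, so "noticing" the intruder falls out of the animation schedule.

```python
# Hypothetical guard: a two-state machine (PATROL -> ALERT) whose
# vision is only effective in the direction its idle-glance animation
# currently faces, creating the illusion of noticing the intruder.
PATROL, ALERT = "PATROL", "ALERT"

class Guard:
    def __init__(self, glance_schedule):
        self.state = PATROL
        self.facing = glance_schedule[0]
        self.glances = glance_schedule  # facing direction per tick
        self.tick_count = 0

    def can_see(self, intruder_side, intruder_distance):
        # Simulated perception: the guard only sees what it faces.
        return self.facing == intruder_side and intruder_distance < 10

    def tick(self, intruder_side, intruder_distance):
        # Advance the idle animation, then test perception.
        self.facing = self.glances[self.tick_count % len(self.glances)]
        self.tick_count += 1
        if self.state == PATROL and self.can_see(intruder_side,
                                                 intruder_distance):
            self.state = ALERT
        return self.state

guard = Guard(glance_schedule=["left", "left", "right"])
# The intruder lurks nearby on the right; the guard stays calm until
# its idle glance happens to swing that way.
states = [guard.tick("right", 5) for _ in range(3)]
print(states)  # -> ['PATROL', 'PATROL', 'ALERT']
```

The intelligence here is almost entirely theatrical, which is the point: the player reads intent into the animation.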
The idea which struck me the most was the power of
simulated perception working hand in hand
with both human and computer terrain
analysis. Level designers are in a unique
position to label significant features like
traversable terrain and resupply points as
a level is built. The game engine can then
preprocess these levels, extracting
connectivity, visibility and choke points
directly from the map structure. With this
rich set of features at hand, it is relatively
easy to build a game AI capable of very
complex behaviors simply by triggering
off the intelligence embedded in the map.
For example, with visibility, connectivity
and resupply information, an AI engine
can pick a
good sniping spot,
or, to take a more famous example, with enough
smarts mapped into the environment a Sim
"knows" to hit the fridge when hungry and
go to bed when sleepy,
even though the Sims'
basic programming knows nothing about fridges
and beds - only about what the Sims'
environment affords to them.
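This style of design is easy to sketch. In the toy model below (all object names and advertisement scores are made up, not The Sims' actual data), objects advertise which needs they satisfy and the agent simply chases its most pressing need; the agent's code never mentions fridges or beds.

```python
# Smart-terrain sketch: intelligence lives in the objects' advertised
# affordances, not in the agent. All names and scores are invented.
OBJECTS = {
    "fridge": {"hunger": 8},
    "bed": {"energy": 9},
    "tv": {"fun": 5},
}

def choose_action(needs):
    """Pick the object whose advertisement best serves the lowest need."""
    pressing = min(needs, key=needs.get)  # e.g. 'hunger' if starving
    candidates = {name: ads[pressing]
                  for name, ads in OBJECTS.items() if pressing in ads}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(choose_action({"hunger": 2, "energy": 7, "fun": 6}))  # -> fridge
print(choose_action({"hunger": 8, "energy": 1, "fun": 6}))  # -> bed
```

Adding a new object to the world requires no change to the agent at all, only a new advertisement - which is what makes the approach so extensible.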
The creative sessions were similarly rewarding.
After a few lively roundtables in the Hilton and
a few circuits of the glittering Expo floor, I left
the white walls of the Convention Center and
skipped over the streetcar line to the Spanish
Mission-styled Civic Auditorium, where Will Wright,
designer of the Sims, and Scott McCloud, author of
Understanding Comics, whipped out their laptops
and had a friendly duel over the nature
of time in movies, games and comics.
Movies are dynamic, existing only in time and requiring
viewers to devote a block of time to appreciate
them; books and comics are "frozen" in time,
granting readers far more freedom to appreciate
them over any number of sittings or in any order.
Games, in contrast, are in the middle: like movies
they are dynamic and must be experienced in
time, but like books they can be paused and
resumed and replayed in many different ways.
Unlike both, games are fundamentally nonlinear and
interactive: a game's stories and visuals are
created collaboratively between the designer
and the gamer through the game engine.
Even in a game with a linear plot like Half-Life, a
gamer must be an active participant, making thousands
of minute choices which ensure that no
two experiences of the game are exactly the same.
Unfortunately, the unique nature of each gaming
experience makes testing games a bear. When a
problem occurs, replicating the error can be difficult
- just one more problem which makes game
development a particularly challenging software
development problem. This is particularly rough
for complex systems like game AI - the recent book
AI Game Programming Wisdom devotes
an entire chapter to tools for debugging game AI.
As another example, an
ideological debate erupted in the AI Programming
roundtables when a visiting professor questioned
the table about the role of machine learning; one
seasoned developer (himself a veteran of a
commercial AI company before his life in games)
rejected the use of machine learning on the grounds
that if your game engine learns from each
session, it becomes a fundamentally nondeterministic
software artifact that cannot be reliably tested
(or, more colorfully, "your QA department will kill you").
After some debate the room came to a consensus:
no one wanted unpredictable game engines, but everyone
wanted game characters that appeared to learn,
regardless of whether the characters used machine
learning techniques or not.
These kinds of discussions and debates illustrate
to me why games are important for computer
science. Clearly games are important for their
economic impact as a sizeable industry and for
their social impact as a new literary medium.
But for computer scientists, games are a
challenging testbed for computing advances.
The rigors of game development schedules
and the requirements for a polished gameplay
experience put the cherished technologies
and methodologies of computer graphics,
artificial intelligence and software engineering
to the test, and researchers should pay close
attention to how effective these technologies
are in practice.
Computer graphics researchers
already know this well, but it is my impression
that software engineers and artificial intelligence
researchers have a lot to learn from the game
industry. AI researchers need to hear that
for the game environment complex control
structures and deep machine learning algorithms
are not as effective as simple state machines
operating over rich perception and environment
maps. Software engineers need to hear the
lively debates over team size, software
methodologies, and coding standards, hearing
the experiences of coders on both sides of
each debate - and measuring which groups tend
to come in on time and under budget. (It amused
me to no end to sit down in a random chair
at a software engineering session - only to find
myself in the middle of the only other extreme
programming practitioners in the room.)
Games are a tremendously rewarding field
for both creative artists and computer scientists.
Games are a new medium: my experiences playing
Myst, Half-Life or Elite Force stick with me as
vividly as any in a book or movie. Games are a
challenge for computer science: the techniques
used to port Zork to microcomputers, or to run
Quake in real time, or to have your foes sneak
up behind you in Unreal, are as technically
brilliant and as worthy of study as anything
done in any other area of computer science.
For those involved in it, game development
can be frightfully challenging, but immensely
rewarding. Each year I have gone to the
Game Developer's Conference I have learned
a great deal, and not just about how to
make games - about how to build AI systems,
about how to engineer software, about
how to be a creator.
I'll be back next year. I encourage you all
to do the same.
See you in 15.
- The Centaur