August 3, 2011

Core Functionality of Counter-Strike: Source's Bots

While I was attending Collins College, I had a class in Artificial Intelligence and Game Design. For a group project, I interviewed Michael Booth. Yes, the Michael Booth: then-CEO of Turtle Rock Studios, developer of the Counter-Strike bots, and the main brain behind the phenomenal Left 4 Dead. This was very exciting for me. Read on for our exchange.


Here's what I emailed him about:

I am a student at Collins College in Tempe, AZ. I am currently taking a course on AI Programming. Recently we discussed various forms of AI navigation, such as A*, breadth-first, etc., and it made me think of the CS Bots developed by Turtle Rock. Since we have a research paper on AI coming up, I was wondering if I could get more information on the CS Bots. Would it be possible for me to get more info such as what form(s) of AI they use, how they navigate the environment, or maybe even pseudocode? Please let me know.

And here was his reply:

Hi Owen,

Take a look at the presentation I gave at the 2004 Game Developer's Conference about the technology behind the CS:Condition Zero bots here: http://www.gdconf.com/conference/2004.htm

Search for "booth" on the page and you'll find a link to download my PowerPoint slides and video clips from the talk. Play it as a slideshow - the videos play in the various slide pages That should hopefully answer many of your questions.

Michael Booth

CEO

Turtle Rock Studios

Needless to say I screamed like a fangirl. I was expecting an automated response and nothing else, not the man himself. You can follow that link and see the PowerPoints for yourself. They're quite informative. Anyway, I had some more questions.

How are the navigation meshes generated? How do the meshes know when they've hit a wall? Does the game send a 'ghost' of a player around and compare its x,y,z value?
The Source engine allows you to "trace a hull" in space and determine if it collides with anything, and the details of the collision. It's like taking a 3D box and moving it in a straight line through space and seeing what it bumps into. I pick a random player spawn point and trace a player-sized hull there. If there are no collisions, that's a valid spot for a player to be, and I store that spot. I then basically do a breadth-first search out from there, stepping left/right/forward/backward, tracing hulls, storing the valid spots, and remembering the steps I took between valid spots. Once I've exhausted the search space, I go back and use a greedy algorithm to stitch all of these spots together into a quilt of connected axis-aligned rectangular areas of "walkable space" - the navigation mesh.
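To make that concrete, here's a rough C++ sketch of the breadth-first flood fill he describes. TraceHull() is a stub standing in for the Source engine's real collision query, the grid step is my guess rather than Turtle Rock's actual value, and the greedy rectangle-stitching pass is omitted:

```cpp
// Minimal sketch of breadth-first walkable-spot sampling (assumed details).
#include <queue>
#include <set>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

const float kGridStep = 25.0f;  // sampling step in game units (assumed)

// Stub: returns true if a player-sized box fits at 'pos' without colliding.
// Replace with the engine's actual hull trace.
bool TraceHull(const Vec3& pos) { return true; }

// Flood-fill outward from a spawn point, collecting valid walkable spots.
std::vector<Vec3> SampleWalkableSpots(const Vec3& spawn) {
    std::vector<Vec3> spots;
    std::set<std::pair<int, int>> visited;  // grid cells already sampled
    std::queue<Vec3> frontier;
    frontier.push(spawn);
    while (!frontier.empty()) {
        Vec3 p = frontier.front(); frontier.pop();
        auto key = std::make_pair(int(p.x / kGridStep), int(p.y / kGridStep));
        if (!visited.insert(key).second) continue;  // already checked
        if (!TraceHull(p)) continue;                // blocked: not walkable
        spots.push_back(p);
        // Step left/right/forward/backward on the sampling grid.
        frontier.push({p.x + kGridStep, p.y, p.z});
        frontier.push({p.x - kGridStep, p.y, p.z});
        frontier.push({p.x, p.y + kGridStep, p.z});
        frontier.push({p.x, p.y - kGridStep, p.z});
    }
    return spots;
}
```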

How do the nav meshes decide an area is suitable for camping, sniping, etc?
Similar to "tracing hulls", in the Source engine you can also "trace lines" - basically trace the path of a ray through space and see what it hits. For sniping spots, I trace a handful of rays from each hiding spot and if any of them go a "long way" before hitting anything, it's a sniping spot. Hiding spots are corners of navigation areas that have walls/tall obstacles on at least two sides. Simple heuristics like this add up to create complex and believable behaviors.

How does a nav mesh decide it can be connected to another by walking or jumping? Is it distance-dependent?
I know the maximum height change a player can jump up or fall down. Any height change outside of these limits is not a valid link between walkable spots.
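In code, that test is just a pair of limits. The values below are illustrative only, not the actual CS constants:

```cpp
// Sketch of the height test for linking adjacent walkable spots: a player
// can jump up a small ledge but can fall a much larger distance.
const float kMaxJumpHeight = 58.0f;   // max climbable rise (assumed)
const float kMaxDropHeight = 200.0f;  // max survivable drop (assumed)

// Can a player travel from a spot at height 'fromZ' to one at 'toZ'?
bool CanTraverseHeightChange(float fromZ, float toZ) {
    float delta = toZ - fromZ;
    if (delta > 0.0f) return delta <= kMaxJumpHeight;  // going up
    return -delta <= kMaxDropHeight;                   // falling down
}
```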

I noticed in some of those pictures of de_dust's nav meshes that they don't all butt up against the wall. Are these pictures just an example? Can the bots travel a little outside of a nav mesh, like traveling outside of a thick waypoint's path?
That is sampling error. Since I sample walkable points on a grid, there will be some amount of error around the edges. However, my bots are never "on rails" but instead try to *track* a path they have created via the navigation mesh. This may mean they veer off of it a bit and have some "slop" in their motion, which is far more realistic, but also requires another layer of low-level near-obstacle avoidance (notice the little rays on the left and right of the bot's feet as they track a path? If one hits something, the bot moves away toward the other side), as well as checking for "frustration" (becoming stuck) and repathing.
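Here's a hedged sketch of that feeler-ray avoidance and frustration check. Every name and threshold here is mine, invented for illustration, not pulled from the actual bot code:

```cpp
// Sketch of low-level path tracking: side feeler rays nudge the bot away
// from near obstacles, and a "frustration" timer triggers a repath when
// the bot stops making progress toward its goal.
struct Vec3 { float x, y, z; };

// Stub: distance a short feeler ray travels before hitting something.
float TraceFeeler(const Vec3& origin, const Vec3& dir) { return 100.0f; }

struct Bot {
    Vec3  pos, left, right;    // position and side feeler directions
    float frustration = 0.0f;  // seconds spent without progress

    // Returns a lateral steering correction: positive = steer right.
    float AvoidNearObstacles() {
        const float kFeelerLength = 30.0f;  // assumed, in game units
        if (TraceFeeler(pos, left)  < kFeelerLength) return +1.0f;  // wall left
        if (TraceFeeler(pos, right) < kFeelerLength) return -1.0f;  // wall right
        return 0.0f;  // nothing nearby: keep tracking the path
    }

    // Called each tick with the distance moved toward the current goal.
    bool ShouldRepath(float progress, float dt) {
        const float kStuckTime = 2.0f;  // assumed seconds before giving up
        frustration = (progress > 0.1f) ? 0.0f : frustration + dt;
        return frustration > kStuckTime;  // stuck: rebuild the path
    }
};
```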

Thank you again for your time.
Good luck dude. Send me your paper when you've finished it. ;)

So I guess that goes to show it really doesn't hurt to ask! I did send him my research paper, and even though it didn't come out as well or as complete as I'd have liked, he seemed to like it. *brag, brag, brag* I wonder if Gabe Newell is up for answering some questions...
