User:Wade

About

My name is Wade Brainerd, and I'm a Technical Director at Activision. My professional work mainly involves technology development and project firefighting. I have an interest in educational software development and enjoy application and game programming in my spare time.

My Projects

  • Colors! - Natural media painting program ported from the DS.
  • Bounce - 3D arcade game with a built-in level editor.
  • WikiBrowse - Highly compressed 100 MB cache of Wikipedia articles, wrapped as an activity with a built-in web server.
  • Typing Turtle - I'm mentoring User:Scheppke and User:Prakhar in the development of a Typing Tutor activity.
  • Log - I recently spent a Saturday overhauling the Log viewer activity.

Wiki pages

  • Smooth Animation with PyGTK
  • Extending PyGame with C++
  • User:Wade/Ideas/Activity Management

Ideas for Laptop Software

  • Lemonade Stand - Teaches economics. Cons: text-heavy.
  • Math Practice - High speed practice of different mathematical formulas. This could be expanded into a series of Brain Training-like minigames.
  • Board Game - Some kind of dice- and minigame-based multiplayer board game: traveling around a map, drawing cards, etc.
  • Math Puzzle - Rearrange equations by applying transformations to solve simple problems.
  • FireZone - 3D open environment multiplayer game. Race around a building fighting fires cooperatively with friends.
  • Scanline, flat-shaded 3D software renderer for PyGame (in progress, called 'xo3d').
  • Flash card training software.
  • Financial software - The simplest possible version of Quicken to help families keep simple budgets.

Python Board Game AI Module

I wrote a simple Python module for doing board game AI. It can handle just about any kind of turn-based game with multiple players and moves, and generally plays a good enough game to compete with most 10-year-olds.

Example Program

Media:Othello.zip -- A simple Othello (aka Reversi) game written in PyGame that demonstrates the AI. Includes the game source, bitmaps, and the AI module. Requires PyGame to run.

It is not a complete game at all; it exists only for AI testing.
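
The archive is not reproduced here, but the way a game drives the AI is roughly the sketch below. OthelloState and the player constants are placeholders rather than the actual identifiers in the zip, and the AI module is assumed to be saved as ai.py:

import ai                                    # the module listed below, saved as ai.py

state = OthelloState()                       # hypothetical State class; the real one lives in the zip
COMPUTER, OPPONENT = 0, 1                    # placeholder player identifiers

while state.GenerateMoves():                 # keep playing while any legal move exists
    if state.IsMyTurn(COMPUTER):
        move = ai.GetMove(state, COMPUTER, 4)      # let the AI search 4 moves ahead
    else:
        move = ai.GetRandomMove(state, OPPONENT)   # stand-in for the human player's input
    state.ApplyMove(move)

print("Computer score: %d" % state.Evaluate(COMPUTER))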

Module Source

"""A basic two player, turn based, game agnostic artificial intelligence.

Functions:
GetMove -- Returns the best move, given a game state and player.
GetRandomMove -- Returns a random valid move, given a game state and player.

The AI module interfaces with the game through a State class that is passed
to the various functions.  This class represents the game from the perspective
of the AI.

It cares nothing about the actual game being played, as long as the State 
class implements the following set of standard functions.  It can be used 
with anything from Checkers to Tic-Tac-Toe to Risk.

State.GenerateMoves() 
    Returns an array of Move objects, representing all possible moves from 
    the current state.
State.ApplyMove( Move )
    Executes the contents of a Move object, modifying the game state and 
    incrementing the turn count.  The Move object passed in will be one of 
    those returned by GenerateMoves.
State.IsMyTurn( Player )
    Returns True if it is currently Player's turn.
State.Evaluate( Player )
    Returns a heuristic number representing the score of the game state, 
    from the perspective of Player.
State.Copy()
    Returns a copy of the state.  Be careful to actually copy objects, not 
    just reference them.

The apparent intelligence of the AI is highly dependent on three things: 

1. The quality of the Evaluate function.  The better the estimate of the game
   state is, the better job the AI will do with limited lookahead.   

2. The order of moves returned by GenerateMoves.  If better moves are sorted 
   to be earlier, more of the tree will be pruned, so deeper searches become 
   practical.  Simple heuristics help here, such as returning moves that 
   capture a piece or move toward a goal first.

3. The performance of the callback functions.  Time spent in GetMove is 
   dominated by the cost of calling State.GenerateMoves, State.Copy, and 
   State.Evaluate.  Faster callbacks mean a higher depth can be searched in 
   a reasonable amount of time.
   
Technically, this module implements a MiniMax search with Alpha-Beta pruning.
This is a good, basic AI for simple games, though it will not produce a
competitive chess game with reasonable search times.

Possible extensions that would improve the AI include iterative deepening and 
a state hash database.  For real performance though, the AI will probably have
to be implemented in C.
"""
import time
import random

# Values for very good and very bad states (+/- infinity for our purposes)
VeryGood = 1000000
VeryBad = -1000000

def GetMove( State, Player, CutoffDepth ):
    """Returns the best (highest score) Move for Player given State.
     
    CutoffDepth moves in advance will be searched; this can be used to tune
    the amount of time taken by the search."""
    
    def MiniMaxAlphaBeta(State, Alpha, Beta, Depth):
        if Depth == 0:
            return State.Evaluate( Player )
            
        Moves = State.GenerateMoves()
        if len(Moves) == 0:
            return State.Evaluate( Player )
            
        if State.IsMyTurn( Player ):
            for Move in Moves:
                Next = State.Copy()
                Next.ApplyMove( Move )
                Alpha = max(Alpha, MiniMaxAlphaBeta(Next, Alpha, Beta, Depth-1))
                if Beta <= Alpha:
                    return Alpha
            return Alpha
        else:
            for Move in Moves:
                Next = State.Copy()
                Next.ApplyMove( Move )
                Beta = min(Beta, MiniMaxAlphaBeta(Next, Alpha, Beta, Depth-1))
                if Beta <= Alpha:
                    return Beta
            return Beta

    BestScore = VeryBad
    BestMove = None
    Moves = State.GenerateMoves()
    for Move in Moves:
        Next = State.Copy()
        Next.ApplyMove( Move )
        Score = MiniMaxAlphaBeta(Next, VeryBad, VeryGood, CutoffDepth)
        # Break ties randomly so the AI doesn't always pick the first equally scored move.
        if Score > BestScore or (Score == BestScore and random.randint(0, 10) > 5):
            BestScore = Score
            BestMove = Move
    return BestMove
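
# --- Possible extension (sketch, not part of the original module) ---
# The docstring above mentions iterative deepening as one way to improve the AI.
# A minimal version simply re-runs GetMove with an increasing CutoffDepth until
# a time budget is used up, returning the deepest result reached.  The function
# name and TimeLimit parameter are illustrative, not part of the original API.
def GetMoveIterative( State, Player, TimeLimit, MaxDepth=8 ):
    """Returns the best move found within roughly TimeLimit seconds (sketch)."""
    Start = time.time()
    Best = None
    for Depth in range(1, MaxDepth + 1):
        Best = GetMove( State, Player, Depth )
        # The budget is only checked between depths, so the final search may
        # overrun TimeLimit somewhat.
        if time.time() - Start > TimeLimit:
            break
    return Best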

def GetRandomMove( State, Player ):
    """Returns a completely random valid move.
    
    This can be useful for implementing the absolute lowest level AI possible."""
    
    moves = State.GenerateMoves()
    return random.choice(moves)
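
Example State Class

To make the State interface in the docstring concrete, here is a deliberately tiny example. NimState is made up for illustration and is not part of the Othello zip: a Nim variant with 21 sticks where players alternate taking 1 to 3 sticks, and whoever takes the last stick wins. It assumes the module above is in the same file (or imported), and lets the AI play against itself:

class NimState:
    """Hypothetical example state: 21 sticks, take 1-3 per turn, last stick wins."""
    def __init__(self):
        self.Sticks = 21
        self.Turn = 0          # whose turn it is: player 0 or player 1
        self.Winner = None

    def GenerateMoves(self):
        # A move is simply the number of sticks to take.
        return [n for n in (1, 2, 3) if n <= self.Sticks]

    def ApplyMove(self, Move):
        self.Sticks -= Move
        if self.Sticks == 0:
            self.Winner = self.Turn    # taking the last stick wins
        self.Turn = 1 - self.Turn

    def IsMyTurn(self, Player):
        return self.Turn == Player

    def Evaluate(self, Player):
        # Crude heuristic: only decided games score anything.
        if self.Winner is None:
            return 0
        return 100 if self.Winner == Player else -100

    def Copy(self):
        New = NimState()
        New.Sticks = self.Sticks
        New.Turn = self.Turn
        New.Winner = self.Winner
        return New

# Let the AI play both sides, searching 6 moves ahead.
state = NimState()
while state.GenerateMoves():
    move = GetMove(state, state.Turn, 6)
    print("Player %d takes %d" % (state.Turn, move))
    state.ApplyMove(move)
print("Player %d wins" % state.Winner)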