This project was done as part of my dissertation while studying at the University of Derby. The AI uses a heuristic, simulation-based algorithm called Monte Carlo Tree Search (MCTS) to simulate future game states and determine the best path to take. This algorithm was combined with a finite state machine to impose a high-level strategy on the agent's behaviour.
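To make the four phases of MCTS concrete, here is a minimal sketch in Python (the actual agent was written in C# against the simulator, so this is purely illustrative). It plays a toy game, hypothetical here, where players alternately add 1, 2, or 3 to a running total and whoever reaches 21 wins, and it shows the selection, expansion, simulation, and backpropagation steps that any MCTS agent performs:

```python
import math
import random

TARGET = 21  # toy game: alternately add 1-3; whoever reaches 21 wins


class Node:
    def __init__(self, total, player, parent=None, move=None):
        self.total, self.player = total, player   # player = who moves next here
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if self.total + m <= TARGET and m not in tried]


def uct_select(node):
    # UCT formula: exploit high win rates, explore rarely-visited children
    return max(node.children,
               key=lambda c: c.wins / c.visits
                             + math.sqrt(2 * math.log(node.visits) / c.visits))


def rollout(total, player):
    """Random playout to the end of the game; returns the winner."""
    if total == TARGET:
        return 1 - player                 # the previous mover already won
    while True:
        total += random.choice([m for m in (1, 2, 3) if total + m <= TARGET])
        if total == TARGET:
            return player
        player = 1 - player


def best_move(root_total, root_player, iterations=1500):
    root = Node(root_total, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal
        while not node.untried_moves() and node.children:
            node = uct_select(node)
        # 2. Expansion: add one unexplored child
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node.children.append(Node(node.total + m, 1 - node.player, node, m))
            node = node.children[-1]
        # 3. Simulation: random playout from the new state
        winner = rollout(node.total, node.player)
        # 4. Backpropagation: credit the mover who entered each node
        while node:
            node.visits += 1
            if winner == 1 - node.player:
                node.wins += 1
            node = node.parent
    # recommend the most-visited move, the usual MCTS policy
    return max(root.children, key=lambda c: c.visits).move
```

From a total of 18 the search reliably recommends adding 3 (an immediate win), which is the behaviour the tree statistics converge to.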
For the implementation of my agent, I used a simulator developed by members of the games community around the IEEE World Congress on Computational Intelligence (WCCI) conference. The game is rendered with GDI+ and WinForms, and the simulator uses the Reflection namespace to expose an API through which new agents can be plugged in with ease.
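The simulator's plug-in mechanism is built on .NET reflection; the same idea can be sketched in Python with `importlib` and `inspect` (module and class names below are invented for the demonstration): a base interface is scanned for at run time, so new controllers can be dropped in without changing the simulator itself.

```python
import importlib
import inspect
import sys
import types


class Agent:
    """Base interface every plug-in agent implements (hypothetical name)."""
    def act(self, state):
        raise NotImplementedError


def load_agents(module_name):
    """Discover Agent subclasses in a module at run time, mirroring how
    the simulator uses reflection to find controllers without hard-coding them."""
    module = importlib.import_module(module_name)
    return [cls for _, cls in inspect.getmembers(module, inspect.isclass)
            if issubclass(cls, Agent) and cls is not Agent]


# Register a throwaway plug-in module to demonstrate discovery.
src = """
class GreedyAgent(Agent):
    def act(self, state):
        return max(state)
"""
demo = types.ModuleType("demo_agents")
demo.Agent = Agent
exec(src, demo.__dict__)
sys.modules["demo_agents"] = demo

agents = load_agents("demo_agents")
```

In the real simulator the equivalent step enumerates types in a loaded assembly rather than a Python module, but the contract is the same: implement the interface and the host finds you.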
After substantially modifying the simulator test bed's original source code, I was able to implement the agent specified in my dissertation.
Further improvements would include a multi-threaded system for running the MCTS simulations that are generated at run time.
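One common way to do this is leaf parallelisation: the random playouts are independent, so they can be fanned out across a worker pool and their results averaged. A small Python sketch of the idea, reusing the same hypothetical add-to-21 toy game rather than the actual simulator:

```python
import random
from concurrent.futures import ThreadPoolExecutor

TARGET = 21  # toy game: alternately add 1-3; whoever reaches 21 wins


def rollout(total, player):
    """One random playout; returns the winning player."""
    while True:
        total += random.choice([m for m in (1, 2, 3) if total + m <= TARGET])
        if total == TARGET:
            return player
        player = 1 - player


def parallel_value(total, player, n=300, workers=4):
    """Leaf-parallel evaluation: run n independent playouts across a
    thread pool and return the win rate for `player`."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        wins = sum(pool.map(lambda _: rollout(total, player) == player, range(n)))
    return wins / n


def best_move(total, player, n=300):
    def child_value(m):
        if total + m == TARGET:
            return 1.0                                   # immediate win
        # opponent moves next, so invert their win rate
        return 1.0 - parallel_value(total + m, 1 - player, n)
    return max((m for m in (1, 2, 3) if total + m <= TARGET), key=child_value)
```

Because the playouts never share mutable state, this parallelises cleanly; in a full MCTS only the backpropagation step would need synchronisation.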
You can download and use the source code from this location.
A lot of my work consisted of rewriting areas of the simulator and adding the ability to deep-copy game states, which my MCTS research required given the nature of the algorithm: MCTS relies on replicating the current game state and simulating future states from that copy, based on the parameters provided to the algorithm.
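The need for deep copies comes from the fact that a game state is a nested, mutable structure: a shallow copy would let the simulated future write through to the live game. A minimal Python illustration (the real implementation is C#, and the field names here are hypothetical):

```python
import copy


class GameState:
    """Toy stand-in for the simulator's game state; fields are invented."""
    def __init__(self):
        self.agent_pos = [13, 23]           # mutable, nested data
        self.enemy_pos = [[1, 1], [26, 1]]
        self.score = 0

    def clone(self):
        # A deep copy lets MCTS mutate the simulated future
        # without corrupting the live game state.
        return copy.deepcopy(self)


state = GameState()
future = state.clone()
future.agent_pos[0] += 1                    # simulate a move...
future.score += 10                          # ...and its reward
```

After these mutations `state` is untouched, which is exactly the guarantee each MCTS simulation needs before it starts exploring.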
Warning: it's ~80 pages long and may still contain some grammatical errors.