Tuesday, Nov 04, 2008 - 01:14 SGT
Posted By: Gilbert

What I Do

changelog v1.09b
---------------
* New comments, regardless of age of original post, will now trigger an email for Instant Response:



Only a week and a bit left until the Interim Report for the HYP, to show what I've done on it so far (hint: not that much), but I think I'll keep my mind off it tonight at least, since there are still five or six months remaining, and my advisor seems to be hinting that the Report is not that important anyway (first batch to kana it. Gah) - I'll just have to bone up on my Desperation Programming feat once the examinations end. I think I'm pretty decent at that, by the way.

So, just in case anybody was wondering what a Computer Science undergraduate does (other than playing researching computer games), I'll delve a bit into Lab 3 of CS4213 Game Development. Oh wait, that's researching computer games. Never mind. In any case, some future student will probably Google this post up, so if you're him/her, I hope this helps.

First off, I opened the Word document describing the lab. "Lab III: Neural Networks for Decision Making". Hmm. Next up were several pages mostly describing the classes and methods in the provided skeleton code, which weren't very inviting.

A summary: The lab is about implementing a simple neural network AI for (sixteen) robot enemies in a simple first-person shooter game (so simple that there's only one weapon, and not even any walls). There's Data Collection program code, Neural Network program code and the actual Simulation (game). The idea is to collect data using the Data Collection program, feed this data into the Neural Network program to create a neural network, and have the robots use the neural network to make decisions in the Simulation, and we're done. Ta-dah!

The high-level overview is simple enough, but coding all the nitty-gritty stuff isn't that straightforward. Furthermore, I didn't have any practical experience with neural networks (hereafter NN). The general idea behind NNs (AFAIK) is that they allow one, given a selected set of inputs, to predict (certain) outputs. Let's take a simple example with two inputs, age and educational level, and one output, the probability of getting refunded on a bad investment. Using this, we set up a NN with three layers - the input layer, the output layer and a hidden layer (with perhaps two cells [why?]) in between them.

Having defined the structure of the NN, it's time to train it with actual data. Each input is ideally normalized such that it has a minimum value of 0 and a maximum value of 1. So, for age, we can have 0 years old as zero, and 122 years (current record) as 1. For educational level, we can probably work out some scale with no formal education at all as 0, and a terminal degree as 1.

So let's say we encounter Mr. X, a 70-year-old retiree with primary school education, who got his money back. Then the input data would be something like (0.57,0.2), and the corresponding output data is (1). Next is Mr. Y, a 30-year-old investment banker with an MBA who probably isn't going to get anything back. His data would be (0.25,0.9),(0.01). By feeding all this data into the NN and iterating many times, the weights for the cells in the hidden layer(s) are eventually adjusted such that feeding the exact same data back into the NN gives results that correspond very closely to reality, i.e. the NN is sort of a black-box function that will take (0.57,0.2) as input and give back a value that is very close to the actual output of 1.

Why is this useful at all? The answer is that the NN allows output to be predicted with a certain level of intelligence for new data. Even given only the two sets of data so far, it wouldn't take much to guess that (0.71,0.15) should have an output closer to one than to zero, and the other way around for (0.22,1.0). While this (like the Lab 3 NN) is a rather simple example that a human should have little trouble working out, I suppose NNs really shine when things get complex, with dozens of inputs and outputs that probably can't be adequately expressed with basic rules-based logic (e.g. FSMs).

Now, using only the age and educational level probably will not be enough to successfully determine whether a person gets back his cash all the time, since other factors like proportion of assets invested, evidence of misselling etc will come into play. Now one can of course model all the inputs one can think of (such as the last digit of the person's NRIC), but many will obviously be of minor or no consequence to the decision and only serve as unwanted noise. Thus, a balance is required - another win for Common Sense.

The next step was then to peruse the module's online forum on the IVLE, to get up to date on any time-wasting bugs, receive other students' insights so far (i.e. learning through others' experiences) and get a general idea of how to proceed. One of the few benefits of starting a lab assignment late.

Data Collection to a text file wasn't too hard - just dump S sets of L lines, where S is the number of samples taken and L = I+O, with I being the number of chosen inputs and O the number of required outputs. I chose I = 4, and since robots have three available actions (Idle, Flee and Attack), O = 3.

Now on to the NN code. Reading the data in from the previous phase wasn't hard, and running the NN was easy too. Then the first major hitch struck. The actual NN structure (complete with weights, biases etc) had to be exported out to a file, to be read in by the Simulation code. The NeuralStructure class is constructed by incrementally adding new Layers, and there was no provision for injecting attributes directly within them (instead of building those attributes through training). I wasted a good few hours trying to serialize the object (the cheapskate way), though I should have gotten the hint that nested objects don't serialize easily (yes, sometimes C++ makes me want to bash my head against the wall). I ended up just adding new constructors that take in saved data in array form, which wasn't quite that hard since the values have a fixed structure behind them.

Finally, the Simulation program itself. I loaded the NN within the RobotManager instance and just had each Robot send its current input values to it every so often. They would then get back output values for the Idle, Flee and Attack actions, and perform whichever action scored highest. I daresay even this very simple implementation shows a flicker of intelligence - see the video below for an example (using a NN which has the robots pursue and attack if close enough, and flee once their health is low enough compared to the player's). Note that in the group battle at 0:40, a robot flees once it is hurt sufficiently, but turns back to give support fire once the player switches his attack target - without this behaviour being explicitly specified!



Okay, in this application a few lines of logic would probably have performed just as well, but that takes all the fun away, doesn't it?

In the end, this lab took the better part of two days, with a big chunk of time sucked up by the ill-fated serialization attempt. So, the moral of the story is: just get your head down and code, and things won't be too bad. Chins up!



More random stuff: My cousin gave me an overgrip for my tennis racket. It truly feels non-slip now as compared to the original smooth one, though the handle's become a teeny bit too fat for my liking. Also found a Buddhist intro-book, I Wonder Why, by Thubten Chodron (an American nun, and History major) lying around. A sign that it's time to branch out and widen my range of exploration (and likely critique)?











