Only a week and a bit left to the Interim Report for the HYP, to show what I've done on it so far (hint: not that much), but I think I'll keep my mind off it tonight at least, since there are still five or six months remaining, and my advisor seems to be hinting that the Report is not that important anyway (first batch to kana it. Gah). I'll just have to bone up on my Desperation Programming feat once the examinations end. I think I'm pretty decent at that, by the way.

So, just in case anybody was wondering what a Computer Science university undergraduate does (other than ...):

First off, I opened the Word document describing the lab. "Lab III: Neural Networks for Decision Making". Hmm. Next up were several pages mostly describing the classes and methods in the provided skeleton code, which weren't very inviting.

A summary: the lab is about implementing a simple neural network AI for (sixteen) robot enemies in a simple first-person shooter game (so simple that there's only one weapon, and not even any walls). There's Data Collection program code, Neural Network program code and the actual Simulation (game). The idea is to collect data using the Data Collection program, feed this data into the Neural Network program to create a neural network, and have the robots use the neural network to make decisions in the Simulation, and we're done. Ta-dah!

The high-level overview is simple enough, but coding all the nitty-gritty stuff isn't that straightforward. Furthermore, I didn't have any practical experience with neural networks (hereafter NNs). The general idea behind NNs (AFAIK) is that, given a selected set of inputs, they allow (certain) outputs to be predicted.

Let's have a simple example with two inputs, age and educational level, and one output, the probability of getting refunded on a bad investment. Using this, we set up a NN with three layers - the input layer, the output layer, and a hidden layer (with perhaps two cells [why?]) in between them.

Having defined the structure of the NN, it's time to train it with actual data. Each input is ideally normalized such that it has a minimum value of 0 and a maximum value of 1. So, for age, we can have 0 years old as zero, and 122 years (the current record) as 1. For educational level, we can probably work out some scale with no formal education at all as 0, and a terminal degree as 1.

So let's say we encounter Mr. X, a 70 year-old retiree with primary school education, who got his money back. Then, his input data would be something like (0.57, 0.2), and the corresponding output data is (1). Next is Mr. Y, a 30 year-old investment banker with an MBA who probably isn't going to get anything back. His data would be (0.25, 0.9), (0.01).

By feeding all this data into the NN and iterating many times, the weights for the cells in the hidden layer(s) are eventually adjusted such that feeding the exact same data back into the NN gives values that correspond very closely to reality, i.e. the NN is sort of a black-box function that will take (0.57, 0.2) as input and give back a value that is very close to the actual output of 1.

Why is this useful at all? The answer is that the NN allows output to be predicted with a certain level of intelligence for new data. Even given only the two sets of data so far, it wouldn't take much to guess that (0.71, 0.15) should have an output closer to one than zero, and the other way around for (0.22, 1.0).
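To make that a little more concrete, here is a minimal sketch in C++ of what the trained black box looks like for this toy 2-2-1 network (two inputs, two hidden cells, one output) with sigmoid activations. The normalization ranges match the ones above, but the weights are made up purely for illustration; in practice they are whatever the training iterations produce.

#include <cmath>
#include <cstdio>

// Scale a raw value into [0, 1] given the chosen min/max for that input.
double normalize(double value, double min, double max) {
    return (value - min) / (max - min);
}

// Standard logistic activation used by simple feed-forward NNs.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Forward pass: 2 inputs -> 2 hidden cells -> 1 output.
double predict(const double in[2], double wHidden[2][2], double bHidden[2],
               double wOut[2], double bOut) {
    double hidden[2];
    for (int h = 0; h < 2; ++h)
        hidden[h] = sigmoid(in[0] * wHidden[h][0] + in[1] * wHidden[h][1] + bHidden[h]);
    return sigmoid(hidden[0] * wOut[0] + hidden[1] * wOut[1] + bOut);
}

int main() {
    // Made-up weights standing in for a trained network (in the lab these
    // come out of the training iterations, not from thin air).
    double wHidden[2][2] = { { 8.0, -8.0 }, { 6.0, -6.0 } };
    double bHidden[2]    = { -2.0, -1.0 };
    double wOut[2]       = { 4.0, 4.0 };
    double bOut          = -4.0;

    // Mr. X: 70-year-old retiree with primary school education; got his refund.
    double mrX[2] = { normalize(70.0, 0.0, 122.0), 0.2 };   // ~(0.57, 0.2), target 1
    // Mr. Y: 30-year-old banker with an MBA; target close to 0.
    double mrY[2] = { normalize(30.0, 0.0, 122.0), 0.9 };   // ~(0.25, 0.9), target 0.01

    std::printf("Mr. X -> %.2f (target 1)\n",    predict(mrX, wHidden, bHidden, wOut, bOut));
    std::printf("Mr. Y -> %.2f (target 0.01)\n", predict(mrY, wHidden, bHidden, wOut, bOut));
    return 0;
}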
While this (and the Lab 3 NN) are rather simple examples that a human should have little trouble working out, I suppose NNs really shine when things get complex, with dozens of inputs and outputs that probably can't be adequately expressed with basic rules-based logic (e.g. FSMs). Of course, using only age and educational level probably will not be enough to successfully determine whether a person gets back his cash all the time, since other factors like proportion of assets invested, evidence of misselling etc. will come into play. One can of course model all the inputs one can think of (such as the last digit of the person's NRIC), but many will obviously be of minor or no consequence to the decision and only serve as unwanted noise. Thus, a balance is required - another win for Common Sense.

The next step was to peruse the module's online forum on the IVLE, to get up to date on any time-wasting bugs, pick up other students' insights so far (i.e. learning through others' experiences) and get a general idea of how to proceed. One of the few benefits of starting a lab assignment late.

Data Collection to a text file wasn't too hard - just dump S sets of L lines, where S is the number of samples taken and L = I + O, where I is the number of chosen inputs and O the number of required outputs respectively. I chose I = 4, and since robots have three available actions (Idle, Flee and Attack), O = 3. (A rough sketch of this, together with the action selection in the Simulation, follows at the end of this write-up.)

Now on to the NN code. Reading the data in from the previous phase wasn't hard, and running the NN was easy too. Then the first major hitch struck. The actual NN structure (complete with weights, biases etc.) had to be exported out to a file, to be read in by the Simulation code. The NeuralStructure class is constructed by incrementally adding new Layers, and there was no provision for injecting attributes directly within them (instead of building those attributes through training). I wasted a good few hours trying to serialize the object (the cheapskate way), though I should have gotten the hint that nested objects don't serialize easily (yes, sometimes C++ makes me want to bash my head against the wall). I ended up just adding new constructors that take in saved data in array form, which wasn't quite that hard since the values have a fixed structure behind them.

Finally, the Simulation program itself. I loaded the NN within the RobotManager instance and just had each Robot send its current input values to it every so often. It would then get back output values for the Idle, Flee and Attack actions, and perform whichever was the highest.

I daresay even this very simple implementation shows a flicker of intelligence - see the video below for an example (using a NN which has the robots pursue and attack if close enough, and flee once their health is low enough compared to the player's). Note that in the group battle at 0:40, a robot flees once it is hurt sufficiently, but turns back to give support fire once the player switches his attack target - without this behaviour being explicitly specified! Okay, in this application a few lines of logic would probably have performed just as well, but that takes all the fun away, doesn't it?

In the end, this lab took the better part of two days, with a big chunk of time sucked up by the ill-fated serialization attempt. So, the moral of the story is: just get your head down and code, and things won't be too bad. Chins up!
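As promised above, here's a rough sketch of the two ends of that pipeline: dumping one training sample to the text file, and picking the highest-scoring action in the Simulation. The file name, function names and dummy values are my own stand-ins rather than the lab skeleton's actual classes (the real versions live in the provided Data Collection code and the RobotManager).

#include <algorithm>
#include <cstdio>
#include <vector>

// Data Collection side: append one sample as I + O lines of text, one value
// per line (the file holds S such samples overall).
void writeSample(std::FILE* f, const std::vector<double>& inputs,
                 const std::vector<double>& outputs) {
    for (double v : inputs)  std::fprintf(f, "%f\n", v);
    for (double v : outputs) std::fprintf(f, "%f\n", v);
}

// Simulation side: perform whichever of the three actions the NN scored highest.
enum Action { IDLE = 0, FLEE = 1, ATTACK = 2 };

Action pickAction(const std::vector<double>& scores) {   // { idle, flee, attack }
    return static_cast<Action>(
        std::max_element(scores.begin(), scores.end()) - scores.begin());
}

int main() {
    // One collected sample: I = 4 input features, O = 3 targets ("attack" here).
    std::FILE* f = std::fopen("training_data.txt", "a");
    if (!f) return 1;
    writeSample(f, std::vector<double>(4, 0.5), { 0.0, 0.0, 1.0 });
    std::fclose(f);

    // Later, per robot: feed its inputs to the trained NN, get back three
    // output values, and act on the largest one.
    std::vector<double> nnOutputs = { 0.1, 0.7, 0.2 };
    std::printf("chosen action = %d\n", pickAction(nnOutputs));   // 1 = FLEE
    return 0;
}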
More random stuff: my cousin gave me an overgrip for my tennis racket. It truly feels non-slip now compared to the original smooth one, though the handle's become a teeny bit too fat for my liking. Also found a Buddhist intro-book, I Wonder Why, by Thubten Chodron (an American nun, and History major), lying around. A sign that it's time to branch out and widen my range of exploration (and likely critique)?