While Game Maker allows very simple access to sprites and drawing images, it has a side effect of bloating texture usage. From a user's point of view it's done absolutely right. That is, it's very simple to use and understand. Now... enter the PSP with its limited memory, and all of a sudden these bloated images are an issue: there are large textures with not much in them, taking up valuable space.
So first, what do I mean by "not much in them"? Well, let's say a user wants to display a logo over the top of the actual game. There are two ways of doing this: first, you cut it right down to size and use code to position it with a little trial and error; or second, you make a full screen image and use a painting program to position it just right. The second is far easier, so many users opt for that option, but it results in lots of empty space around the image. Even a few pixels of space around a sprite adds up if you have hundreds of sprites, so what I've been doing in the past few days is adding a little smarts to our texture page packer. It looks at the image, gets rid of the surrounding space as best it can, then gets the runner to reposition the image automatically, so it looks just like the original. This can seriously reduce the memory footprint of a game.
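The trimming step can be sketched as a simple bounding-box scan over the sprite's alpha channel. This is just my own illustration of the technique, not the actual packer code; `TrimRect` and `TrimSprite` are hypothetical names, and it assumes 32-bit pixels with alpha in the top byte:

```cpp
#include <cstdint>
#include <vector>

struct TrimRect { int left, top, right, bottom; };   // inclusive pixel bounds

// Scan for the smallest rectangle containing every visible pixel. The
// packer would store only that region, and the runner would re-apply
// the (left, top) offset at draw time so the sprite appears unchanged.
TrimRect TrimSprite(const std::vector<uint32_t>& pixels, int w, int h)
{
    TrimRect r = { w, h, -1, -1 };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if ((pixels[y * w + x] >> 24) != 0)      // alpha != 0 -> visible
            {
                if (x < r.left)   r.left   = x;
                if (y < r.top)    r.top    = y;
                if (x > r.right)  r.right  = x;
                if (y > r.bottom) r.bottom = y;
            }
    return r;   // pack (right-left+1) x (bottom-top+1) pixels, remember the offset
}
```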
So if we take an example game and apply this and a few other tricks, we can hugely reduce the number of textures required, in this case from 28 512x512 textures to only 17. That's a saving of almost 3Mb (at DXT5's one byte per pixel, each 512x512 page is 256Kb, so eleven fewer pages saves 2.75Mb) - a pretty meaty chunk for a platform with only 24Mb free.
Lastly, why do we pack images onto texture pages in the first place? Well, some systems (like the PSP) only accept power of 2 textures; that is, a texture's width/height can only be 2, 4, 8, 16, 32, 64, 128, 256 or 512 (512 being the maximum size on the PSP). So we can either pack images into a page, or we'd have to scale them up to a POW2 size, and that makes the image look a little funny. Packing also helps cut down on texture swapping, and speeds up the graphics a little.
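For illustration, rounding a dimension up to the next power of two is just a handful of bit tricks. This is a standard snippet of the kind a packer might use, not Game Maker's actual code:

```cpp
#include <cstdint>

// Round v up to the next power of two by smearing the highest set bit
// into all lower bit positions, then adding one.
uint32_t NextPow2(uint32_t v)
{
    v--;                 // so exact powers of two are unchanged
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}
// e.g. NextPow2(200) == 256, NextPow2(512) == 512
```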
All this helps reduce the memory footprint, and speeds up loading. When we first got something running on the PSP, the initial loading of SkyDiver took 40 seconds, which was obviously unusable. The new way we load data into the game means we can now load SkyDiver in around 6 seconds, which is much better - and the smaller the file, the quicker it'll load.
All these optimisations will one day make their way into the main Game Maker runner, which will give all users these benefits, no matter if it's on Windows, Mac or some new mythical platform.
Friday, April 30, 2010
Wednesday, April 28, 2010
GameMaker DSi...
I've read quite a few folk question the choice of PSP over DSi for Game Maker, but to me the choice was very simple. Sandy has mentioned before that given some time and money, DSi would be very interesting; and nothing is impossible. However... DSi (and I'm ignoring the DS completely due to its severely limited memory and very slow CPU, which effectively make it a no-go) has some very tricky issues.
1) The DSi has only 16Mb of RAM. This isn't terrible, but it would limit the types of games you could port/make for it. The runner itself requires some RAM for both the program and its internal variables, so you'd probably be left with around 12Mb. I've no idea what the OS takes up, so that would be another factor. Call it 10Mb. Possible, but not great.
2) CPU speed. While the DSi has 2 CPUs, they are only 133MHz each, and that's really slow. You could certainly do some multi-CPU code, but lots of it depends on serialised operations, so for the most part you're stuck with 1. This again limits the number/style of games - but it's not impossible.
3) FPU. The DSi has no floating point - at all. This is a massive problem. As anyone that's used Game Maker will know, it's totally based on floating point values. Every script "number" you use is not only a floating point number, but a "double" (a very LARGE floating point format). Part of the work we've been doing is to limit the double access, but doubles are still there - and have to be. This means all the floating point code would have to change to fixed point. That would take some time... but it is possible. Obviously, you couldn't use FPU emulation, as on a 133MHz CPU you'd burn all your time just doing FPU stuff.
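To show what "changing to fixed point" means, here's a minimal 16.16 fixed-point sketch of the kind an FPU-less CPU forces on you. The names and layout are my own illustration, not Game Maker internals:

```cpp
#include <cstdint>

// 16.16 fixed point: 16 integer bits, 16 fraction bits, stored in an int32.
typedef int32_t fixed;
const fixed FIX_ONE = 1 << 16;               // the value 1.0

fixed fix_from_int(int i)       { return (fixed)(i << 16); }
int   fix_to_int(fixed a)       { return a >> 16; }          // truncates

// Multiply/divide need a 64-bit intermediate to keep the extra precision.
fixed fix_mul(fixed a, fixed b) { return (fixed)(((int64_t)a * b) >> 16); }
fixed fix_div(fixed a, fixed b) { return (fixed)((((int64_t)a) << 16) / b); }

// e.g. 3 * 2.5:
// fixed r = fix_mul(fix_from_int(3), FIX_ONE * 5 / 2);  // fix_to_int(r) == 7
```

Addition and subtraction are just plain integer ops, which is why fixed point is fast on integer-only hardware; the cost is range and precision, which is exactly why converting a double-based runner is so much work.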
Now, longer term we have been thinking about adding more "types" (i.e. a RAW integer type) which would allow developers to speed up their scripts, but there are huge changes required internally to allow this to happen, so that won't be around for a long time. Russell has been doing some very cool updates to the script processor for the Game Maker Runner, and that's opening up a whole new bag of tricks, but I'll leave him to talk more about that; some of the updates we're looking to add will make you drool in anticipation.
So... while a DSi port is possible, there's a huge amount of work required before we can realistically consider it. Of course, if Nintendo announce a NEW console at E3 this year... that's a whole new ballgame!
And a quick note about the new Apple T&Cs... The rule says you have to build a game with C++ or Objective-C. The runner IS in C++, and interprets scripts. This is fine as there are LOTS of apps which use scripting in some form or other, not to mention quite a few emulators (which are ALL scripting). So we don't anticipate any problems, although Sandy will be speaking with Apple to verify.
Lastly... this is my personal blog. So if you post a comment, be constructive and don't be rude. I have no problem deleting what "I" consider offensive comments. I don't mind people saying they disagree, or even giving me what they consider valid reasons why I'm stupid, but if you post single line comments of "it's crap" or "you're an idiot", then I'll nuke it from orbit. Thank you, come again!
Monday, April 26, 2010
YoYo Games Ltd...
Well, a little later than advertised... but here we go. After leaving Realtime Worlds at the end of February, I can now announce that I'm working full time as Head of Development for YoYo Games Ltd. YoYo have an amazing product called Game Maker which, to put it simply, allows you to make games without being a hard core programmer; or in fact a programmer at all. This is really neat. Russell Kay (who you may remember is doing the ZX Spectrum port of XeO3) and I have been working for some time to port the runner part of Game Maker onto the PSP, so that we can put some of the amazing games onto the PlayStation Network, as well as other console or mobile targets. In fact you can see a work in progress shown here..
This is a month or so old now, and we've made some real progress in speeding it all up, so Sky Diver hardly ever slows down now, which is great.
I've been having great fun doing this and hope to blog some more as time goes on about the problems, and internals of Game Maker, and what we hope to do in the future - so exciting times ahead! I'm also really excited to be part of the massive development scene at YoYoGame.com and hope we can deliver some really cool updates for them to play with.
I had to laugh... I've already been caught out by saying the PSP was uncool, so I thought I should at least attempt to clarify! :)
The cool wall isn't about what's good, or even what's popular, but what's perceived as being cool. Now, the PSP from a techie point of view is very cool. Great colour screen, a MIPS CPU (which I love), and good graphics throughput. But from a cool point of view... well, the UMD isn't nice; it drains too much power and that kills it. When the PSP first came out I was very excited about it - I loved the idea of playing movies on that very cool screen. But the battery let it down with the UMD drive, and the movies were too expensive, particularly for ones I already owned! One thing that really excites me about YoYo and Sony's NEW minis is that they remove the UMD from the equation. Your battery life goes up, and the newer slimline PSPs are actually quite nice and pretty light.
The PSP Go is definitely heading in the right direction, but pricing is a little silly. If they can get the price down, the PSP Go could be a real winner; but cool? We'll have to wait and see!
Of course on top of this, you also have the minis being playable on the PS3, which is really cool. This means we should be able to get Game Maker games onto the PS3 as well, although perhaps one day... a real PS3 Game Maker HD might even be on the cards... only time will tell - now how cool would that be?
Sunday, April 25, 2010
Horror Tip: Paranormal Activity

Type: Film
Link: Imdb Page
Paranormal Activity (PA) is like a mix of Blair Witch Project, The Haunting and The Exorcist. It follows a recent trend of first-person/documentary movies like Rec and Cloverfield that I really enjoy and which works great for horror flicks.
The entire movie takes place in a house where a young couple, Micah and Katie, live. After Micah finds out that his fiancée has some kind of poltergeist haunting her, he decides to put up a camera and get to the bottom of what is happening. The movie then slowly builds up tension by showing more and more evidence of the "poltergeist" and creating an increasingly threatening situation. This is classic ghost storytelling and it works very well. I had some issues with annoying characters and such, but nothing I couldn't put aside (after all, if I can suspend disbelief for ghosts, etc, I can handle some stupid characterizations). On the whole it is a solid and quite scary movie!
The most interesting thing about the movie, and the reason to bring it up, is that it features such familiar environments and situations. The house they live in is by no means a classic haunted house and most of the paranormal events are quite toned down. Yet the filmmakers have managed to slowly and carefully build up dread, and because of the familiarity of the events and locations, it really manages to get under your skin. My favorite parts are the night scenes in which one just views the couple sleeping in a very ordinary looking bedroom. A low frequency sound slowly builds up and finally some strange event(s) occur. The simplicity of this setup is what really drives the movie home and makes it very easy to relate to the events.
I wonder if this sense of the familiar and of everyday life could be used in a videogame? Games like Silent Hill use familiar environments, but there is always a layer of filth and creepiness added which, while making you feel frightened, also makes you more distant. When it comes to the events happening in most horror videogames, it is even worse. Here there is very little (if anything at all!) that can be related to everyday life.
The reason is mostly that normal life does not contain much fun gameplay, but I think this is really bad when you want players to connect the events experienced to things in their own life. I think this is something that needs to be explored more in videogames, and while Amnesia (taking place in a spooky castle in the 18th century) does not do much in this regard, I hope to explore it more later on. Because while I did not feel the fear watching PA that I have felt playing horror games (like Silent Hill, etc), it sticks with me in a way no game has. When I turn off a videogame, there is no uneasiness left in me, but after watching certain movies late at night, like Ringu and now PA, I feel that some of the fear, totally irrationally, still lingers inside me and makes me dread commonplace objects and surroundings.
I see no reason for movies (and books) to be alone in causing these feelings, and I think that, if properly executed, games can do it too. Given that games are so good at evoking in-game fear, if this could be combined with a connection to real life as well, then fictional horror could really be taken to the next level!
Saturday, April 24, 2010
Hash
I've been working to speed up some code. The original used a straight list, which made finding entries slow, so it makes sense to try and speed this up with a better set-up. You can either use something like a binary tree, or use a hash to look entries up. Binary trees are good, but can get unbalanced easily if the list is dynamic, so I've opted for a simple hashing table. Now this is a VERY simple hashing table, but it still far outstrips a straight array.
Each object I have uses a 32 bit INT as an ID, so I can use that as the HASH access. The simplest way of hashing this is to use the lower "X" bits as an access to multiple lists meaning we simply have far less to look through to find our object.
So, imagine an array of 512 linked lists, with each list being accessed with the lower 9 bits of the object ID. Pretty simple, and simple means quick. It's also worth noting I could use 512 binary trees if need be, but I simply don't need that. And that's it - it's as simple as that.
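The bucket selection described above comes down to one AND. A tiny sketch (my own illustration of the idea, not the template's internals):

```cpp
// With 512 lists, the lower 9 bits of the ID pick the list. This only
// works when the list count is a power of two, which is why the table
// size must be POW2.
const unsigned int NUM_LISTS = 512;

unsigned int BucketIndex(unsigned int id)
{
    return id & (NUM_LISTS - 1);    // same result as id % 512, but cheaper
}
```

So IDs 0, 512 and 5120 all land in list 0, while 271334 lands in list 486; each lookup then only walks one short list instead of the whole collection.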
Now, I'm not a fan of templates. I think they are used far too often for really simple things, and then you end up with templates inside templates, which means people simply don't know (or understand) what's going on inside the compiler generated code. STL suffers a lot from this; it's so old now, and has been updated so much, it's now crippling. Still, templates have their place, and this is an ideal situation where a template will do some real good. So, I've dusted off my template head and written a very simple Hash template - which I'm now going to let you all have a play with.
The first thing to decide when writing a general API like this is exactly how you want it to look, and what the best use case is. I normally do this by writing some sample code which uses the API in a very pretty manner, then write the actual code around the desired API. So here's the API I've gone for; or rather, some sample code showing its use.
TestObj* pObj;

// Make the HASH table with 512 linked lists (must be POW2)
Hash<TestObj>* HashTable = new Hash<TestObj>( 512 );

// Add some dummy objects
HashTable->Add( 0, new TestObj(0) );
HashTable->Add( 14, new TestObj(14) );
HashTable->Add( 271334, new TestObj(271334) );
HashTable->Add( 512, new TestObj(512) );
HashTable->Add( 512*10, new TestObj(512*10) );

// Iterate over the WHOLE list
Hash<TestObj>::iterator Iterator = HashTable->GetIterator();
while( *Iterator != NULL )
{
    pObj = *Iterator;
    Iterator.Next();    // Doing this here means we can delete pObj from the list if we wish
    // Do code here....
}

// General functions...
pObj = HashTable->Find( 512*10 );
HashTable->Delete( 0 );          // Delete HASH index + object
HashTable->Delete( 512*10 );     // Delete HASH index + object
HashTable->Remove( 271334 );     // REMOVE from hash table, but DON'T delete the object!
And that's all there is to it. Very simple, but it works. This operates very much like a C# Dictionary<>, so it's pretty nice to use. If you find that it's getting slow due to a high number of objects, you can either increase the base array size, or change the internal storage from a linked list to a binary tree or something.
Anyway, you can get the template here: Hash.h
Hope it's of use to someone - Enjoy!
Sunday, April 11, 2010
Why Trial and Error will Doom Games

A sort of unspoken rule in game design is that players should be able to lose. Just about every game has some kind of fundamental mechanic that it is possible to fail. Whenever this happens, the player needs to try again and repeat the process until successful. This is thought to add drama and tension, and also to make the player's actions count. It seems to be believed that without it games would not be games, and would instead be some kind of boring linear entertainment. I think this position is wrong and extremely hurtful, and if not fixed, it will become the downfall of the medium. In this post I will explain why.
The Problem
In a book or movie it is common that the reader/viewer needs to experience very upsetting events that can be very hard to read about/watch. This is especially true of horror, where the goal is often to upset the reader/viewer and to evoke emotions such as anxiety, fear and disgust. It is also common to have more boring and slow sequences in order to build mood, explain character motivations, etc. These are not necessarily very fun/easy to experience, but they make up for it later on and act as an important ground to build the story from. Note that these "hard to repeat" moments are not merely handy plot devices or similar. They are fundamentally crucial for creating meaningful experiences, and many (if not all!) of the great works among books and movies would not be possible without them. Yet, often the only reason one can put up with these kinds of sequences is that one knows there is an end to them. Just keep on reading/watching and it will eventually be over, and hopefully an important payoff will be given.
This is not true for games. Whenever in a situation where loss is possible, the player is forced to meet certain criteria or she will not be able to progress. It is not possible to just "stick with it" to complete these kinds of sequences. The player needs to keep playing the same passage over and over again until the proper actions have been performed. Not until this is accomplished is the player allowed to continue. This either comes in the form of skill based actions (e.g. platform jumping), navigational problems (e.g. finding the way out) or some sort of puzzle that needs to be solved.
For sequences that are meant to be emotional this can be devastating. Often the player is not compelled to relive the experience, and/or any impact the sequence was meant to have is lost. It also sets up a barrier and effectively blocks certain players from continuing. How can games possibly hope to match the impact of books and movies, when critical "hard-to-repeat" moments are nearly impossible because of trial-and-error?
Case Study: Korsakovia
This problem is very evident in the game Korsakovia. The game puts you in the role of a man with Korsakoff's syndrome and is played out in a sort of dream world, interwoven with dialogs between you and your doctor. It is a very interesting experience, but also a very disturbing one, and the game is extremely brutal on the senses. Even so, I felt compelled to continue, and it felt like a worthwhile experience. This was until the gameplay started. Korsakovia has all the problems associated with trial-and-error (skill, navigation and puzzles), and this, combined with the exhausting atmosphere, made it impossible for me to complete it. It was simply not possible for me to replay certain segments of the game, and what was immersive the first time around turned into an annoyance and a (literal) headache. I am convinced that the game would have been a lot better, and possibly a truly great experience, if the trial-and-error mechanics were removed.
I do not mean to trash Korsakovia and I think it is a really interesting experiment. However, it is such a fine example of how trial-and-error can go wrong and I urge you all to try it out. Considering that it is a research project, I think that is mission accomplished for the creator!
Allowing The Player to Play
The problem of players not finishing games is something that has recently gotten more and more attention in the games industry. After analyzing collected stats, it has become quite evident that something needs to be done. For example, less than 50% of players ever completed Half-Life 2: Episode One, which, considering the game's length, polish and difficulty, is, I am sure, a very high figure compared to other games. This means that more games have started to try out methods of solving the problem. Some examples are:
- In Secret Files: Tunguska one can choose to show all of the interactable areas in a scene (reducing pixel hunting).
- Alone In The Dark allows the player to skip chapters in order to force progress in a game.
- New Super Mario Brothers Wii has a mode where the game takes over control and completes sections for the player.
- BioShock never really kills the player but instead just teleports them to a different part of the map and leaves the enemies and environment in the same states as when the player "died".
Finally, although BioShock is by far the closest to having a working solution, it still feels tacked on and can easily lessen immersion (for example when forced into a respawn, charge with wrench, repeat loop). The player also still needs to overcome certain challenges and is forced to repeat sections over and over. However, there is never a moment where the player is unable to progress, no matter their skill level, given that they are willing to stay at it. It is far from an ideal solution, but a lot better than blocking players from progressing.
I think that the proper way to solve this is to incorporate it as a feature in the game from day one. Making sure that players are not unnecessarily blocked from continuing, is not something that should be slapped on as a side thing. It is also very important that players do not feel that the game is holding their hand every step of the way, something that can be very hard unless planned from the start. It is crucial that players feel that the performed actions and choices are their own and that they are not just following commands like a mindless drone.
Fixing this issue is really important. Games cannot continue to deny content to players and demand that they meet certain criteria in order to get the full experience. Not only does it discourage people from playing games, it also makes it impossible to create more "holistic" experiences. By this I mean games that require the entirety of the work for the player to truly appreciate them (something I aim to talk about in an upcoming post). It will be very hard indeed to insert deeper meanings into games unless this problem is dealt with.
Less Challenge, More Immersion
Allowing the player to get the full experience and not having win-to-progress situations is a good start, but just the first step in the right direction. As with BioShock, the game can still have trial-and-error-like moments, where the player is forced to play a section over and over in order to continue. This brings us back to the problem that I mentioned in the beginning: that repeating certain experiences will either lessen their impact and/or discourage the player from progressing. As these "hard to repeat" sequences are crucial in order to expand the horizons of the medium, it is essential that we find ways of adding them. And in order to do so, trial and error must go.
I think the first step towards this is to throw away the idea that a videogame needs to be a challenge. Instead of thinking of a game as something to be beaten, it should be thought of as an experience. Something that the player "lives" through rather than "plays" through. That designers are unable to do this is probably because they are afraid it will lessen a game's sense of accomplishment and tension. Many seem to think that trial-and-error based obstacles are the only way of creating these emotions. I think this is untrue.
Let's first consider accomplishment. While this is normally evoked by completing a devious puzzle or defeating an enemy, there are other ways to feel accomplishment. Simply performing a simple act that changes the game world somehow can give this feeling - for instance, planting a tree or helping out an NPC. There is no need for these to be obstacles in order for one to feel accomplishment, and thus any sort of trial-and-error is removed. It can also come in other forms, such as just reaching a destination. Also, if designed correctly, one can trick the player into thinking they accomplished something, for example escaping a monster even though there was never a way to fail.
Creating tension is not only possible without using trial-and-error; skipping it may even lead to increased tension! When the player fails and is forced to repeat, there is no element of surprise left, and it often also leads to immersion being broken. For example, when playing horror games like Fatal Frame and Silent Hill I can play for quite some time without dying, feeling highly immersed. However, once death (which is part of the trial-and-error mechanic) occurs, I am pulled out of the atmosphere and suddenly realize that I am playing a game. This means death lessens the immersion and breaks the flow of the game. But will it not make the game more scary?
Regarding death and fear-factor, consider the following:
1) If the player fears death because of a trial and error system, she fears an abstract mechanic and not something of the game world. By worrying about a game mechanic, the player is pulled out of the experience.
2) Once death has occurred, the player will know what to expect. If killed by a creature that jumped out from behind a corner, the encounter will have far from the same effect the next time.
Instead of punishing the player, I think it is better to add consequences. Even just making the player believe that there are consequences (which Heavy Rain successfully does) can be enough. Also, if one keeps the player immersed, then it is easier for players to roleplay and convince themselves that they are truly in great danger even though they are not. In our game Amnesia, we are doing our best to reduce the amount of trial-and-error and still retain a really terrifying atmosphere. So far it is looking very good for this approach and we have only seen good things come out of it (I guess time will tell if we pull it off or not). If horror games, which are notorious for using trial-and-error mechanics to enhance their mood, can do fine without trial-and-error, I see no reason why other genres shouldn't.
To sum things up: When one relies on abstract game mechanics for creating emotions, one does so at the cost of immersion and the player's ability to become part of the game world.
End Notes
Of course trial-and-error should not be banned from game design. Many games like VVVVVV and Super Mario thrive on trial-and-error and have it as an integral part of their design. Likewise, many adventure games are supposed to have tricky puzzles, and could not do without them. Some games are meant to be "just games" and to be a challenge to the player. I am not in any way opposed to this kind of design.
However, in other games trial-and-error is just bad and really drags down the experience. In its worst form trial-and-error:
- Discourages players by setting a standard of what sort of players are allowed to continue.
- Greatly lessens the emotional impact of events by requiring repetition.
- Breaks immersion and makes the player focus on abstract game mechanics.
- Forces games to focus on moment-to-moment fun and discourages a holistic payoff.
I would like to end with some wise words from funny man Dara Ó Briain:
http://www.youtube.com/watch?v=xdQK4Wp10qo (Check at around 3:18!)
(From a British program called Gameswipe, which is well worth watching in its entirety)
Friday, April 9, 2010
Quiz...
Okay, so perhaps I should have been a little clearer, but the general rule of question one sticks throughout. So here we go...
1) Memory Access. This is pretty much it, especially on today's hardware, though it has a similar (if not as profound) effect on older machines as well. If someone is reading/writing gigs of data every frame, it's gonna suck; not just because they have a huge loop in there, but because in modern computing (and we'll stick with this just now), memory is the number 1 enemy.
In the past, CPU access was around the same speed as memory, so it would be only a little slower to read from memory - usually just a few cycles. These days, memory is incredibly slow compared to the CPU. Register operations will (in real terms) take less than a cycle, while un-cached memory access can take thousands of cycles (once you remember page faults and the rest).
This is crazy, so the less memory you can touch the better. Now... this may mean reducing multiple passes over data to a single pass (while doing a little more register-based work), or simply removing data accesses and tables if you can do the same job with a simple calculation. In the past, we used to have multiply and divide tables, but these days such a table can be so expensive that you're far better just using the ASM instruction, which only takes a few cycles.
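To make the table-versus-arithmetic point concrete, here's a minimal C sketch (my own illustration, not from the post): the old-school lookup table touches memory on every call, while the direct calculation stays in registers.

```c
#include <stdint.h>

enum { TABLE_SIZE = 256 };
static uint16_t mul3_table[TABLE_SIZE];   /* old-school multiply-by-3 table */

/* Fill the table once at startup, the way you would have on an old machine. */
static void build_table(void) {
    for (int i = 0; i < TABLE_SIZE; i++)
        mul3_table[i] = (uint16_t)(i * 3);
}

/* Table lookup: one memory access per call - each one can miss the cache. */
static uint16_t mul3_lookup(uint8_t x) { return mul3_table[x]; }

/* Direct arithmetic: a multiply instruction takes a few cycles and touches
   no memory at all. */
static uint16_t mul3_direct(uint8_t x) { return (uint16_t)(x * 3u); }
```

Both give identical results; the difference is purely in what they cost. On a 6502 the table wins, on a modern CPU the multiply does.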
So, here's a real world example - particles. If you're doing particles on the main CPU (we'll ignore the GPU for now), then the smaller you can make your particle structure the more you'll be able to render; not because you can draw more, but simply because the CPU only has a limited memory bandwidth, and reducing your use of it means you can do more, or better yet, do other things - like gameplay.
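As a rough sketch of what "making the structure smaller" means (the field choices here are illustrative, not from any real engine): swapping doubles for floats and packing the colour roughly halves the size, so twice as many particles fit in each cacheline.

```c
#include <stdint.h>

/* A "fat" particle: convenient to write, heavy on memory bandwidth.
   Seven doubles plus a colour is 60 bytes, padded to 64 - a whole
   cacheline for a single particle. */
typedef struct {
    double x, y, z;
    double vx, vy, vz;
    double life;
    uint32_t colour;
} FatParticle;

/* A trimmed particle: floats are plenty for on-screen positions, and
   the colour packs into 16 bits (e.g. RGB565). At 28 bytes, two of
   these fit in the cacheline one FatParticle needed. */
typedef struct {
    float x, y, z;
    float vx, vy, vz;
    uint16_t life;
    uint16_t colour;
} SlimParticle;
```

Same simulation, half the bandwidth per particle updated.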
I've seen it over and over again. People continually loop over large amounts of data, wondering why things are slow when it's not that much data they're processing. Remember, in this day of the multitasking OS, a 1Mb cache is not yours alone. Your data will be continually kicked out by other processes, so even if you only have 64K of data, you'll be surprised how little time it spends in the cache. The answer is to prefetch the next block, and do a little more processing on the current one, thereby reducing the number of passes you have to make. After all, if you're talking 400 cycles (say) to read a cacheline (around 64 bytes last time I checked), then why not use the 400 cycles doing something instead of 400 cycles waiting on memory coming into the cache?
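Here's a tiny sketch of the "do more per pass" idea (my own toy example, assuming a particle-style workload): two separate loops stream every cacheline through the CPU twice, while one combined loop reads each line once and does both jobs while it's there.

```c
/* Two passes: every element's cacheline is fetched twice. */
void update_two_pass(float *pos, const float *vel, float *alpha,
                     int n, float dt) {
    for (int i = 0; i < n; i++) pos[i] += vel[i] * dt;   /* pass 1: move   */
    for (int i = 0; i < n; i++) alpha[i] *= 0.99f;       /* pass 2: fade   */
}

/* One pass: same work, but each cacheline is fetched once and both
   updates happen while the data is hot in the cache. */
void update_one_pass(float *pos, const float *vel, float *alpha,
                     int n, float dt) {
    for (int i = 0; i < n; i++) {
        pos[i] += vel[i] * dt;
        alpha[i] *= 0.99f;
    }
}
```

The results are identical; only the number of trips through memory changes, and on a big array that's the part that hurts.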
2) This actually has nothing to do with optimisation - my bad. It's a simple 2-part question...
2.a) Release it. No game, or application is any good if you never release it. No matter how shiny, awe inspiring, or ground breaking; who cares if it never sees the light of day? So rule 1 of any program development, make sure you get something out, or it's just wasted effort.
2.b) Make it fun. In games, it's easy to release something with lots of features and levels, but if it's not fun, no one's gonna play it. It's that simple. I can name several games that appear to have been developed by idiots. Games that were all gloss and no gameplay. Some teams fixate on making things as pretty as possible, but that's really not the most important thing. You have to enjoy being IN the game, or like 2.a it's pointless and a waste of time and effort. You'd be amazed how often this rule is ignored, or something particularly frustrating removes all the fun that SHOULD be there.
So there you go... Yeah, not the best phrased questions, but I bet looking at these you're either nodding in agreement, or shouting at the monitor something like "Rubbish! Algorithms are FAR more important!!". Well, this is true... but given even a reasonable algorithm, you can then apply the memory rule and speed it up more. The less memory you touch, the quicker your code will be, it's that simple.
oh... and no smart answers about being in a calculation loop with no memory access for a second - we're assuming you're not a moron. :)
Saturday, April 3, 2010
Performance...
Here's a little quiz for you. Both of these issues have come up in conversation over the past week or so, and I thought it would be interesting to pose them, and give you the chance to prove you're smarter than most games developers...
Question 1
What's the main thing to tackle inside a program that's virtually guaranteed to speed it up?
Question 2
When writing a game, what are the 2 most important things to make sure you do? (In order, please)
The answer to both these questions should be obvious, but let's see how you do. I'll answer them in a couple of days...
Now... if you cast your mind back a few years, you'll remember me doing a fancy little routine for XeO3's bullet allocation. This was a simple "stack allocator" that sped up my code by quite a bit. Well, you can now find this article in the new Game Programming Gems 8 book. Yes, about a year ago I got the article accepted, and it's now in print and out. This is the 1st time I've tried to get an article published, but I thought it was about time. It takes the general concept a little further and uses it in standard programming (rather than 6502), and comes with a few little examples.
The only reason I mention this (aside from being chuffed that I finally got something published), is that this is one of the main reasons I still code old machines. Without writing XeO3, this would never have occurred to me. Old machines place unique limits on what you can do, limits that simply don't appear to be around anymore, and as such they require you to think outside the box. They are still a valuable learning tool, and can lead to better code on a day-to-day basis. I've now used the Stack Allocator in several applications I've since written, and found it much simpler/quicker to implement than linked lists, not to mention easier to follow. I love how retro coding really can teach an old dog new tricks....
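For readers who haven't seen the original XeO3 posts, the general idea can be sketched like this in C (a minimal sketch of the concept only; the names and details here are mine, not lifted from the Gems article). Free slots live on a stack of pointers: allocation pops, freeing pushes, and both are O(1) with no list walking.

```c
#include <stddef.h>

#define MAX_BULLETS 64

typedef struct { float x, y; } Bullet;

static Bullet  pool[MAX_BULLETS];        /* fixed pool of objects        */
static Bullet *free_stack[MAX_BULLETS];  /* stack of free slots          */
static int     top;                      /* number of free slots left    */

/* Fill the free stack with every slot in the pool. */
void pool_init(void) {
    top = MAX_BULLETS;
    for (int i = 0; i < MAX_BULLETS; i++)
        free_stack[i] = &pool[i];
}

/* Pop a free slot - O(1), no searching. */
Bullet *bullet_alloc(void) {
    if (top == 0) return NULL;           /* pool exhausted */
    return free_stack[--top];
}

/* Push the slot back - also O(1). */
void bullet_free(Bullet *b) {
    free_stack[top++] = b;
}
```

Compared with a linked free list there are no next-pointers threaded through the objects, which is a big part of why it's simpler to follow and debug.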