Friday, December 24, 2010

Downloading files from the Apple Developer website using wget (for poor connections or scheduling)

Recently we had some issues downloading the latest iPhone SDK. Firstly, we were on a crappy 3G broadband connection; we never seemed to be able to download the entire 3.5 gigabyte file without a dropout. Unfortunately for us, Apple seem to have overlooked this possibility with their developer website, and they do not offer the download via a more robust facility. Note to Apple: FTP would be nice.

Secondly, with several team members located in the same city but not on the same LAN, we wanted to distribute the update to all members via our shared Linux server. Unfortunately, said server is getting a little long in the tooth now, and being a 32-bit Linux distribution it does not support files as large as the Xcode and iOS DMG. We were going to have to split the file into more manageable chunks. What a pain.

The command line tool wget often yields answers to these kinds of problems, so it was our initial foray into finding a solution. Firstly, it can provide an extra layer of robustness when downloading files; secondly, it's very easy to schedule downloads via cron. For those of you wondering why this is a consideration - welcome to the reality of living in the internet third world - Australia. With the typical internet plans in Australia, heavy internet users such as ourselves find it important to spread our download usage between peak periods (any time you're likely to be awake) and offpeak periods (any time you're likely to be sleeping) to maximise our bandwidth allocation.

Frustratingly, downloading the iOS SDK via wget is complicated by the fact that any web client connecting to the Apple Developer website must be authenticated. The Apple website uses cookies to authenticate web clients, and several recipes for extracting authentication credentials from browser cookies into a file and using them via the wget command line interface are well known - at least for Firefox.

The basic procedure for using wget to access content from a site requiring authentication is this: log into the site using a standard web browser; once authenticated via the login page, extract the authentication cookie from the browser. The extracted cookie is then fed to wget, which can use it for permission to download the desired content.

Being on a Macintosh system we are provided with Safari by default. Not wanting to install Firefox on every system we use, we figured it was easier just to stick with Safari. Luckily the same technique can be performed with Safari as with Firefox. The Safari technique is not as well known, so we'll cover it here.

Safari stores its cookies on a per-user basis within a user's home directory. Specifically, cookies are stored in a simple XML property list file. Have a look at ~/Library/Cookies/Cookies.plist - you can see all of Safari's cookies in there.

To get the required cookie into Cookies.plist, first use Safari to log in to the Apple Developer website using your Apple ID credentials. Safari should now have the requisite cookie. Open Cookies.plist with a text editor to view the cookies; we're looking for the one called ADCDownloadAuth.

wget expects its cookie information in the Netscape cookies.txt file format, so we'd like a quick and simple way to convert from one format to the other. Luckily this is relatively easy to do on a Macintosh system. As Ruby is preinstalled on Tiger, Leopard and Snow Leopard systems, we may as well leverage the language to do this job.

Install the plist Ruby library and run the short Ruby script below to convert the file to Firefox's cookies.txt format.

$ sudo gem install plist
$ irb
>> require 'plist'
>> result = Plist::parse_xml("Library/Cookies/Cookies.plist")
>> File.open("cookies.txt", "w") { |f| result.each { |r| f.write("#{r["Domain"]}\tTRUE\t#{r["Path"]}\tFALSE\t#{r["Expires"].strftime("%s")}\t#{r["Name"]}\t#{r["Value"]}\n") } }

Now that we have our cookies.txt file we can download the file we want. Note that the URL for the SDK was found by looking at Apple's download website to see where the download link led.

Below is the wget command line used to download the Xcode and iPhone SDK. Note that the command line arguments tell wget to pipe the downloaded file to split, which breaks the file up to keep our venerable Linux file and web server happy (2 GB file limit). We're splitting the download into 512 MB chunks here.

To make the authentication work we needed to use the header flag and insert the cookie value at the command line. Look in cookies.txt to again find the ADCDownloadAuth key and its data value, then put that data in place of the "XXX" in the command line below for this recipe to work.

wget -qO- -U firefox -ct 0 --timeout=60 --waitretry=60 --load-cookies cookies.txt -c http://adcdownload.apple.com/ios/ios_sdk_4.2__final/xcode_3.2.5_and_ios_sdk_4.2_final.dmg --header="Cookie: ADCDownloadAuth=XXX" | split --bytes=512m - xcode_3.2.5_and_ios_sdk_4.2_final.dmg


You should now see your download commence, and with everything going to plan you'll have your DMG ready to install. You can FTP the pieces to your Macintosh. Once aboard, run

cat part1 part2 part3 ... > combined.dmg

to restore the split components. Happy developing with the latest SDK!
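
The reassembly that cat performs can also be sketched in a few lines of Python if you want a size check along the way. This is a hedged sketch, not part of the original recipe; the chunk naming pattern is an assumption you'd adjust to match what split produced.

```python
import glob


def join_chunks(pattern, output):
    """Concatenate split chunks (in sorted order) back into one file.

    Returns the total number of bytes written so it can be checked
    against the expected download size."""
    parts = sorted(glob.glob(pattern))
    total = 0
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                data = f.read()
                out.write(data)
                total += len(data)
    return total
```

split names its chunks with sorted suffixes (aa, ab, ac, ...), which is why a plain sorted glob reproduces the original order.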

Tuesday, August 31, 2010

Game replay techniques and the importance of floating point determinism

Working our way towards release, we're currently going through a heavy bugfixing phase on the Macintosh version of Eets. One particularly interesting bug was causing incorrect level replays and incorrect official solution videos (not really videos, but in-game replays). We had a floating point determinism problem.

Level completion replays and solution videos in Eets work by replaying user input. It makes a lot of sense to do this, as user input is, relatively speaking, low frequency. Achieving the same result by recording the state of all the objects in the game would make replay files a *lot* larger. (Incidentally, the same technique can be used when writing a network game, where it's often known as the lockstep technique.)

To be able to play recorded user input back into the game engine and have it play out exactly the same, the engine must be completely deterministic. A couple of key components need to be addressed or things go very wrong.

For starters, the engine's random number generators are in fact not random at all. They need to play out the same random-looking sequence of numbers after being seeded each time. Secondly, the mathematics of the gameplay and engine need to be completely deterministic. This is actually not as easy as it sounds.
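
The seeded generator idea can be sketched in a few lines. This is a minimal illustration, not Eets' actual generator; the constants are the well-known Numerical Recipes LCG constants, and because it's pure integer arithmetic it behaves identically on every platform.

```python
class Lcg:
    """A tiny deterministic PRNG: same seed in, same sequence out."""

    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        # Numerical Recipes LCG constants; exact 32-bit integer math,
        # so replays see exactly the same "random" numbers every time.
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state


a, b = Lcg(42), Lcg(42)
assert [a.next() for _ in range(5)] == [b.next() for _ in range(5)]
```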

Next up, the physics engine needs to be designed from the beginning with determinism in mind. In particular, iterative solvers are likely culprits for breaking determinism. Finally, the floating point math in all of it needs to be deterministic. The floating point situation is potentially one of the trickiest parts: calls out to functions in the operating system and other libraries are frequently beyond your control, and they vary from platform to platform.

One would think floating point math would always yield the same results, but the results actually vary slightly between processors, operating systems, compilers and instruction sets. Typically it's rounding method differences that are at play in this diverging scenario. (Did you know? Banking software avoids floats and doubles because of the way they handle rounding.)
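
The rounding issue is easy to demonstrate. A sketch in Python (which uses IEEE 754 doubles under the hood) shows why money code reaches for decimal arithmetic instead:

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so repeated addition
# accumulates rounding error...
total = sum([0.1] * 10)
assert total != 1.0

# ...whereas decimal arithmetic represents 0.1 exactly.
exact = sum([Decimal("0.1")] * 10)
assert exact == Decimal("1.0")
```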

When floating point math was first catered for in silicon, a number of different ways of doing things made it out into the wild. The bulk of the computing public is using x86-type chips these days. In the early days of x86, floating point math was done in software, and understandably it was pretty slow. At some point the x87 co-processors were introduced; they were physically separate processors with their own instruction set. That same instruction set exists today within your average Intel and AMD processor, and it still gets used, but there are even more possibilities thrown into the mix. First came MMX (the multimedia instruction set), then MMX2, SSE, SSE2 and finally SSE3 - not to mention 3DNow! and similarly targeted instruction sets. All of these instruction sets and their corresponding silicon implement floating point math in various ways and to varying degrees.

The Institute of Electrical and Electronics Engineers ratified a standard way of doing floating point math, known as IEEE 754. Making sure that your floating point math happens in this standardised way goes a long way towards reducing the potential for different results.

Since most game engines update through time iteratively, a small error early in the piece can create vastly different results down the track.
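
To see how quickly a tiny difference compounds through an iterative update, consider this sketch. The logistic map here is a stand-in for a physics step (it's chaotic by design, which makes the effect dramatic); the starting values are arbitrary.

```python
def iterate(x, steps):
    """Repeatedly apply a simple non-linear update, standing in for
    one axis of an iterative physics simulation."""
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x


a = iterate(0.4, 100)
b = iterate(0.4 + 1e-15, 100)  # an error in the last couple of bits
assert a != b  # after 100 steps the two runs no longer agree
```

A divergence of one unit in the last place on step one is all it takes; by the end of a level replay the two simulations bear no resemblance to each other.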

Mathematical methods like cos, tan and the other trigonometric functions are also common causes of different results across systems. The reason is that they are so-called transcendental functions: they generate results by evaluating a series approximation, which leaves plenty of scope for implementations to differ.

Some tips on how to locate and fix floating point determinism problems:

  • Use modern IEEE 754 compliant processor instruction sets, SSE and up. Many compilers can be told to use them automatically for floating point math; otherwise you can use them manually via compiler intrinsics or assembly code.

  • Make sure you know what level of floating point optimisation the compiler is using. For the Microsoft compilers, look for problematic switches like /fp:fast. For gcc, look for -mfpmath=sse together with -msse and/or -msse2 (x86 specific).

  • Check results from transcendental functions (tan, sin, cos and their ilk). If they're causing problems, swap in software versions outside of the system library that you have control over.
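
When hunting divergence, comparing printed floats isn't enough - two values can print identically yet differ in the last bit. A helper along these lines (a sketch; struct is Python's standard library) compares exact bit patterns instead:

```python
import struct


def bits(x):
    """Return the exact IEEE 754 bit pattern of a double as a hex string."""
    return struct.pack("<d", x).hex()


# These print almost identically, but the fingerprints differ by one ulp:
a = 0.1 + 0.2
b = 0.3
assert a != b
assert bits(a) != bits(b)
```

Logging fingerprints like this on each platform is a cheap way to pinpoint exactly which operation first diverges.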

Wednesday, August 18, 2010

How to find out what gcc has as implicit defines

When dealing with portability and preprocessor issues while coding in C, it is often very helpful to find out all the macros the GCC compiler defines by default. It's not immediately obvious how to see them. This is how you do it:



gcc -dM -E - < /dev/null
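
Each line of the output has the form `#define NAME VALUE`. If you want the defines programmatically, a small parser is enough. This is a sketch; the gcc invocation is the same one as above, and the `line_source` parameter is just a hook so the parser can be exercised without gcc installed.

```python
import subprocess


def gcc_defines(line_source=None):
    """Parse `#define NAME VALUE` lines into a dict.

    If no text is supplied, ask gcc itself for its builtin macros."""
    if line_source is None:
        line_source = subprocess.run(
            ["gcc", "-dM", "-E", "-"], input="", text=True,
            capture_output=True, check=True).stdout
    defines = {}
    for line in line_source.splitlines():
        parts = line.split(None, 2)  # '#define', NAME, VALUE
        if len(parts) >= 2 and parts[0] == "#define":
            defines[parts[1]] = parts[2] if len(parts) == 3 else ""
    return defines
```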

Wednesday, July 21, 2010

Preview release of KHI 2D technology coming soon

With the release of Eets for Macintosh coming soon, our attention is slowly moving on to our next project in development.

That as yet to be announced project is built upon a 2D engine that has been developed by ourselves.

While our main goal has been to support the development of our next title, we have tried to build it in a general, clean and reusable way. We've also always hoped to be able to release the engine technology for other people to use and we intend to continue to build and maintain it.

We're currently in the process of cleaning up an initial preview release of this technology, tentatively named Simple Game Engine. It's an OpenGL based, accelerated, multi-platform 2D engine. Our initial goals were to provide the following features to support the development of our game.


  • 2D primitives

  • Accelerated via OpenGL

  • Loading of popular file types including JPG, PNG, BMP, DDS and TGA

  • Multi-platform capability

  • Simple, easy to use API

  • Abstraction of native platform window and viewport handling

  • Usable from both C and C++ languages

  • Designed with the long-term goal of supporting consoles



We're looking forward to sharing our initial preview release.

Tuesday, July 6, 2010

Separate debug symbols, just like Windows

Currently we're working heavily on Linux. You might ask why, but for now you'll have to wait to find out.

Having become more familiar with heavy debugging under Linux we'd like to share with you a little tip about being able to ship binaries in a title that are still useful for debugging problems that are discovered out there in the wild.

This is achievable under Linux by shipping debug binaries that have the debugging symbols separated from the binaries.

Being able to do this under Windows is well known; in fact it's the default. Under Linux it's equally possible using the less well known gcc debug-link functionality.

This functionality is particularly useful when a distributed application dumps core on a user. One can get the core file, use the separate debugging information and see exactly where the application crashed. All you need to do when you make a build is put aside the separate debug files.

Generally you don't want to distribute the debug symbols; for most people it's just a waste of space, and it also makes it easier for nefarious types to reverse engineer your code or otherwise manipulate your software.

This is potentially handy to many others, game developers or otherwise who are working under Linux.

The How



Separating debug symbols from the main binary is achieved using objcopy, which is part of the binutils package found on most Linux systems.

We're particularly interested in the command line arguments --only-keep-debug and --add-gnu-debuglink.

What do these command line flags do?



--only-keep-debug produces a new file containing only the debugging sections of the original binary. --add-gnu-debuglink adds a .gnu_debuglink section to the binary; stored in that section is the name of the debug file to look for.

Below is a short shell transcript of how this is achieved:



$ gcc -g -shared -o libtest.so libtest.c
$ objcopy --only-keep-debug libtest.so libtest.dbg
$ objcopy --add-gnu-debuglink=libtest.dbg libtest.so
$ objdump -s -j .gnu_debuglink libtest.so

libtest.so: file format elf32-i386

Contents of section .gnu_debuglink:
0000 6c696274 6573742e 64656275 67000000 libtest.debug...
0010 52a7fd0a R...



The first part is the name of the file; the second part is a CRC32 checksum of the debug-info file for later reference.
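
Since the stored checksum is a standard CRC32, you can verify a debug file against its .gnu_debuglink section yourself. A sketch (zlib here is Python's standard library binding; the chunked read is just to cope with large debug files):

```python
import zlib


def debuglink_crc(path):
    """Compute the CRC32 that .gnu_debuglink stores for a debug file."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF
```

Comparing this value against the four checksum bytes dumped by objdump confirms the binary and debug file still belong together.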

Build ID



Did you know that binaries also get stamped with a unique id when they are built? The ld --build-id flag stamps in a hash near the end of the link.



$ readelf --wide --sections ./libtest.so | grep build
[ 1] .note.gnu.build-id NOTE 000000d4 0000d4 000024 00 A 0 0 4
$ objdump -s -j .note.gnu.build-id libtest.so

libtest.so: file format elf32-i386

Contents of section .note.gnu.build-id:
00d4 04000000 14000000 03000000 474e5500 ............GNU.
00e4 a07ab0e4 7cd54f60 0f5cf66b 5799b05c .z..|.O`.\.kW..\
00f4 2d43f456 -C.V



Although the actual file may change (due to prelink or similar), the hash will not be updated and remains constant.

Finding the debug info files



The last piece of the puzzle is how gdb attempts to find the debug-info files when it is run. The main variable influencing this is debug-file-directory.

After starting gdb, one can inspect it:



(gdb) show debug-file-directory
The directory where separate debug symbols are searched for is "/usr/lib/debug".



The first thing gdb does, which you can verify via strace, is
search for a file called [debug-file-directory]/.build-id/xx/yyyyyy.debug, where xx is the first two hexadecimal digits of the build-id hash and yyyyyy is the rest of it:



$ objdump -s -j .note.gnu.build-id /bin/ls

/bin/ls: file format elf32-i386

Contents of section .note.gnu.build-id:
8048168 04000000 14000000 03000000 474e5500 ............GNU.
8048178 c6fd8024 2a11673c 7c6a5af6 2c65b1b5 ...$*.g<|jZ.,e..
8048188 d7e13fd4 ..?.

... [running gdb /bin/ls] ...

access("/usr/lib/debug/.build-id/c6/fd80242a11673c7c6a5af62c65b1b5d7e13fd4.debug", F_OK) = -1 ENOENT (No such file or directory)



Next it moves on to the debug-link filename. First it looks for that filename in the same directory as the object being debugged. After that it looks for the file in a sub-directory called .debug/ in the same directory.

Finally, it prepends the debug-file-directory to the path of the object being inspected and looks for the debug info there. This is why the /usr/lib/debug directory looks like the root of a file-system; if you're looking for the debug-info of /usr/lib/libfoo.so it will be looked for in /usr/lib/debug/usr/lib/libfoo.so.
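
The whole lookup order can be summarised in a few lines. A sketch (paths and ordering follow the description above; the function and parameter names are ours):

```python
import os


def debug_file_candidates(obj_path, build_id, debug_dir="/usr/lib/debug",
                          link_name=None):
    """Paths gdb tries, in order, when looking for separate debug info."""
    # 1. The build-id path: first two hex digits become a directory.
    paths = [os.path.join(debug_dir, ".build-id",
                          build_id[:2], build_id[2:] + ".debug")]
    if link_name:  # the filename stored in the .gnu_debuglink section
        d = os.path.dirname(obj_path)
        paths += [
            os.path.join(d, link_name),            # 2. beside the object
            os.path.join(d, ".debug", link_name),  # 3. in a .debug/ subdir
            debug_dir + os.path.join(d, link_name) # 4. under debug_dir
        ]
    return paths
```

Candidate 4 is why /usr/lib/debug looks like the root of a file-system.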

Interestingly, the sysroot and solib-search-path don't appear to have anything to do with these lookups. So if you change the sysroot, you also need to change the debug-file-directory to match.

Remember to keep the debug files for every build that gets distributed; then you can load up the binary, core file and debug file together and see exactly what happened.

Wednesday, June 23, 2010

Find awake IP addresses on a subnet using a batch file

I seem to frequently find myself trying to find machines by IP address on a subnet. Often I'm on a random Windows machine with nothing but the basic install to do it. What do I do? Good ol' batch scripting. You can find the machines awake on a subnet using a script like the one below. I might call it find_ip.bat.



@echo off
SET t=0
:start
SET /a t=t+1
ping -n 1 -l 1 192.168.0.%t% > nul
if %errorlevel%==0 echo Host 192.168.0.%t% is UP!
IF %t%==254 Exit
Goto start



Just substitute your IP subnet for 192.168.0.x and away you go.

Friday, April 23, 2010

Unity 3D: A rough and ready computation of normals - useful for procedural meshes

Here is a rough and ready method to compute a mesh's vertex normals, knowing the vertices and indices of your mesh.



List<Vector3>[] normalBuffer = new List<Vector3>[NumVerts];

for (int vl = 0; vl < normalBuffer.Length; ++vl) {
    normalBuffer[vl] = new List<Vector3>();
}

for (int i = 0; i < NumIndices; i += 3)
{
    // Get the three vertices that make up the face.
    Vector3 p1 = m_original[m_mesh.triangles[i+0]];
    Vector3 p2 = m_original[m_mesh.triangles[i+1]];
    Vector3 p3 = m_original[m_mesh.triangles[i+2]];

    Vector3 v1 = p2 - p1;
    Vector3 v2 = p3 - p1;
    Vector3 normal = Vector3.Cross(v1, v2);

    normal.Normalize();

    // Store the face's normal for each of the vertices that make up the face.
    normalBuffer[m_mesh.triangles[i+0]].Add(normal);
    normalBuffer[m_mesh.triangles[i+1]].Add(normal);
    normalBuffer[m_mesh.triangles[i+2]].Add(normal);
}

for (int i = 0; i < NumVerts; ++i)
{
    // Average the face normals contributed to this vertex.
    for (int j = 0; j < normalBuffer[i].Count; ++j) {
        m_normals[i] += normalBuffer[i][j];
    }

    m_normals[i] /= normalBuffer[i].Count;

    // The average of unit vectors isn't unit length, so renormalise.
    m_normals[i].Normalize();
}

Wednesday, April 14, 2010

Unity 3D Immediate Mode: What's the trick with GL.modelview?

Unity 3D's immediate mode is really useful for debugging or adding a bit of chrome to a scene. While it's not the most efficient way of getting something on the screen, it's quick and handy. (For those not using Unity 3D Pro: the GL namespace and functionality isn't available to you.)

Here's a little tip for setting the GL.modelview matrix so you can pump local space vertices into your GL.Vertex calls and have everything appear in the right spot.

For example, if I want to draw a line using model local space, I need to set up the modelview matrix so GL primitives appear in the right place in our 3D world.

First of all we grab the scene camera (using, for example, the C# API):


GameObject camera = GameObject.Find("Main Camera");



Next we need to compose a matrix that will take into account the scene camera's position and the position of the model we're using. The final trick of composing this matrix is to convert from a Left handed co-ordinate system to a right handed co-ordinate system.

Unity 3D normally uses a left handed camera co-ordinate system, where Z is positive leading out of the front of the camera. The underlying rendering system (originally designed on the Macintosh and OpenGL) is a right handed system (where Z is negative out of the camera). GL.modelview is expected to be right handed.

So to compose the correct modelview matrix, we first create a matrix to transform from a left handed to a right handed system. We could do:




Matrix4x4 mat = Matrix4x4.identity;
mat[2,2] *= -1.0f;



Now we're ready to go. If we have the camera transform, the model transform and our conversion matrix, the result looks like this:



GL.modelview = mat * (camera.transform.worldToLocalMatrix  * transform.localToWorldMatrix);



Now you can issue GL.Vertex3 commands in model local space.

Monday, April 5, 2010

Schedule Steam (and use your Offpeak bandwidth)

Antipodeans get a raw deal when it comes to the Internet. It's expensive and slow, not to mention it's become a political hot potato, with several poorly thought out internet schemes being pushed through in Canberra.

Politics aside for now, that's not the reason for this post.

With the core market for Valve's Steam service no doubt being the American market, little attention was paid to download manager style features such as bandwidth throttling and scheduling. Valve's bandwidth heavy cornucopia of software can chew through the average Australian/NZ household's monthly bandwidth allocation in hours.

Most Australian bandwidth plans have a Peak and Offpeak timesplit. Peak obviously falls into all those times you're likely to use the Internet. Unless you're a serious nightowl, at month's end you've probably still got a considerable amount of your Offpeak bandwidth allocation remaining.

Steam doesn't offer built-in download scheduling, so here is a short recipe on how you can coax Steam into scheduling a download for the offpeak internet hours.

I'm using Windows XP in this example; I can only assume it's similar for Vista and Windows 7.

Firstly on the Login screen of Steam make sure you have the "remember password" option checked. You'll need to do this to allow Steam to automatically login and resume or initiate a download.

Log in to Steam initially, go to the "My Games" tab and right click to "Install game..." on an undownloaded title you'd like to download and install. Steam will present you with details about the install; click next to proceed. It will then process the file cache and ask if you wish to create a shortcut on the desktop, which you should check (this is needed later). The download will begin.

Now that you've completed the manual steps to set up the download, you can exit Steam. Once it's shut down we can schedule the download for offpeak hours.

Go to your Desktop and find the shortcut you chose to create from Steam in the previous step. Right click on the shortcut and choose properties. You'll be presented with the shortcut properties. On this screen there will be a textbox labelled "Target". Highlight all the text in this textbox and copy it to your pasteboard. In this example the text I copied is (It's Half Life 2: Deathmatch):

"c:\Program Files\Steam\steam.exe\" -applaunch 240

Next choose Start->Run from the main OS menu, and type "cmd" into the textbox presented in order to open a command window.

We'll be using the command line command "schtasks" to schedule the Steam download.

In the example below I've scheduled Steam to begin downloading at 2:00 AM in the morning. This is a typical offpeak time, but yours might be different, so you might want to check.

Now we can construct the command line:

schtasks /Create /TR "\"C:\Program Files\Steam\steam.exe\" -applaunch 240" /ST 02:00:00 /SC ONCE /TN Steam

I've used the text I copied from the desktop shortcut, as you can see. Pay special attention to the "\" characters I've inserted before the quotes; these are required to input the whole command properly.

The schtasks application should ask you for your user password to properly schedule Steam. If your user account doesn't have a password, you should probably set one; typically schtasks won't run properly without one.

You can see the scheduled task(s) by just running "schtasks". It's generally worth trying it first by scheduling it a minute or two into the future to test that it's all working properly. Just log out of Steam and reschedule to the appropriate time once you're satisfied.

Monday, March 22, 2010

Derivation of the perspective matrix, Part 2

In part 2 of Derivation of a Perspective Matrix we look at the actual Matrix part.

In part 1 we learned how to map points inside our viewing frustum to points on our screen. From here we'd like to see how this becomes a perspective matrix.

To move on I'd like to introduce the concept of the canonical view volume. The canonical view volume is the view volume (the visible area in front of the virtual camera) that is effectively scaled to fit nicely inside a volume where all x and y values are between [-1, 1], and the z values are between [0, 1]. By applying this scaling to points within the camera view volume it becomes trivial to test to see if points lie within the camera view.

The reason we do a mapping from view volume to canonical view volume rather than a straight map to a plane is that we'd ideally like to be able to keep the Z value to be able to test for depth of a point within a scene. We can easily compare a point in canonical view volume space to another point in canonical view volume space to determine if one is potentially part of geometry that obscures other geometry. In modern computer graphics the process happens in the graphics driver, or indeed the graphics hardware, but it does explain the reasoning for the representation.

The view volume for a camera which demonstrates perspective looks like a pyramid with the top chopped off. You can see this shape easily in the original figure depicting the first part of the derivation. The canonical view volume is that volume with all values scaled so that x and y are bounded in dimension by [-1, 1] and z by [0, 1]. This space is called clip space.

At the end of Part 1 we derived the formula mapping eye space X to screen space X, and eye space Y to screen space Y. In clipping space we keep the Z value.

To go on we need to be familiar with the concept of homogeneous co-ordinates. Homogeneous just means all of the same type/all-together/all the same. All the same of what? You might well ask. Let's start with the basics and refresh our memory about Euclidean space. Euclidean space is the maths we are familiar with when dealing with the basic math of points, vectors and lines. For computer graphics we normally deal with the 2 dimensional Cartesian plane, or the 3 dimensional "real coordinate space". So Euclidean space co-ordinates are basically the mundane 2D and 3D co-ordinate systems we should all be familiar with by now. In 2D, we generally define the space using linearly independent axes denoted by x and y, and in 3D linearly independent axes x, y and z.

Homogeneous coordinates refer to points in what is known as projective space. The mathematics of projective space is such that points and lines in projective space have corresponding points in Euclidean space. So the two spaces, Euclidean and projective, are connected by a relationship. Points can be converted from one space to the other easily, and each point in one space has its equivalent in the other. The word homogeneous in this case refers to that equivalence.

One particularly nice aspect of working in projective space is that if we are dealing with transformations using matrix mathematics, we can create a 4 dimensional matrix that in practice is equivalent to a 3 dimensional Euclidean space rotation matrix applied to a point, followed by a translation applied to the same point.

The other nicety of projective space for those working in computer graphics is that it's ideally suited to working with projections! Exactly what we're working on deriving here.

Projective space has an additional coordinate, so a 2 dimensional Euclidean point is represented by a 3 dimensional projective point, and a 3 dimensional Euclidean point has 4 dimensions in projective space. As we live in 3 dimensional meat space, the 4 dimensional part is impossible to visualize. It's probably better not to try. Suffice it to say the extra dimension just provides an additional reference for identifying a point.

There are infinitely many projective space points that map to each point in Euclidean space, but the most basic and obvious representation of a Euclidean point in projective space is the one where the projective (additional) coordinate is 1. A point (x, y) in 2D Euclidean space becomes (x, y, 1) in projective space, and a point (x, y, z) becomes (x, y, z, 1). The projective coordinate is typically represented by the letter w. When w = 1, the Euclidean space coordinate is plain to see.

As w is the projective coordinate, the general rule for converting from homogeneous coordinates to Euclidean coordinates is to divide the other coordinates by it: (x, y, z, w) in projective space is (x/w, y/w, z/w) in Euclidean space. Knowing this, it is possible to see that the projective space points (4, 2, 2, 1) and (8, 4, 4, 2) are the same point (4, 2, 2) in Euclidean space.
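
The conversion rule is tiny; a sketch makes the equivalence above concrete:

```python
def to_euclidean(p):
    """Convert a homogeneous point (x, y, z, w) to Euclidean (x/w, y/w, z/w)."""
    x, y, z, w = p
    return (x / w, y / w, z / w)


# (4, 2, 2, 1) and (8, 4, 4, 2) name the same Euclidean point:
assert to_euclidean((4, 2, 2, 1)) == to_euclidean((8, 4, 4, 2)) == (4.0, 2.0, 2.0)
```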

Let's put down our equations for the conversion to clipping space from 3D eye space. For x and y they're pretty much the same as the formulas for screen space (θ below is the camera's field of view):

\[ X_{clip} = \frac{x}{z\,\tan(\theta/2)} \qquad Y_{clip} = \frac{y}{z\,\tan(\theta/2)} \]

We don't really have anything for the z part yet. We do know that we want to remove the dependence on z on the right hand side, to create linear equations we can place into a matrix, so we'll multiply the equations through by z, leaving simple linear expressions on the right hand side. We arrive at

\[ X_{clip}\,z = \frac{x}{\tan(\theta/2)} \qquad Y_{clip}\,z = \frac{y}{\tan(\theta/2)} \]

This might not look useful just yet, but bear with me. We've got these two formulae mapping into some odd space that's a factor of z. Now we'd like to hang on to the z co-ordinate, so we posit a point represented by

\[ (X_{clip}\,z,\; Y_{clip}\,z,\; Z_{clip}\,z) \]

So we're trying to find a matrix which maps the point (x, y, z) to (Xclip·z, Yclip·z, Zclip·z). Since each term of (Xclip·z, Yclip·z, Zclip·z) is dependent on z, we can divide everything by z and end up with (Xclip, Yclip, Zclip), which is exactly what we want. We already have the Xclip·z and Yclip·z components and are currently looking for the Zclip·z component. We know the formula will not in any way depend on x or y, as the z axis is orthogonal to the plane of projection. Thus the most complicated it can be is a scalar multiple of z plus a constant. So we're looking at something like

\[ Z_{clip}\,z = p\,z + q \]

where p and q are constants.

We have a chance of working out what these constants will be because we know that our camera frustum is bounded by the near plane and the far plane. We'd also like our screen space Zclip result to be scaled between 0 and 1. This is a nice way to do it, and it's how 3D APIs like OpenGL and DirectX work. So we've got some basic facts to work with.

So we say that Zclip= 0 when z = D (near plane) and that Zclip = 1 when z = F (far plane).

We've got Zclip = 0 when z = D, so the left hand side of the equation is zero and we can solve for q.

\[ 0 = p\,D + q \]
\[ q = -p\,D \]

So let's do the same thing for when z = F (remembering Zclip = 1 there, so Zclip·z = F):

\[ F = p\,F + q \]

We know what q is from the earlier step

\[ F = p\,F - p\,D \]
\[ F = p\,(F - D) \]
\[ p = \frac{F}{F - D} \qquad q = -\frac{D\,F}{F - D} \]

Now we have values for the constants p and q that actually mean something. Returning to our original equation and substituting yields

\[ Z_{clip}\,z = \frac{F}{F - D}\,z - \frac{D\,F}{F - D} \]

So we're left with the equations

\[ X_{clip}\,z = \frac{x}{\tan(\theta/2)} \]
\[ Y_{clip}\,z = \frac{y}{\tan(\theta/2)} \]
\[ Z_{clip}\,z = \frac{F}{F - D}\,z - \frac{D\,F}{F - D} \]

Now if we move this calculation into projective space using homogeneous coordinates, we can say we're writing a transform to (Xclip·z, Yclip·z, Zclip·z, Ws·z). Normally we'd write Ws = 1 for the simplest equivalence in projective space. So if Ws equals 1 then we can see that

\[ W_{s}\,z = z \]

Now we've got four equations we can put into a matrix, yielding:

\[ \begin{pmatrix} \frac{1}{\tan(\theta/2)} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(\theta/2)} & 0 & 0 \\ 0 & 0 & \frac{F}{F-D} & -\frac{D\,F}{F-D} \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} X_{clip}\,z \\ Y_{clip}\,z \\ Z_{clip}\,z \\ z \end{pmatrix} \]

So this matrix maps a Euclidean point represented as the homogeneous coordinate (x, y, z, 1) to (Xclip·z, Yclip·z, Zclip·z, z) as a homogeneous coordinate; dividing through by z to create a Euclidean coordinate yields (Xclip, Yclip, Zclip, 1) - our desired screen space coordinate.

This current set of equations assumes a completely square view screen. If we take into account different possible aspect ratios, we add a term for the aspect ratio, defined as the view port width versus the height; that is, the width of the near plane (which we assume is our projection screen) versus its height.



We introduce this term into the equation for the Xclip coordinate, and follow through to yield the matrix

[ D/a  0    0            0           ]
[ 0    D    0            0           ]
[ 0    0    F/(F - D)   -D*F/(F - D) ]
[ 0    0    1            0           ]

And here we have one common form of the projection matrix.
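To sanity check the derivation, here's a minimal C++ sketch of the matrix applied to a point, followed by the divide by w. The names Project and ClipPoint are just illustrative; D is the near plane, F the far plane, and a the aspect ratio. A point on the near plane should map to Zclip = 0 and a point on the far plane to Zclip = 1.

```cpp
#include <cassert>
#include <cmath>

struct ClipPoint { float x, y, z; };

// Apply the projection matrix rows to (x, y, z, 1), then perform
// the perspective divide by w (which the last matrix row sets to z).
ClipPoint Project(float x, float y, float z, float D, float F, float a) {
    float cx = (D / a) * x;                              // row 1
    float cy = D * y;                                    // row 2
    float cz = (F / (F - D)) * z - (D * F) / (F - D);    // row 3
    float w  = z;                                        // row 4
    return ClipPoint{ cx / w, cy / w, cz / w };
}
```

For example, with D = 1 and F = 100, Project(0, 0, 1, 1, 100, 1) gives Zclip = 0 and Project(0, 0, 100, 1, 100, 1) gives Zclip = 1, as intended.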

Wednesday, February 10, 2010

Your own Custom Carbon Application Event Loop

When one develops a Carbon API application, you normally just set up all your application callbacks and logic in your code before you hit the RunApplicationEventLoop call, which doesn't return until you quit.

Certainly you can run different bits of code based on events and so on, and that works just fine for most cases.

There are certain cases where this may not work for you and you want finer-grained control over when different bits and pieces of your code run. You may wish you could get inside RunApplicationEventLoop and do things how you would like. If this sounds like you, then there is a way to do it.

I needed this when porting a title to OS X, in order to give the same behaviour as the Windows build. Rather than work out how I could get it all to happen by syncing between updates and renders, I just implemented my own loop, which gave me quick and easily predictable control.

Apple haven't told us exactly what happens in RunApplicationEventLoop, but there is a way to write your own loop that has certainly worked for folks so far. See the code below.



static EventHandlerUPP gQuitEventHandlerUPP;      // -> QuitEventHandler
static EventHandlerUPP gEventLoopEventHandlerUPP; // -> EventLoopEventHandler

static OSStatus QuitEventHandler(EventHandlerCallRef inHandlerCallRef,
                                 EventRef inEvent, void *inUserData)
{
    OSStatus err;

    err = CallNextEventHandler(inHandlerCallRef, inEvent);
    if (err == noErr) {
        *((Boolean *) inUserData) = true;
    }

    return err;
}

static OSStatus EventLoopEventHandler(EventHandlerCallRef inHandlerCallRef,
                                      EventRef inEvent, void* inUserData)
{
    OSStatus err;
    OSStatus junk;
    EventHandlerRef installedHandler;
    EventTargetRef theTarget;
    EventRef theEvent;
    Boolean quitNow;
    static const EventTypeSpec eventSpec = {kEventClassApplication, kEventAppQuit};

    quitNow = false;

    // Install our override on the kEventClassApplication, kEventAppQuit event.
    err = InstallEventHandler(GetApplicationEventTarget(), gQuitEventHandlerUPP,
                              1, &eventSpec, &quitNow, &installedHandler);
    if (err == noErr) {

        // Run our event loop until quitNow is set.
        theTarget = GetEventDispatcherTarget();
        do {
            err = ReceiveNextEvent(0, NULL, kEventDurationNoWait,
                                   true, &theEvent);
            if (err == noErr) {
                SendEventToEventTarget(theEvent, theTarget);
                ReleaseEvent(theEvent);
            }

            // Run application code
            RunOurApplicationCodeHere();

        } while ( ! quitNow );

        junk = RemoveEventHandler(installedHandler);
    }

    return err;
}


static void RunCustomApplicationEventLoop()
{
    static const EventTypeSpec eventSpec = {'KWIN', 'KWIN'};
    OSStatus err;
    OSStatus junk;
    EventHandlerRef installedHandler;
    EventRef dummyEvent;

    dummyEvent = nil;

    err = noErr;
    if (gEventLoopEventHandlerUPP == nil) {
        gEventLoopEventHandlerUPP = NewEventHandlerUPP(EventLoopEventHandler);
    }
    if (gQuitEventHandlerUPP == nil) {
        gQuitEventHandlerUPP = NewEventHandlerUPP(QuitEventHandler);
    }
    if (gEventLoopEventHandlerUPP == nil || gQuitEventHandlerUPP == nil) {
        err = memFullErr;
    }

    if (err == noErr) {
        err = InstallEventHandler(GetApplicationEventTarget(), gEventLoopEventHandlerUPP,
                                  1, &eventSpec, nil, &installedHandler);
        if (err == noErr) {
            err = MacCreateEvent(nil, 'KWIN', 'KWIN', GetCurrentEventTime(),
                                 kEventAttributeNone, &dummyEvent);
            if (err == noErr) {
                err = PostEventToQueue(GetMainEventQueue(), dummyEvent,
                                       kEventPriorityHigh);
            }
            if (err == noErr) {
                RunApplicationEventLoop();
            }

            junk = RemoveEventHandler(installedHandler);
        }
    }

    if (dummyEvent != nil) {
        ReleaseEvent(dummyEvent);
    }
}


This code creates a custom event loop that gets entered from the normal RunApplicationEventLoop when the event for it is fired (very early on). The custom loop runs the normal event pump as expected, and runs our own application code on every pass. A custom quit event handler is installed to signal the end of the custom event loop. Simple!
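With the Carbon calls stripped away, the shape of the loop is easier to see. This is just a toy model of the same pattern, using nothing from the Carbon API (Event and RunLoop are made-up names): pump whatever events are queued without blocking, run your application code every pass, and exit once a quit event has been handled.

```cpp
#include <cassert>
#include <deque>

enum class Event { Tick, Quit };

// Toy stand-in for the custom loop: returns how many times the
// "application code" step ran before a Quit event was handled.
int RunLoop(std::deque<Event>& queue) {
    bool quitNow = false;
    int frames = 0;
    while (!quitNow) {
        if (!queue.empty()) {       // like ReceiveNextEvent with kEventDurationNoWait
            Event e = queue.front();
            queue.pop_front();
            if (e == Event::Quit)   // like the installed kEventAppQuit override
                quitNow = true;
        }
        ++frames;                   // stands in for RunOurApplicationCodeHere()
    }
    return frames;
}
```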

Saturday, January 23, 2010

Convert a Windows ico file to a Macintosh icns file

I needed to convert a Windows platform standard ico format icon file to a Macintosh icns format icon file and didn't find any quick relevant details via Google.

Well, here is a quick little recipe for doing so without having to use any additional software, only what is already on your Macintosh. This worked just fine on my Leopard machine, so it should work at least on Leopard and Snow Leopard.

From the command line do the following, substituting in your relevant information.


pookie:tmp admin$ sips -s format tiff icon.ico --out icon.tiff
pookie:tmp admin$ tiff2icns -noLarge icon.tiff icon.icns

Friday, January 15, 2010

Bresenham's circle, Open GL and blowing holes in textures

Bresenham's circle algorithm is actually a variation on Bresenham's line drawing algorithm, and as such it gets its name, even though Bresenham didn't really invent the circle part.

Playing with destructible terrain, I wanted to be able to blow circular holes in a texture. I succeeded by fetching a texture with Open GL and twiddling the bytes with the circle algorithm.

The code below is in the spirit of what I did. It is the main meat of the algorithm; you might call it DrawFilledCircle() or some such.


/*

Input Parameters:

Vector2 pos; // The position of the explosion/circle center in texture pixel space.
float radius; // The radius of the explosion/circle in pixels
unsigned char* buf; // The pixel array - pixels are in RGBA format
Texture* texture; // a texture or texture info pointer
Colour colour; // the colour RGBA that you want the circle to be
*/

int width = texture->GetWidth();
int height = texture->GetHeight();

int left = int(pos.x - radius);
int right = int(pos.x + radius);
int top = int(pos.y + radius);
int bottom = int(pos.y - radius);

// check to see the circle will even touch the texture
if (!((left < width && right > 0) && (bottom < height && top > 0)))
{
    return;
}

int max_x = std::min(right, width);
int max_y = std::min(top, height);

int r = (int)radius;
int x = 0;
int y = r;

float p = 1 - r;

while (x < y)
{
    if (p < 0)
    {
        x += 1;
        p = p + 2 * x + 1;
    }
    else
    {
        x += 1;
        y -= 1;
        p = p + 2 * (x - y) + 1;
    }
    CircleLineFill(buf, width, -x + pos.x, y + pos.y, x*2, colour, max_x, max_y);
    CircleLineFill(buf, width, -x + pos.x, -y + pos.y, x*2, colour, max_x, max_y);
    CircleLineFill(buf, width, -y + pos.x, x + pos.y, y*2, colour, max_x, max_y);
    CircleLineFill(buf, width, -y + pos.x, -x + pos.y, y*2, colour, max_x, max_y);
}

The next method is CircleLineFill. The normal algorithm draws just the outline of a circle and doesn't fill it; this slight variation fills the entire circle, leaving the edges of the intersection with a black edge.


void CircleLineFill(unsigned char* buf, int width, int x, int y, int length, Colour col, int max_x, int max_y)
{
    if ((y < 0 || y >= max_y) ||
        (x + length < 0) ||
        (x >= max_x))
        return;

    int right = std::min(x + length - 1, max_x);
    int left = std::max(0, x + 1);

    Colour* pixel = NULL;
    if (x >= 0)
    {
        pixel = (Colour*)((char*)buf + (y * width * sizeof(Colour) + sizeof(Colour) * x));
        if (pixel->a != 0)
        {
            *pixel = Colour(0, 0, 0, 255);
        }
    }
    int dwords = right - left;
    if (dwords > 0)
    {
        pixel = (Colour*)((char*)buf + (y * width * sizeof(Colour) + sizeof(Colour) * left));
        memset(pixel, col.c, dwords * sizeof(Colour));
    }
    if (right > 0 && right < max_x)
    {
        pixel = (Colour*)((char*)buf + (y * width * sizeof(Colour) + sizeof(Colour) * right));
        if (pixel->a != 0)
        {
            *pixel = Colour(0, 0, 0, 255);
        }
    }
}
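If you want to play with the decision variable on its own, here's a self-contained sketch of the first-octant walk used above. OctantPoints is a made-up name; it runs the same p, x, y updates as the loop in DrawFilledCircle and just counts the steps taken, without touching any pixels.

```cpp
#include <cassert>

// Walk the first octant of a midpoint circle of radius r and count
// the steps; each step yields one point whose seven mirror images
// cover the other octants, as in the CircleLineFill calls above.
int OctantPoints(int r) {
    int x = 0;
    int y = r;
    int p = 1 - r;  // midpoint decision variable
    int count = 0;
    while (x < y) {
        if (p < 0) {
            x += 1;
            p = p + 2 * x + 1;          // midpoint is inside: step east
        } else {
            x += 1;
            y -= 1;
            p = p + 2 * (x - y) + 1;    // midpoint is outside: step south-east
        }
        ++count;
    }
    return count;
}
```

For a radius of 10 this takes 7 steps through the octant before x catches up with y.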

Tuesday, January 12, 2010

Byte Array to String in C# - Unity Debugging

While using C# and Unity, I found myself reading data straight from the network (with a BinaryReader) during a debugging session.

I was trying to work out why data was sent correctly from the server but was being interpreted so differently on the Unity client. I could easily see what bytes (in hex format) were generated by the Python server (with its very useful command line debugging ability), but on Unity I needed to think for a second about how to display the bytes of my serialized object nicely in hex format.

The easiest way turned out to be as below.


// example byte array read from a binary stream
BinaryReader stream = new BinaryReader(buffer);
byte[] byteArray = stream.ReadBytes(3);

// import UnityEngine for Debug.Log
Debug.Log(BitConverter.ToString(byteArray));


The output looks as follows.

01-AB-CD
UnityEngine.Debug:Log(Object)
...
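For comparison, the same dash-separated hex formatting is easy to reproduce outside of .NET. Here's a small C++ sketch that mirrors what BitConverter.ToString does for a byte array; BytesToHexString is a made-up name.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Format each byte as two uppercase hex digits, separated by dashes,
// matching the output of C#'s BitConverter.ToString.
std::string BytesToHexString(const std::vector<unsigned char>& bytes) {
    std::string out;
    char tmp[4];
    for (std::size_t i = 0; i < bytes.size(); ++i) {
        std::snprintf(tmp, sizeof(tmp), "%02X", static_cast<unsigned>(bytes[i]));
        if (i != 0) out += '-';
        out += tmp;
    }
    return out;
}
```

Feeding it the bytes 0x01, 0xAB, 0xCD produces "01-AB-CD", the same string Debug.Log printed above.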