Wednesday, June 23, 2010

Find awake IP addresses on a subnet using a batch file

I seem to frequently find myself trying to find machines by IP address on a subnet. Often I'm on a random Windows machine and I've got nothing but the basic install to do it with. What do I do? Good ol' batch script. You can find machines awake on a subnet using a script like this. I might call it find_ip.bat



@echo off
REM Ping each host on the subnet once and report the ones that answer.
SET t=0
:start
SET /a t=t+1
ping -n 1 -l 1 -w 100 192.168.0.%t% > nul
if %errorlevel%==0 echo Host 192.168.0.%t% is UP!
IF %t%==254 exit /b
Goto start



Just substitute your IP subnet for 192.168.0.x and away you go.
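If you happen to have Python on the box, the same sweep can be written cross-platform. This is my own sketch, not part of the original batch recipe; the function names are mine, and the ping flags differ between Windows and Unix, hence the platform check:

```python
import platform
import subprocess

def addresses(prefix):
    """All 254 host addresses on a /24, e.g. '192.168.0.1' .. '192.168.0.254'."""
    return [f"{prefix}.{host}" for host in range(1, 255)]

def is_up(addr):
    # Windows ping uses -n (count) and -w (timeout, ms); Unix uses -c and -W (seconds).
    if platform.system() == "Windows":
        cmd = ["ping", "-n", "1", "-w", "200", addr]
    else:
        cmd = ["ping", "-c", "1", "-W", "1", addr]
    return subprocess.run(cmd, stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode == 0

def sweep(prefix="192.168.0"):
    return [addr for addr in addresses(prefix) if is_up(addr)]

print(addresses("192.168.0")[:3])  # ['192.168.0.1', '192.168.0.2', '192.168.0.3']
```

Call sweep() to run the actual scan; it will take a while on a quiet subnet, just like the batch version.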

Friday, April 23, 2010

Unity 3D: A rough and ready computation of normals - useful for procedural meshes

Here is a rough and ready method to compute a mesh's vertex normals, given the vertices and the indices of your mesh.



List<Vector3>[] normalBuffer = new List<Vector3>[NumVerts];

for (int vl = 0; vl < normalBuffer.Length; ++vl) {
    normalBuffer[vl] = new List<Vector3>();
}

for (int i = 0; i < NumIndices; i += 3)
{
    // Get the three vertices that make up the face.
    Vector3 p1 = m_original[m_mesh.triangles[i + 0]];
    Vector3 p2 = m_original[m_mesh.triangles[i + 1]];
    Vector3 p3 = m_original[m_mesh.triangles[i + 2]];

    Vector3 v1 = p2 - p1;
    Vector3 v2 = p3 - p1;
    Vector3 normal = Vector3.Cross(v1, v2);

    normal.Normalize();

    // Store the face's normal for each of the vertices that make up the face.
    normalBuffer[m_mesh.triangles[i + 0]].Add(normal);
    normalBuffer[m_mesh.triangles[i + 1]].Add(normal);
    normalBuffer[m_mesh.triangles[i + 2]].Add(normal);
}

for (int i = 0; i < NumVerts; ++i)
{
    // Average the face normals stored at this vertex...
    for (int j = 0; j < normalBuffer[i].Count; ++j) {
        m_normals[i] += normalBuffer[i][j];
    }

    m_normals[i] /= normalBuffer[i].Count;

    // ...and re-normalize, since the average of unit vectors
    // is not necessarily unit length.
    m_normals[i].Normalize();
}
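To make the steps easy to follow, here is the same face-normal averaging sketched in plain Python (my own illustration, not Unity code) on a tiny mesh: a unit quad in the z = 0 plane, split into two triangles.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def vertex_normals(vertices, triangles):
    buckets = [[] for _ in vertices]  # one list of face normals per vertex
    for i in range(0, len(triangles), 3):
        p1, p2, p3 = (vertices[triangles[i + k]] for k in range(3))
        n = normalize(cross(sub(p2, p1), sub(p3, p1)))
        for k in range(3):
            buckets[triangles[i + k]].append(n)
    # Average the face normals stored at each vertex, then re-normalize.
    normals = []
    for faces in buckets:
        s = (sum(n[0] for n in faces),
             sum(n[1] for n in faces),
             sum(n[2] for n in faces))
        normals.append(normalize(s))
    return normals

verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [0, 1, 2, 0, 2, 3]
print(vertex_normals(verts, tris))  # every normal is (0.0, 0.0, 1.0) for a flat quad
```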

Wednesday, April 14, 2010

Unity 3D Immediate Mode: What's the trick with GL.modelview?

Unity 3D's immediate mode is really useful for debugging or adding a bit of chrome to a scene. While it's not the most efficient way of getting something on the screen, it's quick and handy. Note that for those not using Unity 3D Pro, the GL namespace and its functionality aren't available.

Here's a little tip for setting the GL.modelview matrix so you can pump local space vertices into your GL.Vertex calls and have everything appear in the right spot.

For example, if I want to draw a line using model local space, I need to set up the modelview matrix so GL primitives appear in the right place in our 3D world.

First of all we grab the scene camera, for example (using the C# API):


GameObject camera = GameObject.Find("Main Camera");



Next we need to compose a matrix that will take into account the scene camera's position and the position of the model we're using. The final trick in composing this matrix is to convert from a left handed co-ordinate system to a right handed co-ordinate system.

Unity 3D normally uses a left handed camera co-ordinate system, where Z is positive leading out of the front of the camera. The underlying rendering system (originally designed on the Macintosh and OpenGL) is a right handed system (where Z is negative out of the camera). GL.modelview is expected to be right handed.

So to compose the correct modelview matrix we're going to first create a matrix to transform from a left handed to a right handed system. We could do:




Matrix4x4 mat = Matrix4x4.identity;
mat[2,2] *= -1.0f;



Now we're ready to go: if we have the camera transform, the model transform and our conversion matrix, the result looks like this:



GL.modelview = mat * (camera.transform.worldToLocalMatrix  * transform.localToWorldMatrix);



Now you can issue GL.Vertex3 commands in model local space.
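As a quick numeric illustration of the handedness flip (my own sketch in plain Python, not Unity code): multiplying by a matrix whose [2][2] entry is -1 negates the z component, turning a left handed +z view direction into the right handed -z direction that GL.modelview expects.

```python
def apply(matrix, v):
    # Multiply a 4x4 matrix (row-major, column-vector convention) by (x, y, z, 1)
    # and return the transformed x, y, z.
    x, y, z = v
    col = [x, y, z, 1.0]
    return [sum(matrix[r][c] * col[c] for c in range(4)) for r in range(4)][:3]

# Identity with the [2][2] entry negated, as built in the snippet above.
flip = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, -1, 0],
        [0, 0, 0, 1]]

print(apply(flip, (0.0, 0.0, 5.0)))  # [0.0, 0.0, -5.0]
```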

Monday, April 5, 2010

Schedule Steam (and use your Offpeak bandwidth)

Antipodeans get a raw deal when it comes to the Internet. It's expensive and slow, not to mention it's become a political hot potato, with several poorly thought out schemes relating to the internet being pushed through in Canberra.

Politics aside for now, that's not the reason for this post.

With the core market for Valve's Steam service no doubt being the American market, little attention was paid to download manager style features such as bandwidth throttling and scheduling. Valve's bandwidth heavy cornucopia of software can chew through the average Australian/NZ household's monthly bandwidth allocation in hours.

Most Australian bandwidth plans have a Peak and Offpeak time split. Peak obviously falls into all those times you're likely to use the Internet. Unless you're a serious nightowl, at month's end you've probably still got a considerable amount of your Offpeak bandwidth allocation remaining.

Steam doesn't offer built in download scheduling, so here is a short recipe on how you can coax Steam into scheduling a download for the Offpeak internet hours.

I'm using Windows XP in this example, I can only assume it's similar for Vista and Windows 7.

Firstly on the Login screen of Steam make sure you have the "remember password" option checked. You'll need to do this to allow Steam to automatically login and resume or initiate a download.

Log in to Steam initially, go to the "My Games" tab, and right click to "Install game..." on an undownloaded title you'd like to download and install. Steam will present you with details about the install; click next to proceed. It will then process the file cache, and then ask you if you wish to create a shortcut on the desktop, which you should check (this is needed for later). The download will begin.

Now that you've completed the manual steps to set up the download, you can exit Steam. Once it's shut down we can schedule the download for Offpeak hours.

Go to your Desktop and find the shortcut you chose to create from Steam in the previous step. Right click on the shortcut and choose properties. You'll be presented with the shortcut properties. On this screen there will be a textbox labelled "Target". Highlight all the text in this textbox and copy it to your clipboard. In this example the text I copied is (it's Half Life 2: Deathmatch):

"c:\Program Files\Steam\steam.exe\" -applaunch 240

Next choose Start->Run from the main OS menu, and type "cmd" into the textbox presented in order to open a command window.

We'll be using the command line command "schtasks" to schedule the Steam download.

In the example below I've scheduled Steam to begin downloading at 2:00 AM. This is a typical Offpeak time, but yours might be different, so you might want to check.

Now we can construct the command line:

schtasks /Create /TR "\"C:\Program Files\Steam\steam.exe\" -applaunch 240" /ST 02:00:00 /SC ONCE /TN Steam

I've used the text I copied from the desktop shortcut, as you can see. Pay special attention to the "\" characters I've inserted before the quotes. These are required to be able to input the whole command properly.

The schtasks application should ask you for your user password to properly schedule Steam. If your user account doesn't have a password, you should probably set one. Typically schtasks won't run properly if you don't have a user password on your user account.

You can see the scheduled task(s) by just running "schtasks". It's generally worth testing it first by scheduling it a minute or two ahead to check that it's all working properly. Just log out of Steam and reschedule for the appropriate time once you're satisfied.
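For completeness, a couple of schtasks commands I find handy for inspecting and cleaning up the task afterwards (Windows cmd; the flags below are from the XP-era schtasks, so double-check them on your version):

```bat
REM Show scheduled tasks in verbose form
schtasks /Query /V

REM Remove the Steam task without prompting for confirmation
schtasks /Delete /TN Steam /F
```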

Monday, March 22, 2010

Derivation of the perspective matrix, Part 2

In part 2 of Derivation of a Perspective Matrix we look at the actual Matrix part.

In part 1 we learned how we map points inside our viewing frustum to points on our screen. From here we'd like to see how this becomes a perspective matrix.

To move on I'd like to introduce the concept of the canonical view volume. The canonical view volume is the view volume (the visible area in front of the virtual camera) that is effectively scaled to fit nicely inside a volume where all x and y values are between [-1, 1], and the z values are between [0, 1]. By applying this scaling to points within the camera view volume it becomes trivial to test to see if points lie within the camera view.

The reason we do a mapping from view volume to canonical view volume rather than a straight map to a plane is that we'd ideally like to be able to keep the Z value to be able to test for depth of a point within a scene. We can easily compare a point in canonical view volume space to another point in canonical view volume space to determine if one is potentially part of geometry that obscures other geometry. In modern computer graphics the process happens in the graphics driver, or indeed the graphics hardware, but it does explain the reasoning for the representation.

The canonical view volume for a camera which demonstrates perspective looks like a pyramid with the top chopped off. You can see this shape easily in the original figure depicting the first part of the derivation. The only difference is that the canonical view volume is bounded in the x and y dimensions by [-1, 1] and in the z direction by [0, 1]. We've scaled all the values to meet this requirement. This space is called clipping space.

At the end of Part1 we derived the formula to map eye space X to screen space X and eye space Y to screen space Y. In clipping space we keep the Z value.

To go on we need to be familiar with the concept of homogeneous co-ordinates. Homogeneous just means all of the same type/all-together/all the same. All the same of what? You might well ask. Let's start with the basics and refresh our memory about Euclidean space. Euclidean space is the maths we are familiar with when dealing with the basic math of points, vectors and lines. For computer graphics we normally deal with the 2 dimensional Cartesian plane, or the 3 dimensional "real coordinate space". So Euclidean space co-ordinates are basically the mundane 2D and 3D co-ordinate systems we should all be familiar with by now. In 2D, we generally define the space using linearly independent axes denoted by x and y, and in 3D linearly independent axes x, y and z.

Homogeneous coordinates refer to points in what is known as projective space. The mathematics of projective space is such that points and lines in projective space have corresponding points in Euclidean space. So the two spaces, Euclidean and projective, are connected by a relationship. Thus points in projective space and Euclidean space can be converted from one to the other easily, and each point in one space has its equivalent in the other. The word homogeneous in this case refers to that equivalence.

One particularly nice aspect of working in projective space is that if we are dealing with transformations using matrix mathematics, we can create a 4 dimensional matrix that in practice is equivalent to a 3 dimensional Euclidean space rotation matrix applied to a point, followed by a translation applied to the same point.

The other nicety of projective space for those working in computer graphics is that it's ideally suited to working with projections! Exactly what we're working on deriving here.

Projective space has an additional coordinate. So a 2 dimensional euclidean point is represented by a 3 dimensional projective point, and a 3 dimensional euclidean point has 4 dimensions in projective space. As we live in 3 dimensional meat space, the 4 dimensional part is impossible to visualize. It's probably better not to try. Suffice it to say the extra dimension is just providing an additional reference for identifying a point.

There are an infinite number of projective space points that map to each point in the paired euclidean space, but the most basic and obvious representation of a euclidean point in projective space is the one where the projective (the additional) coordinate is 1. A point (x, y) in 2D Euclidean space becomes (x, y, 1) in projective space, and a point (x, y, z) becomes the point (x, y, z, 1) in projective space. The projective coordinate is typically represented by the letter w. When w = 1, the Euclidean space coordinate is plain to see.

As w is the projective coordinate, the general rule for converting from homogeneous coordinates to euclidean coordinates is to divide the other coordinates by the projective coordinate. (x, y, z, w) in projective space is (x/w, y/w, z/w) in euclidean space. Knowing this it is possible to see that the projective space points (4, 2, 2, 1) and (8, 4, 4, 2) are the same point (4, 2, 2) in Euclidean space.
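The conversion rule above is small enough to sketch as a helper (my own illustration, not part of the original derivation):

```python
def to_euclidean(p):
    # Divide the other coordinates by the projective coordinate w.
    x, y, z, w = p
    return (x / w, y / w, z / w)

print(to_euclidean((4, 2, 2, 1)))  # (4.0, 2.0, 2.0)
print(to_euclidean((8, 4, 4, 2)))  # the same euclidean point: (4.0, 2.0, 2.0)
```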

Let's put down our equations for a conversion to clipping space from 3D eye space. For x and y they're pretty much the same as the formulas for screen space (with D being the distance from the eye to the near plane, as in part 1):

$$X_{clip} = \frac{D\,x}{z} \qquad Y_{clip} = \frac{D\,y}{z}$$
We don't really have anything for the z part yet. We do know that we want to remove the dependence on z on the right hand side of our equations, to create linear equations we can place into a matrix, so we'll multiply the equations through by z to leave simple linear expressions on the right hand side. We arrive at

$$X_{clip}\,z = D\,x \qquad Y_{clip}\,z = D\,y$$
This might not look useful just yet, but bear with me. We've got these two formulas mapping into some odd space that's scaled by a factor of z. Now we'd like to hang on to the z co-ordinate. So we posit a point represented by

$$(X_{clip}\,z,\; Y_{clip}\,z,\; Z_{clip}\,z)$$
So we're trying to find a matrix which maps the point (x, y, z) to (Xclip·z, Yclip·z, Zclip·z). Since each term of (Xclip·z, Yclip·z, Zclip·z) is dependent on z, we can divide everything by z and end up with (Xclip, Yclip, Zclip), which is exactly what we want to find. So what we need is the matrix which maps between the two spaces. We've got the Xclip·z and Yclip·z components and are currently looking for the Zclip·z component. We know the formula will not be in any way dependent on x or y, as the z axis is orthogonal to the plane of projection. Thus the most complicated it will be is a scalar multiplication of z, possibly with a constant. So we're looking at something like

$$Z_{clip}\,z = p\,z + q$$
where p and q are constants.

We have a chance of working out what these constants will be because we know that our camera frustum is bounded by the near plane and the far plane. We'd also like our Zclip result to be scaled between 0 and 1. This is a nice way to do it, and it's how 3D APIs like OpenGL and DirectX work. So we've got some basic facts to work with.

So we say that Zclip= 0 when z = D (near plane) and that Zclip = 1 when z = F (far plane).

We've got Zclip = 0 when z = D, so the left hand side of the equation will be zero. We can solve for q.

$$0 = p\,D + q$$

$$q = -p\,D$$
So let's do the same thing for when z = F. Here Zclip = 1, so the left hand side is 1·F:

$$F = p\,F + q$$
We know what q is from the earlier step

$$F = p\,F - p\,D = p\,(F - D)$$

$$p = \frac{F}{F - D} \qquad q = -p\,D = \frac{-D\,F}{F - D}$$
Now we have values for the constants p and q that actually mean something. Returning to our original equation and substituting yields

$$Z_{clip}\,z = \frac{F}{F - D}\,z - \frac{D\,F}{F - D} = \frac{F\,(z - D)}{F - D}$$
So we're left with the equations

$$X_{clip}\,z = D\,x$$

$$Y_{clip}\,z = D\,y$$

$$Z_{clip}\,z = \frac{F\,(z - D)}{F - D}$$
Now if we move this calculation into projective space using homogeneous coordinates, we can say we're writing a transform to $(X_{clip}\,z,\; Y_{clip}\,z,\; Z_{clip}\,z,\; W_s\,z)$. Normally we write Ws = 1 for the simplest equivalence in projective space. Ok, so if Ws equals 1 then we can see that

$$W_s\,z = z$$
Now we've got four equations we can put into a matrix, yielding

$$\begin{pmatrix} D & 0 & 0 & 0 \\ 0 & D & 0 & 0 \\ 0 & 0 & \frac{F}{F-D} & -\frac{D\,F}{F-D} \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} X_{clip}\,z \\ Y_{clip}\,z \\ Z_{clip}\,z \\ z \end{pmatrix}$$
So this matrix maps a euclidean point represented as a homogeneous coordinate (x, y, z, 1) to yield (Xclip·z, Yclip·z, Zclip·z, z) as a homogeneous coordinate; dividing through by z to create a euclidean coordinate yields (Xclip, Yclip, Zclip, 1), our desired clipping space coordinate.
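As a quick sanity check (my own sketch in plain Python, not from the original derivation), we can apply the matrix numerically and confirm that the near plane lands on Zclip = 0 and the far plane on Zclip = 1 after the homogeneous divide:

```python
def project(point, D, F):
    """Apply the derived perspective matrix to (x, y, z) and do the w divide."""
    x, y, z = point
    # Rows of the matrix derived above (column-vector convention).
    m = [
        [D, 0, 0,           0],
        [0, D, 0,           0],
        [0, 0, F / (F - D), -D * F / (F - D)],
        [0, 0, 1,           0],
    ]
    v = [x, y, z, 1.0]
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = out[3]  # equals z, as expected from the derivation
    return [out[0] / w, out[1] / w, out[2] / w]

D, F = 1.0, 100.0
print(project([0.0, 0.0, D], D, F))  # [0.0, 0.0, 0.0] (near plane maps to Zclip = 0)
print(project([0.0, 0.0, F], D, F))  # [0.0, 0.0, 1.0] (far plane maps to Zclip = 1)
```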

This current set of equations assumes a completely square view screen. If we take into account different possible aspect ratios, we add a term for the aspect ratio. The aspect ratio is defined as the view port width versus the height; that is, the width of the near plane (which we assume is our projection screen) versus the height of the near plane.

$$a = \frac{\text{width}}{\text{height}}$$
We introduce this term into the equation for the Xclip co-ordinate, and follow through to yield the matrix

$$\begin{pmatrix} \frac{D}{a} & 0 & 0 & 0 \\ 0 & D & 0 & 0 \\ 0 & 0 & \frac{F}{F-D} & -\frac{D\,F}{F-D} \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
And here we have one common form of the projection matrix.

Wednesday, February 10, 2010

Your own Custom Carbon Application Event Loop

When one develops a Carbon API application, you normally just set up all your application callbacks and logic in your code before you hit the RunApplicationEventLoop call, which doesn't return until you quit.

Certainly you can run different bits of code based on events and so on, and that works just fine for most cases.

There are certain cases where this may not work for you and you want finer grained control over when different bits and pieces of your code run. You may wish you could get into the RunApplicationEventLoop and do things how you would like. If this sounds like you, then there is a way to do this.

I needed this when porting a title to OSX, in order to give the same behaviour as the Windows build. Rather than work out how I could get it all to happen by syncing between updates and renders, I just implemented my own loop, which gave me quick access to easy, predictable control.

Apple haven't told us exactly what happens in RunApplicationEventLoop, but there is a way to write your own loop that has certainly worked for folks so far. See the code below.



static EventHandlerUPP gQuitEventHandlerUPP;      // -> QuitEventHandler
static EventHandlerUPP gEventLoopEventHandlerUPP; // -> EventLoopEventHandler

static OSStatus QuitEventHandler(EventHandlerCallRef inHandlerCallRef,
                                 EventRef inEvent, void *inUserData)
{
    OSStatus err;

    err = CallNextEventHandler(inHandlerCallRef, inEvent);
    if (err == noErr) {
        *((Boolean *) inUserData) = true;
    }

    return err;
}

static OSStatus EventLoopEventHandler(EventHandlerCallRef inHandlerCallRef,
                                      EventRef inEvent, void* inUserData)
{
    OSStatus err;
    OSStatus junk;
    EventHandlerRef installedHandler;
    EventTargetRef theTarget;
    EventRef theEvent;
    Boolean quitNow;
    static const EventTypeSpec eventSpec = {kEventClassApplication, kEventAppQuit};

    quitNow = false;

    // Install our override on the kEventClassApplication, kEventAppQuit event.
    err = InstallEventHandler(GetApplicationEventTarget(), gQuitEventHandlerUPP,
                              1, &eventSpec, &quitNow, &installedHandler);
    if (err == noErr) {

        // Run our event loop until quitNow is set.
        theTarget = GetEventDispatcherTarget();
        do {
            err = ReceiveNextEvent(0, NULL, kEventDurationNoWait,
                                   true, &theEvent);
            if (err == noErr) {
                SendEventToEventTarget(theEvent, theTarget);
                ReleaseEvent(theEvent);
            }

            // Run application code.
            RunOurApplicationCodeHere();

        } while ( ! quitNow );

        junk = RemoveEventHandler(installedHandler);
    }

    return err;
}


static void RunCustomApplicationEventLoop()
{
    static const EventTypeSpec eventSpec = {'KWIN', 'KWIN'};
    OSStatus err;
    OSStatus junk;
    EventTargetRef appTarget;
    EventHandlerRef installedHandler;
    EventRef dummyEvent;

    dummyEvent = nil;

    err = noErr;
    if (gEventLoopEventHandlerUPP == nil) {
        gEventLoopEventHandlerUPP = NewEventHandlerUPP(EventLoopEventHandler);
    }
    if (gQuitEventHandlerUPP == nil) {
        gQuitEventHandlerUPP = NewEventHandlerUPP(QuitEventHandler);
    }
    if (gEventLoopEventHandlerUPP == nil || gQuitEventHandlerUPP == nil) {
        err = memFullErr;
    }

    if (err == noErr) {
        err = InstallEventHandler(GetApplicationEventTarget(), gEventLoopEventHandlerUPP,
                                  1, &eventSpec, nil, &installedHandler);
        if (err == noErr) {
            err = MacCreateEvent(nil, 'KWIN', 'KWIN', GetCurrentEventTime(),
                                 kEventAttributeNone, &dummyEvent);
            if (err == noErr) {
                err = PostEventToQueue(GetMainEventQueue(), dummyEvent,
                                       kEventPriorityHigh);
            }
            if (err == noErr) {
                RunApplicationEventLoop();
            }

            junk = RemoveEventHandler(installedHandler);
        }
    }

    if (dummyEvent != nil) {
        ReleaseEvent(dummyEvent);
    }
}


What this code does is create a custom event loop that gets entered by the normal RunApplicationEventLoop when the event for it gets fired (very early on). The custom loop runs the normal event pump as expected. A custom quit event handler is installed to toggle the finalisation of the custom event loop. Simple!

Saturday, January 23, 2010

Convert a Windows ico file to a Macintosh icns file

I needed to convert a Windows platform standard ico format icon file to a Macintosh icns format icon file and didn't find any quick relevant details via Google.

Well, here is a quick little recipe for doing so without having to use any additional software, only what is already on your Macintosh. This worked just fine on my Leopard machine, so it should work on at least Leopard and Snow Leopard.

From the command line do the following, substituting in your relevant information.


pookie:tmp admin$ sips -s format tiff icon.ico --out icon.tiff
pookie:tmp admin$ tiff2icns -noLarge icon.tiff icon.icns