Greg Dolley’s Weblog

A Blog about Graphics Programming, Game Programming, Tips and Tricks

Archive for the ‘3D Graphics’ Category

DirectX 9 C++ Graphics Tutorial Part 3 – Drawing a Simple 3D Room

Posted by gregd1024 on June 15, 2008

Welcome to part three of the DirectX native C++ tutorials. In this part we’re going to look at drawing a set of 3D triangles. These triangles will make up a cube and the camera will be positioned inside of it (like you’re standing in a square room). While the last tutorial technically had a 3D triangle – after all, we specified z coordinates for every vertex – the scene was not truly 3D as far as DirectX is concerned. Try changing the z coordinate for any of those vertices and you’ll see what I mean – the triangle still doesn’t look like it’s in perspective.

Requirements

The tutorial sample code was built and tested with Visual Studio Express 2008. However, using the code “as-is” in Visual Studio 2005 should work too.

You’ll need the DirectX 9 SDK for compiling the sample code. Use this link for downloading: click here.

Tutorial Source and Project Files

Download the project files, binaries, and source with this link:

Getting Started

DirectX needs three matrices in order to render a true 3D scene – a world transformation matrix, a camera view matrix, and a projection matrix. If you’re unfamiliar with mathematical matrices, read the next section for a brief introduction. Those already seasoned in this subject can just skip it.

Introduction to Matrices

Matrices are basically mathematical tables used in linear algebra. They look like a grid and contain numbers. An example is shown below:

\[ I = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \]

This is known as a 4-by-4 matrix since it has four columns and four rows. It doesn’t necessarily have to be a square – an m-by-n matrix is quite common. However, the world, view, and projection matrices are all squares (4×4).

Just like regular numbers, matrices can be multiplied together or added to one another. A matrix like the one above is known as an identity matrix because of this special property: some matrix (let’s call it “M”) multiplied by an identity results in the same M matrix.

For this tutorial I’m going to show you how to multiply an m-by-m matrix by an m-by-1 matrix. If you’d like to know the rest of the rules for matrix addition and multiplication, search for “matrix mathematics” on Wikipedia – for our purposes, the only procedure you need is multiplying a square matrix by a one-column matrix. Here’s how it’s done given a 2-by-2 and a 2-by-1 matrix:

\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix} \]

A 3-by-3 matrix multiplied by a 3-by-1 matrix would look like this:

\[ \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} ax + by + cz \\ dx + ey + fz \\ gx + hy + iz \end{pmatrix} \]

For the sake of discussion, let’s call the bigger matrix “A” and the one-column matrix “B.” The values in each row of A are multiplied by the corresponding values in B and the results are summed, producing a matrix with the same dimensions as B.
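In code, this row-times-column procedure is just a pair of nested loops. Here’s a minimal C++ sketch for the 4-by-4 case (my own illustration – the function name and array layout aren’t from the tutorial’s source):

// Multiply a 4-by-4 matrix A by a 4-by-1 column matrix B. The result is
// another 4-by-1 column - the same dimensions as B.
void MultiplyMatrixVector(const float A[4][4], const float B[4], float result[4])
{
   for(int row = 0; row < 4; row++)
   {
      result[row] = 0.0f;

      for(int col = 0; col < 4; col++)
         result[row] += A[row][col]*B[col]; // row of A times B, then summed
   }
}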

So why is this useful? Well, for one, we can represent a lot of common formulas in a table format as opposed to writing them out long-hand. For example, the regular way of expressing the 2D point rotation formula is this:

\[ x' = x\cos\theta - y\sin\theta \]
\[ y' = x\sin\theta + y\cos\theta \]

Now if we put the sine and cosine coefficients into a 2-by-2 matrix and put the x and y coordinates into a 2-by-1 matrix, we can represent the above formula like this instead:

\[ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \]

Try multiplying these two matrices as described above and you’ll find that it evaluates to exactly the same as the long-hand version. In other words, we can rotate any 2D point just by multiplying it with this 2-by-2 matrix! The same holds true for 3D points – we multiply them against a 3-by-3 matrix to get the newly rotated point (however, the 3-by-3 rotation matrix is much more complicated than its 2D counterpart as you’ll see shortly).
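As a quick sanity check, here’s a small self-contained sketch (again my own, not tutorial code) that rotates the point (1, 0) by 90 degrees both ways and shows the two versions agree:

#include <cmath>
#include <cstdio>

int main()
{
   const float theta = 3.14159265f/2.0f; // 90 degrees
   float x = 1.0f, y = 0.0f;

   // long-hand version
   float x_long = x*cos(theta) - y*sin(theta);
   float y_long = x*sin(theta) + y*cos(theta);

   // matrix version: 2-by-2 rotation matrix times the 2-by-1 point
   float R[2][2] = {{cos(theta), -sin(theta)},
                    {sin(theta),  cos(theta)}};
   float x_mat = R[0][0]*x + R[0][1]*y;
   float y_mat = R[1][0]*x + R[1][1]*y;

   // both print (0, 1) to within floating-point error
   printf("long-hand: (%f, %f)  matrix: (%f, %f)\n", x_long, y_long, x_mat, y_mat);
   return 0;
}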

The reason that matrices are used in many 3D graphics applications, and what makes them unique, is the fact that fairly complex mathematical calculations can be simplified with just one or two matrix computations. For example, if you wanted to rotate a line segment around the z-axis in 3D space, you could do it via a brute-force method – apply the same long-hand trigonometric rotation equations to each point:

\[ x' = x\cos\theta - y\sin\theta \qquad y' = x\sin\theta + y\cos\theta \qquad z' = z \]

Note that the z component stays the same. To take the resulting vector and then rotate it around the y-axis, you’d use the same equations but apply them to the x and z components of each point:

\[ x_{rot} = x'\cos\theta - z'\sin\theta \qquad z_{rot} = x'\sin\theta + z'\cos\theta \qquad y_{rot} = y' \]

Note that this time the y component stays the same and that x’ (the result of rotation #1) is used to calculate the new position instead of the original x. To do a third rotation, around the x-axis, the same equations would apply except Zrot and Yrot would be used as inputs (Xrot would stay the same).

Now here’s the drawback to doing it brute-force versus using a matrix: we’re always rotating around one of the three coordinate system axes. What if we want to rotate a point around some arbitrary vector instead of the x, y, or z axis? Using the long-hand equations to do that gets extremely complicated. However, there is one kind of matrix you can construct that, if multiplied by a 3-by-1 matrix representing a point, will result in a new 3-by-1 matrix containing values for the rotated point!

In case you’re wondering, this special type of matrix is called an angle/axis matrix and it looks like this:

\[ \begin{pmatrix}
\cos\theta + V_x^2(1-\cos\theta) & V_xV_y(1-\cos\theta) - V_z\sin\theta & V_xV_z(1-\cos\theta) + V_y\sin\theta \\
V_yV_x(1-\cos\theta) + V_z\sin\theta & \cos\theta + V_y^2(1-\cos\theta) & V_yV_z(1-\cos\theta) - V_x\sin\theta \\
V_zV_x(1-\cos\theta) - V_y\sin\theta & V_zV_y(1-\cos\theta) + V_x\sin\theta & \cos\theta + V_z^2(1-\cos\theta)
\end{pmatrix} \]

It may look scary at first, but you really only need to plug in two things: the components of vector V (the vector you want to rotate the point around) and θ (the angle you want the point rotated by). Note, however, that vector V must be normalized (its length must equal 1) for this matrix to work.

There’s another special property of this matrix – after plugging in your angle/vector combination and calculating the number values for each element, those numbers actually represent vectors themselves: each column corresponds to a vector (so three total). These three vectors have one special property – they’re always perpendicular to each other. That’s right, no matter what angle or V vector you plug in, you’ll always end up with three vectors aligned just like the x, y, z coordinate-system axes. But there’s more! If you were to rotate the coordinate-system axes around vector V by your angle, the new coordinate-system axis vectors would match the vectors contained in the columns of this matrix! Can you see now what this matrix is really doing?

It’s called an angle/axis matrix because you’re actually calculating how the coordinate system axes would be oriented if you rotated them by a given angle around a certain vector. Considering these properties, you could rewrite the above matrix like this:

\[ \begin{pmatrix} X_x & Y_x & Z_x \\ X_y & Y_y & Z_y \\ X_z & Y_z & Z_z \end{pmatrix} \]

The X vector, column one, holds the new x-axis after rotation. Column two, vector Y, holds the new y-axis after rotation. And as you can probably guess, column three, vector Z, holds the new z-axis after rotation.
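If you’d like to see it in code, here’s one way to build this matrix – a sketch of the standard angle/axis formula shown above, assuming (vx, vy, vz) is already normalized as required:

#include <cmath>

// Build the 3-by-3 angle/axis rotation matrix for a rotation of 'theta'
// radians around the normalized vector (vx, vy, vz).
void BuildAngleAxisMatrix(float M[3][3], float theta, float vx, float vy, float vz)
{
   float c = cos(theta), s = sin(theta), t = 1.0f - c;

   M[0][0] = t*vx*vx + c;    M[0][1] = t*vx*vy - s*vz; M[0][2] = t*vx*vz + s*vy;
   M[1][0] = t*vx*vy + s*vz; M[1][1] = t*vy*vy + c;    M[1][2] = t*vy*vz - s*vx;
   M[2][0] = t*vx*vz - s*vy; M[2][1] = t*vy*vz + s*vx; M[2][2] = t*vz*vz + c;
}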

Rotation is not the only type of operation that can be done with matrices – you can also translate a point (move it to a new position). A typical translation matrix looks like this:

\[ \begin{pmatrix} 1 & 0 & 0 & T_x \\ 0 & 1 & 0 & T_y \\ 0 & 0 & 1 & T_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \]

The Tx, Ty, and Tz components in the fourth column determine how much to move along the x, y, and z axes respectively. Since this is a 4-by-4 matrix, you need to change the dimensions of the point matrix into 4-by-1. You do this by just inserting 1 into the fourth row. In other words, the entire operation would look like this:

\[ \begin{pmatrix} 1 & 0 & 0 & T_x \\ 0 & 1 & 0 & T_y \\ 0 & 0 & 1 & T_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} x + T_x \\ y + T_y \\ z + T_z \\ 1 \end{pmatrix} \]

Now you’re about to see the second beauty of matrices and why they’re really used in 3D graphics: if you want to rotate and translate a point, you can actually combine the rotation matrix with the translation matrix! The new matrix becomes:

\[ \begin{pmatrix} R_{11} & R_{12} & R_{13} & T_x \\ R_{21} & R_{22} & R_{23} & T_y \\ R_{31} & R_{32} & R_{33} & T_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \]

(The R elements here are the 3-by-3 rotation matrix from above.)

With just this one matrix you can specify not only how much to rotate a point, but also how to translate that point as well! A matrix which combines rotation and translation is called a “transformation” matrix. As you can probably guess, this type of matrix moves a 3D world around the player.
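In practice you rarely build a transformation matrix by hand – with the D3DX library (introduced below) you create the rotation and translation matrices separately and multiply them together. A minimal sketch with made-up values (note that D3DX stores matrices row-major, so the translation terms end up in the fourth row rather than the fourth column):

D3DXMATRIXA16 Rotation, Translation, Transform;

D3DXMatrixRotationY(&Rotation, D3DX_PI/4);               // rotate 45 degrees around y
D3DXMatrixTranslation(&Translation, 10.0f, 0.0f, 5.0f);  // then move
D3DXMatrixMultiply(&Transform, &Rotation, &Translation); // one combined transformation matrix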

Now that you’ve had an introduction to matrices and how a transformation matrix works, we can move on to the rest of the tutorial.

About the Code

As usual we’re going to build off of the example program from the last tutorial. Details about creating a window, setting up DirectX, and drawing a 2D triangle will not be covered here. If you need to know about those things, then I suggest reading the previous tutorial before moving on. Also, the example program for this segment uses DirectInput to capture keystrokes from the user. I won’t be covering details on DirectInput, but those parts of the code are clearly marked.

Also, unlike some of the previous posts, I won’t include a full listing – the code has simply grown too large. However, I will go over all the pertinent parts so you can learn the most from this tutorial. Anything that was changed or modified will be shown.

Going from 2D to 3D

The code in the last tutorial drew a simple 2D triangle on the screen. Even though we specified a z-coordinate for each vertex of that triangle, it was still not really 3D. Changing the z-coordinate for any of the vertices still didn’t draw the triangle in perspective. Why is this?

The vertex type used was D3DFVF_XYZRHW – the type used for pre-transformed vertices. DirectX expects these vertex types to have already gone through a 3D transformation pipeline. Once through the pipeline, z-coordinates become irrelevant – only screen coordinates (x and y) are used.

In order to render a true 3D scene, we need to add and/or change two things:

  • Feed the following matrices to DirectX so it knows how to transform the vertices from world space to the screen: world transformation matrix, view matrix, and projection matrix.
  • Change the vertex types to D3DFVF_XYZ – these vertices are untransformed and relative to world space.

Adding the Transformation Matrices

Before getting into the code, let’s briefly go over the purpose of each matrix. As the last section mentioned, DirectX needs the following three matrices:

  • The world transformation matrix.
  • The view transformation matrix.
  • The projection transformation matrix.

The world transformation matrix tells DirectX how to rotate, translate, and possibly scale 3D model coordinates into world space. In a typical 3D video game, polygonal models are the objects which tend to be reused in different parts of the world (weapons, bonus items, enemy players, monsters, etc.). Their vertices are defined relative to their own local coordinate system. The world transformation matrix converts these local coordinates into absolute positions in the 3D world (hence the name).

The view matrix (sometimes called the camera matrix, or camera-view matrix) tells DirectX how to transform world coordinates into camera coordinates (basically, where are you and what are you looking at). The world coordinates become relative to the camera axes after this matrix is applied.

If you have some experience in 3D graphics programming, don’t confuse the world matrix with the view matrix. Many tutorials and books about graphics refer to the world matrix and view matrix as one and the same. This is due to a common optimization where you combine the world matrix and the view matrix into one master transformation matrix, resulting in just one matrix update per frame instead of two (see the sketch below).
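For reference, that optimization amounts to one matrix concatenation per frame – something like the following sketch (mine, not the tutorial’s code). One caveat: fixed-function lighting expects a proper view matrix, so this trick is usually reserved for scenes where you do your own lighting.

D3DXMATRIXA16 WorldViewMatrix;

// concatenate once per frame: model space -> camera space in a single matrix
D3DXMatrixMultiply(&WorldViewMatrix, &WorldTransformMatrix, &ViewMatrix);
g_pDirect3D_Device->SetTransform(D3DTS_WORLD, &WorldViewMatrix);
// ...with the D3DTS_VIEW transform left as an identity matrix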

The projection matrix tells DirectX about your 2D viewport into the 3D world. It holds the following information about the screen and camera: field of view, aspect ratio of the screen, how far the camera can see (far clipping plane), and how near the camera can see (near clipping plane).

The process of creating a world, view, and projection matrix isn’t difficult – if you use the D3DX utility library (the Direct3D extension library). Among other things, it contains a few useful functions that return fully populated matrices given certain parameters. For example, provide an angle to the D3DXMatrixRotationY() function and it will return a world transformation matrix that rotates around the y-axis. If you ever need to calculate these matrices yourself, without the library’s help, refer to the DirectX SDK documentation – it contains the layout and formulas for each matrix.

The order in which you feed these matrices to DirectX is irrelevant – internally it always applies them to the scene in a fixed order (world -> view -> projection). Since the order doesn’t matter, we set the projection matrix at initialization time and then forget about it. This matrix would only need to change if the screen’s aspect ratio, field of view, or clipping planes were altered.

We add the following code in order to create and set the projection matrix:

D3DXMATRIXA16 ProjectionMatrix;

D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, PI/4, 1.0f, 1.0f, 500.0f);
g_pDirect3D_Device->SetTransform(D3DTS_PROJECTION, &ProjectionMatrix);

The call to D3DXMatrixPerspectiveFovLH() creates a projection matrix given the following values:

  • Field of view (in radians).
  • Aspect ratio.
  • Z-value of the near clip plane.
  • Z-value of the far clip plane.

These values go in parameters two through five, respectively. The first parameter holds a pointer to the matrix object which will receive the result.

The last line calls SetTransform(). This function is used to feed DirectX all the different types of matrices. The first parameter distinguishes which matrix type you want to set. D3DTS_PROJECTION indicates a projection matrix is contained in the second parameter.

Next we create the world transformation matrix. This is also set at initialization time. Why? Our example program has no 3D polygonal models and therefore doesn’t need to use it. As such, we simply send DirectX an identity matrix so it doesn’t affect any of the math in the 3D pipeline. Here’s what that code looks like:

D3DXMATRIXA16 WorldTransformMatrix;

D3DXMatrixIdentity(&WorldTransformMatrix);
g_pDirect3D_Device->SetTransform(D3DTS_WORLD, &WorldTransformMatrix);

We initialize the world transform matrix to an identity with the D3DXMatrixIdentity() function and then call SetTransform() just as we did with the projection matrix. The first parameter, D3DTS_WORLD, tells DirectX to use this matrix as the world transform. One thing to note: instead of calling D3DXMatrixIdentity(), we could just as easily have set the matrix manually through the object’s constructor:

D3DXMATRIXA16 WorldTransformMatrix(1, 0, 0, 0,
                                   0, 1, 0, 0,
                                   0, 0, 1, 0,
                                   0, 0, 0, 1);

I used the function call instead for clarity, but both methods are equivalent.

Now for the view/camera matrix. This one we must set on every frame since the direction of the camera can change at any time. If you take a look at the code, you’ll notice a function named CalcMatrices() – this is where the matrix is being populated. That code looks like:

void CalcMatrices(void)
{
   D3DXMATRIXA16 ViewMatrix;

   // set the view matrix
   D3DXVECTOR3 EyePoint(g_Camera.Location.x,
                        g_Camera.Location.y,
                        g_Camera.Location.z);
   D3DXVECTOR3 LookAt(g_Camera.Location.x+cos(g_Camera.Rotation),
                      g_Camera.Location.y,
                      g_Camera.Location.z+sin(g_Camera.Rotation));
   D3DXVECTOR3 UpVector(0.0f, 1.0f, 0.0f);

   D3DXMatrixLookAtLH(&ViewMatrix, &EyePoint, &LookAt, &UpVector);
   g_pDirect3D_Device->SetTransform(D3DTS_VIEW, &ViewMatrix);
}

In this function we must give DirectX two points and something called an “up” vector. The first point is the camera’s position – its location in 3D space. The second is any point along the camera’s direct line-of-sight – in other words, any point that, if you were to look through the camera, would be centered in the camera’s field of view. DirectX simply uses these two points to calculate the camera’s view vector – what direction the camera is pointing. Since the look-at point can be any point along the camera’s line-of-sight, I just use the sine and cosine functions to calculate some point directly in front of the camera. The y-coordinate doesn’t change because, in this example program, I’ve tried to keep things simple and not allowed the camera to look up or down (only side-to-side). The “up” vector defines which direction points directly up from the camera’s point of view.

You may be wondering why the up-vector is needed if we already have two points describing the camera’s direction. Here’s why: suppose the camera is looking straight ahead, directly down the z-axis. Now turn the camera upside down (in other words, rotate it 180 degrees around the z-axis). Did the camera’s viewing direction change? Nope. What if the camera were turned sideways (rotated 90 degrees around the z-axis)? Even then, the viewing direction doesn’t change – it’s still looking down the z-axis. So two points give enough information to know where the camera is pointing, but they don’t describe the “roll” of the camera around its own viewing axis. In this tutorial the camera is never allowed to roll, so the up-vector stays fixed at (0, 1, 0).

Once we’ve created the camera point, look-at point, and up-vector, we pass all of them to D3DXMatrixLookAtLH() – a function that calculates a view/camera matrix and puts it into the first parameter, ViewMatrix.

Finally we call SetTransform() to feed DirectX our newly calculated matrix. The first parameter, D3DTS_VIEW, tells DirectX to use this matrix as the view/camera matrix. The second parameter is a pointer to the matrix itself.

The Rendering Loop

Now that all the matrices have been set, we’re ready to tackle the main rendering loop. Just as in the last tutorial, we begin by setting up the vertex format structure:

struct D3DVERTEX {float x, y, z; DWORD color;} vertices[NUM_VERTICES];

This time we don’t need the “rhw” component. We’re feeding DirectX untransformed vertices, so a “w” component doesn’t apply. DirectX just needs the (x, y, z) components and the color of each vertex.

We then fill in the “vertices” array. Here’s an example of the first point:

vertices[0].x = -64.0f*3;
vertices[0].y = -64.0f;
vertices[0].z = 0;
vertices[0].color = FRONT_WALL_COLOR;

<…more vertices here…>
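To give a feel for the pattern (the coordinates below are illustrative – not necessarily the tutorial’s actual data), each wall is a quad made of two triangles, so it consumes six consecutive vertices of the triangle list:

// illustrative sketch only - completing the front wall's first triangle
vertices[1].x = 64.0f*3;  vertices[1].y = -64.0f; vertices[1].z = 0;
vertices[1].color = FRONT_WALL_COLOR;

vertices[2].x = -64.0f*3; vertices[2].y = 64.0f;  vertices[2].z = 0;
vertices[2].color = FRONT_WALL_COLOR;

// ...and a second triangle (vertices[3] through vertices[5]) finishes the wall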

Once we’ve filled in all the vertices, we must feed them to DirectX. The code which feeds DirectX is exactly the same as before (see last tutorial) with one exception – the vertex format is different.

LPDIRECT3DVERTEXBUFFER9 pVertexObject = NULL;
void *pVertexBuffer = NULL;

if(FAILED(g_pDirect3D_Device->CreateVertexBuffer(NUM_VERTICES*sizeof(D3DVERTEX), 0,
          D3DFVF_XYZ|D3DFVF_DIFFUSE, D3DPOOL_DEFAULT, &pVertexObject, NULL)))
   return;

if(FAILED(pVertexObject->Lock(0, NUM_VERTICES*sizeof(D3DVERTEX), &pVertexBuffer, 0)))
   return;

memcpy(pVertexBuffer, vertices, NUM_VERTICES*sizeof(D3DVERTEX));
pVertexObject->Unlock();

The vertex type, D3DFVF_XYZRHW, gets replaced with D3DFVF_XYZ because we’re feeding in (x, y, z) components instead of (x, y, z, 1/w). I’m not going to go into the inner workings of this code because it was already covered in the last tutorial.

Now we’re ready to render the scene. Again, this code looks much like the last tutorial except the vertex format has changed, plus, we’re now calculating the transformation matrices before rendering.

g_pDirect3D_Device->Clear(0, NULL, D3DCLEAR_TARGET|D3DCLEAR_ZBUFFER,
                          D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0); // clear frame

if(SUCCEEDED(g_pDirect3D_Device->BeginScene()))
{
   CalcMatrices();
   g_pDirect3D_Device->SetStreamSource(0, pVertexObject, 0, sizeof(D3DVERTEX));
   g_pDirect3D_Device->SetFVF(D3DFVF_XYZ|D3DFVF_DIFFUSE);
   g_pDirect3D_Device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, NUM_VERTICES/3);
   g_pDirect3D_Device->EndScene();
}

g_pDirect3D_Device->Present(NULL, NULL, NULL, NULL);
pVertexObject->Release();

This time around, when we call SetFVF(), we supply the (x, y, z) vertex format instead of (x, y, z, 1/w) using the D3DFVF_XYZ constant as opposed to D3DFVF_XYZRHW. Also, we call CalcMatrices() before drawing any primitives so DirectX knows which transformations to apply to the scene. The rest of the code behaves exactly the same as the last tutorial, so I’m not going to cover it again.

Handling Keyboard Input

The tutorial program uses DirectX/DirectInput to capture keyboard actions, but I’m not going to cover how DirectInput works here. Instead, I’m going to show how the program reacts to different key presses.

The function HandleKeys() is called on every frame and is responsible for updating the global camera position tracking variables depending on the keyboard state.

void HandleKeys(void)
{
   float RotationStep = PI/175.0f;
   float WalkStep = 3.0f;

   //--------------------------------------------
   // adjust the camera position and orientation
   //--------------------------------------------
   if(dx_keyboard_state[DIK_UP]&0x80) // moving forward
   {
      g_Camera.Location.x += cos(g_Camera.Rotation)*WalkStep;
      g_Camera.Location.z += sin(g_Camera.Rotation)*WalkStep;
   }

   if(dx_keyboard_state[DIK_DOWN]&0x80) // moving backward
   {
      g_Camera.Location.x -= cos(g_Camera.Rotation)*WalkStep;
      g_Camera.Location.z -= sin(g_Camera.Rotation)*WalkStep;
   }

   if(dx_keyboard_state[DIK_LEFT]&0x80) // look left
   {
      g_Camera.Rotation += RotationStep;

      if(g_Camera.Rotation > PI*2)
         g_Camera.Rotation = g_Camera.Rotation-PI*2;
   }

   if(dx_keyboard_state[DIK_RIGHT]&0x80) // look right
   {
      g_Camera.Rotation -= RotationStep;

      if(g_Camera.Rotation < 0)
         g_Camera.Rotation = PI*2+g_Camera.Rotation;
   }

   if(dx_keyboard_state[DIK_W]&0x80) // strafe left
   {
      float SideStepAngle = g_Camera.Rotation+(PI/2.0f);

      if(SideStepAngle > PI*2) // handle wrap-around
         SideStepAngle = SideStepAngle-PI*2;

      g_Camera.Location.x += cos(SideStepAngle)*WalkStep;
      g_Camera.Location.z += sin(SideStepAngle)*WalkStep;
   }

   if(dx_keyboard_state[DIK_E]&0x80) // strafe right
   {
      float SideStepAngle = g_Camera.Rotation-(PI/2.0f);

      if(SideStepAngle < 0) // handle wrap-around
         SideStepAngle = PI*2+SideStepAngle;

      g_Camera.Location.x += cos(SideStepAngle)*WalkStep;
      g_Camera.Location.z += sin(SideStepAngle)*WalkStep;
   }
}

Walking forward and backward is a simple matter of updating the camera’s (x, z) position. We don’t update the y-coordinate because there’s no way to “jump” or float in the air for this example. If we were using a polar coordinate system (where point positions are based on angle and ray length), moving the camera forward or backward would be easy – just increase or decrease the ray length. But since we’re in the rectangular coordinate system (where point positions are determined by x, y, and z), we must convert this increase or decrease of ray length into the (x, z) equivalent. We do this with the sine/cosine functions and then add the result to the camera’s last position in order to get the new position (or subtract from the camera’s last position if we’re moving backwards). I’m not going to get into the basics of simple trigonometry, but if you want a detailed explanation of how these trajectory formulas work, email me through the Contact page.

Strafing, or side-stepping, is done just like moving forward or backward except the angle used in the calculation is the camera’s real angle plus or minus 90 degrees. If you’re strafing left the angle is plus 90; if you’re strafing right, the angle is minus 90.

For looking left or right we just add or subtract from the camera’s current angle to get the new angle. However, we must check to make sure the angle hasn’t “overflowed” or gone below zero. After all, a circle has a maximum of 360 degrees – so rotating by 370 degrees is really the same as rotating by just 10 degrees. Same goes for the other side – rotating by negative 10 gets you to the same point as +350.

All this updating of the camera’s global position/orientation object feeds the CalcMatrices() function, which builds a new view matrix on every frame. In other words, DirectX always has the most current camera position and renders the room from whatever viewpoint the keyboard has steered.

Screenshot

Here’s what the output looks like:

[Screenshot: the rendered 3D room]

Conclusion

Wow, this post ended up being _way_ longer than expected! Anyway, if you have any questions about any of the material covered, please post a comment or send me an email through my Contact page.

Thanks for reading! 😉

-Greg Dolley


Posted in 3D Graphics, 3D Mathematics, C++ Graphics, DirectX | 40 Comments »

Managed DirectX C# Graphics Tutorial 2: Drawing a Triangle

Posted by gregd1024 on February 29, 2008

Now we’re going to look at drawing a triangle with Managed DirectX (MDX 1.1) using C#. The material presented here will strive to be the equivalent of the last tutorial which was in C++.

Requirements

The tutorial sample code was built and tested on Visual Studio 2008 Express. However, the code should work with 2005. You’ll also need the DirectX SDK, August 2007 release or earlier. I use June 2007 – you can download it with this link.

Tutorial Source and Project Files

To get the finished result of what this tutorial teaches – project files, binaries, and source – use this link:

Getting Started

Everything involved with setting up the project, form, and DirectX will not be covered in this tutorial since it was already explained last time. See the “Managed DirectX C# Graphics Tutorial 1” if you want details on that.

In this post we’re going to take the code from the last tutorial and build off of it. We actually only need to change one function – OnPaint(). If you have the last tutorial’s project open, you can delete the contents of OnPaint. We’re going to replace those lines with stuff from the next section.

Define Vertices and Render

Unlike using DirectX in C++, there’s no need to define your own FVF vertex structure. Managed DirectX has pre-defined vertex types. The one we need in order to draw our triangle is represented by the “TransformedColored” struct, which is nested inside the CustomVertex class. The CustomVertex class contains many structs representing common fixed-format vertex types. Basically, the vertex types you’d commonly need to create using FVF codes in C++ are already defined by Managed DirectX.

There’s also no need to create a vertex buffer object like you would in native DirectX. You can use the original array of vertices and pass them directly into drawing routines. However, MDX still has the ability to work with vertex buffer objects if you need them.

These two major differences result in very simple drawing code. Drop the following code snippet into OnPaint() and you’re good to go:

CustomVertex.TransformedColored[] vertexes = new CustomVertex.TransformedColored[3];

vertexes[0].Position = new Vector4(50, 50, 0, 1.0f);
vertexes[0].Color = System.Drawing.Color.FromArgb(0, 255, 0).ToArgb();
vertexes[1].Position = new Vector4(250, 50, 0, 1.0f);
vertexes[1].Color = System.Drawing.Color.FromArgb(0, 0, 255).ToArgb();
vertexes[2].Position = new Vector4(50, 250, 0, 1.0f);
vertexes[2].Color = System.Drawing.Color.FromArgb(255, 0, 0).ToArgb();

m_device.Clear(ClearFlags.Target, System.Drawing.Color.FromArgb(0, 0, 0).ToArgb(), 1.0f, 0);
m_device.BeginScene();
m_device.VertexFormat = CustomVertex.TransformedColored.Format;
m_device.DrawUserPrimitives(PrimitiveType.TriangleList, 1, vertexes);
m_device.EndScene();
m_device.Present();

The first line creates the vertex array while the next six lines fill in the information at each vertex (position and color). Then Clear() is called to erase the background. BeginScene() starts the 3D drawing code just like in C++. We tell DirectX about the vertex type being used by setting VertexFormat. This must match the vertex array element type. DrawUserPrimitives() takes the vertex array and renders the triangle. Finally we end the scene with EndScene() and show everything on the screen with Present(). What could be easier?

Full Listing

Since using .NET makes the code of this application so small, I’m going to list all of it here for your convenience. 🙂

using System;
using System.Windows.Forms;
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

namespace MDX_Tutorial2
{
   class MainClass:Form
   {
      private Device m_device = null;

      MainClass()
      {
         this.Text = "Managed DirectX Tutorial 2";
      }

      void InitGraphics()
      {
         PresentParameters present_params = new PresentParameters();

         present_params.Windowed = true;
         present_params.SwapEffect = SwapEffect.Discard;

         m_device = new Device(0, DeviceType.Hardware, this,
                               CreateFlags.SoftwareVertexProcessing, present_params);
      }

      protected override void OnPaint(PaintEventArgs e)
      {
         CustomVertex.TransformedColored[] vertexes = new CustomVertex.TransformedColored[3];

         vertexes[0].Position = new Vector4(50, 50, 0, 1.0f);
         vertexes[0].Color = System.Drawing.Color.FromArgb(0, 255, 0).ToArgb();
         vertexes[1].Position = new Vector4(250, 50, 0, 1.0f);
         vertexes[1].Color = System.Drawing.Color.FromArgb(0, 0, 255).ToArgb();
         vertexes[2].Position = new Vector4(50, 250, 0, 1.0f);
         vertexes[2].Color = System.Drawing.Color.FromArgb(255, 0, 0).ToArgb();

         m_device.Clear(ClearFlags.Target, System.Drawing.Color.FromArgb(0, 0, 0).ToArgb(), 1.0f, 0);
         m_device.BeginScene();
         m_device.VertexFormat = CustomVertex.TransformedColored.Format;
         m_device.DrawUserPrimitives(PrimitiveType.TriangleList, 1, vertexes);
         m_device.EndScene();
         m_device.Present();
      }

      static void Main()
      {
         MainClass MainForm = new MainClass();
         MainForm.InitGraphics();
         Application.Run(MainForm);
      }
   }
}

Program Output

Now for the fun part. What does running the program look like? Here’s a screenshot:

[Screenshot: the program’s output]

Look familiar? If it does, good. We set out to create the C# .NET equivalent of the C++ tutorial and did exactly that. If it looked different from last time, then we’d have a problem.

Conclusion

I hope you’ve gained insight into the differences and similarities of native DirectX versus Managed DirectX. If you have any questions or comments, please don’t hesitate to email me or leave a comment on this post.

Thanks for reading! 😉

-Greg Dolley

*Get new posts automatically! Grab the RSS feed here. Want email updates instead? Click here.

Posted in .NET Graphics, 3D Graphics, DirectX | 8 Comments »

DirectX 9 C++ Graphics Tutorial 2: Drawing a Triangle

Posted by gregd1024 on February 28, 2008

In this tutorial we’re going to look at how to draw a stationary triangle using DirectX 9 and C++. We’ll be building off of the concepts taught in the first tutorial (DirectX 9 C++ Graphics Tutorial 1: Getting Started). Most of the code will be the same since it involves setting up a form and initializing DirectX.

Requirements

The tutorial sample code was built and tested with Visual Studio Express 2008. However, using the code “as-is” in Visual Studio 2005 should work too.

You’ll need the DirectX 9 SDK for compiling the sample code. Use this link for downloading: click here.

Tutorial Source and Project Files

Download the project files, binaries, and source with this link:

Getting Started

I’m not going to cover the Win32 initialization code or how to initialize DirectX in this tutorial. For that, see the previous tutorial: DirectX 9 C++ Graphics Tutorial 1: Getting Started.

The only difference between this tutorial’s code versus the last version is how we handle WM_PAINT messages. Previously, we simply cleared the color buffer and displayed it on the screen. This time we’re going to define three triangle vertices, create a vertex buffer, send that vertex buffer to DirectX, and finally tell DirectX to render the triangle.

Defining the Vertices

In DirectX there is no pre-defined vertex type or object. You have to make your own struct (or class) and then tell DirectX about its format via FVF (Flexible Vertex Format) codes. FVF codes are a set of constants that describe the contents and size of a vertex structure. For example, the constant D3DFVF_XYZ describes your structure as having three float variables representing an untransformed vertex; the constant D3DFVF_DIFFUSE describes a single DWORD value representing a diffuse color component in ARGB order. You can (and often will) combine several FVF codes together. For instance, “D3DFVF_XYZ|D3DFVF_DIFFUSE” means your structure has three float variables followed by one DWORD variable. The correlation between constants and vertex layout is clearly defined in the DirectX SDK documentation.

In our sample program we used the following structure:

struct D3DVERTEX
{
   float x, y, z, rhw;
   DWORD color;
};

The (x, y, z, rhw) combination describes the transformed position of a vertex. The “color” member describes its diffuse color in ARGB (Alpha, Red, Green, Blue) format. To describe this structure we use the following FVF code:

  • “D3DFVF_XYZRHW|D3DFVF_DIFFUSE” – transformed position with color info.

Let’s look at the code for handling the WM_PAINT event where it actually creates the vertices:

case WM_PAINT:

  // setup vertex information
  struct D3DVERTEX {float x, y, z, rhw; DWORD color;} vertices[3];

  vertices[0].x = 50;
  vertices[0].y = 50;
  vertices[0].z = 0;
  vertices[0].rhw = 1.0f;
  vertices[0].color = 0x00ff00;

  vertices[1].x = 250;
  vertices[1].y = 50;
  vertices[1].z = 0;
  vertices[1].rhw = 1.0f;
  vertices[1].color = 0x0000ff;

  vertices[2].x = 50;
  vertices[2].y = 250;
  vertices[2].z = 0;
  vertices[2].rhw = 1.0f;
  vertices[2].color = 0xff0000;

In this code we really just define a vertex array (“vertices[3]”) and fill in the values for a triangle.

Creating the Vertex Buffer

Next, we tell DirectX about our vertex data by creating a vertex buffer object. Here’s the code to do it (this code comes directly after the code in the last section):

LPDIRECT3DVERTEXBUFFER9 pVertexObject = NULL;
void *pVertexBuffer = NULL;

if(FAILED(g_pDirect3D_Device->CreateVertexBuffer(3*sizeof(D3DVERTEX), 0,
          D3DFVF_XYZRHW|D3DFVF_DIFFUSE, D3DPOOL_DEFAULT, &pVertexObject, NULL)))
   return(0);

if(FAILED(pVertexObject->Lock(0, 3*sizeof(D3DVERTEX), &pVertexBuffer, 0)))
   return(0);

memcpy(pVertexBuffer, vertices, 3*sizeof(D3DVERTEX));
pVertexObject->Unlock();

The first two lines just declare the pointers we’re going to use. The next line calls a DirectX function called CreateVertexBuffer(). This function allocates a vertex buffer object which we’ll use for all buffer operations.

CreateVertexBuffer() takes six parameters. The first parameter tells DirectX the required size of the vertex buffer (in bytes). The second parameter specifies how the vertex buffer will be used – “0” being the default. The third parameter tells DirectX about the memory layout of each vertex (the FVF format). The fourth parameter specifies which memory pool to allocate from – D3DPOOL_DEFAULT lets DirectX pick the most suitable one. The fifth parameter is the address of a pointer to be filled with the vertex buffer object location. Lastly, the sixth parameter specifies a shared handle (don’t worry about this).

Now we use our newly created vertex buffer object and call its Lock() method. This call gives us a memory buffer (pointed to by pVertexBuffer) that we must copy our vertex data into. I know this seems strange – we already created our own vertex array, filled it with data, now we have to copy it somewhere else? Don’t ask why, this is just how DirectX works. The next line, with the call to memcpy(), does this copying process.

Finally, when we’re done with the copy, we have to tell DirectX that the data is ready to go. This is done via the Unlock() method of the vertex buffer object.

Rendering the Vertex Buffer

Now we’re ready to actually draw the scene! Check out the following code (again, this code comes directly after the last line of the previous section):

// clear background to black
g_pDirect3D_Device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

// render the actual scene
if(SUCCEEDED(g_pDirect3D_Device->BeginScene()))
{
   g_pDirect3D_Device->SetStreamSource(0, pVertexObject, 0, sizeof(D3DVERTEX));
   g_pDirect3D_Device->SetFVF(D3DFVF_XYZRHW|D3DFVF_DIFFUSE);
   g_pDirect3D_Device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);
   g_pDirect3D_Device->EndScene();
}

g_pDirect3D_Device->Present(NULL, NULL, NULL, NULL);
pVertexObject->Release();

ValidateRect(hwnd, NULL);

First, we clear the background to black by calling Clear(). You might recognize this from the previous tutorial (it’s the exact same call, but the color is different).

Next we call BeginScene() – every 3D frame in DirectX begins with this call.

The next two lines set the stream source and vertex format. SetStreamSource() tells DirectX to take the vertex information from our vertex buffer object. The first parameter specifies the stream number and the second is a pointer to the vertex buffer object itself. The third parameter says there is no offset from the beginning of the stream to the vertex data (if there was, this value would be the number of bytes in between). Finally, the fourth parameter is the size, in bytes, of each vertex. SetFVF() sets the FVF code that describes our vertex format (see “D3DFVF” in the DirectX SDK documentation for all the possible code combinations and their corresponding formats).

Now we’re at DrawPrimitive(). This tells DirectX to actually draw something. The first parameter specifies what to draw – in our case, a list of triangles (actually, just one triangle since our list only contains one element). The second parameter says start from vertex zero (in some cases you may want to start from somewhere other than the beginning). And the last parameter tells DirectX how many primitives to draw (for us, just one triangle).

Once all the drawing code is executed, we must call EndScene(). All 3D scenes in DirectX end with this call.

Finally we call Present() to display everything on the screen and call Release() on our vertex buffer object (since it’s no longer needed).

ValidateRect() is from the first tutorial. It tells Windows that we’ve handled all of the drawing in this window and Win32 doesn’t need to do any further processing.

Program Output

All this code generates the following output:

[Screenshot: the program’s output – a triangle with interpolated vertex colors]

Notice how the colors of each vertex are “smoothed” over the face of the triangle. This is the default behavior of DirectX (and even OpenGL). Since colors are only defined for the vertices and not the face of the polygon, DirectX interpolates the color in between. This interpolation gives you the gradient effect.
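Conceptually, the interpolation is just a per-channel weighted average between vertex colors. A tiny sketch of the idea (my own illustration, not DirectX code):

// Blend one color channel between two vertices; t runs from 0.0 at the
// first vertex to 1.0 at the second.
unsigned char LerpChannel(unsigned char a, unsigned char b, float t)
{
   return (unsigned char)(a + (b - a)*t);
}

// e.g. halfway between full red (255) and no red (0):
// LerpChannel(255, 0, 0.5f) == 127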

Conclusion

Now that’s definitely better than the last tutorial of simply drawing a solid color on a form. However, the triangle is technically still 2D. If it were a 3D triangle we could adjust the z coordinate at each vertex, giving them different depths, and watch the triangle get drawn in perspective. Currently that’s not possible with this code. In order to turn it into a “real” 3D triangle, we need a camera position, a transformation matrix, and a couple other components. We’ll be discussing all this in the next DirectX C++ tutorial.

However, for the very next post, I’ll go over the Managed DirectX version of this same stuff using C#.

-Greg Dolley

*Get new posts automatically! Grab the RSS feed here. Want email updates instead? Click here.

Posted in 3D Graphics, C++ Graphics, DirectX | 35 Comments »

OpenGL Texture Filter Parameters Explained

Posted by gregd1024 on January 17, 2008

As promised in a previous post entitled “How to Turn Off Bilinear Filtering in OpenGL,” I’m going to explain the various texture-filtering combinations behind these two very common OpenGL calls:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, <texture shrinkage filter>);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, <texture expansion filter>);

The value of parameter number three depends on whether you are setting a GL_TEXTURE_MIN_FILTER variable or a GL_TEXTURE_MAG_FILTER variable. Here are the possible combinations:

  1. When the second parameter is GL_TEXTURE_MIN_FILTER, parameter three can be:
    1. GL_NEAREST_MIPMAP_NEAREST
    2. GL_LINEAR_MIPMAP_NEAREST
    3. GL_NEAREST_MIPMAP_LINEAR
    4. GL_LINEAR_MIPMAP_LINEAR
    5. GL_NEAREST
    6. GL_LINEAR
  2. When the second parameter is GL_TEXTURE_MAG_FILTER, parameter three can be:
    1. GL_LINEAR
    2. GL_NEAREST

The filter value set for GL_TEXTURE_MIN_FILTER is used whenever a surface is rendered with smaller dimensions than its corresponding texture bitmap (far-away objects). The filter value for GL_TEXTURE_MAG_FILTER, on the other hand, is used in the exact opposite case – when a surface is bigger than the texture being applied (near objects).

There are more options for the min filter because it can potentially involve mipmapping. It wouldn’t make sense to apply mipmapping to the mag filter, since close-up objects don’t need it in the first place. Here’s a list of all the possible combinations and how they impact what is rendered (the first constant in the left-most column is the near-object filter [mag]; the second is the far-object filter [min]):

| Filter Combination (MAG_FILTER / MIN_FILTER) | Bilinear Filtering (Near) | Bilinear Filtering (Far) | Mipmapping |
|---|---|---|---|
| GL_NEAREST / GL_NEAREST_MIPMAP_NEAREST | Off | Off | Standard |
| GL_NEAREST / GL_LINEAR_MIPMAP_NEAREST | Off | On | Standard |
| GL_NEAREST / GL_NEAREST_MIPMAP_LINEAR | Off | Off | Use trilinear filtering |
| GL_NEAREST / GL_LINEAR_MIPMAP_LINEAR | Off | On | Use trilinear filtering |
| GL_NEAREST / GL_NEAREST | Off | Off | None |
| GL_NEAREST / GL_LINEAR | Off | On | None |
| GL_LINEAR / GL_NEAREST_MIPMAP_NEAREST | On | Off | Standard |
| GL_LINEAR / GL_LINEAR_MIPMAP_NEAREST | On | On | Standard |
| GL_LINEAR / GL_NEAREST_MIPMAP_LINEAR | On | Off | Use trilinear filtering |
| GL_LINEAR / GL_LINEAR_MIPMAP_LINEAR | On | On | Use trilinear filtering |
| GL_LINEAR / GL_NEAREST | On | Off | None |
| GL_LINEAR / GL_LINEAR | On | On | None |

Not all of these combinations make sense to use. For example, there’s no point in applying GL_NEAREST with GL_LINEAR_MIPMAP_LINEAR because you’ll still see a sharp “break” between the trilinear filtered portion of a surface and the non-filtered portion. You should use whichever combination makes sense visually without compromising performance.
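For example, the typical “best quality” combination – bilinear up close, trilinear in the distance – would be set up like this (a sketch assuming the texture is already created and bound; gluBuild2DMipmaps() is one common way to generate the mipmap chain):

glBindTexture(GL_TEXTURE_2D, texture_id);

// build the mipmap chain from the base image
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

// bilinear for near surfaces, trilinear for far surfaces
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);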

-Greg Dolley

*Get new posts automatically! Subscribe via RSS here . Want email updates instead? Click here.

Posted in 3D Graphics, OpenGL | 6 Comments »

What’s the Difference Between Bilinear and Trilinear Filtering?

Posted by gregd1024 on January 16, 2008

I’m not sure how many gamers who aren’t 3D programmers know what bilinear filtering means; however, I see it in the options of many games. Its effect can be summed up as making textures look blurry up close while eliminating the “blocky pixel” effect that was so common back in the day of pure software-rendered 3D graphics engines. See one of my previous posts entitled “How to Turn Off Bilinear Filtering in OpenGL” for a few screenshots.

In this post I’ll be explaining what “trilinear filtering” means and how it differs from bilinear. We see an option for trilinear filtering in virtually every game on the market today (although some call it by a different name), and depending on your 3D hardware, it may even be turned on by default. However, most gamers don’t know what trilinear filtering really is (unless they’re graphics programmers) or how it affects the visual characteristics of a game.

In a nutshell, just like bilinear filtering attempts to smooth out blocky pixels via interpolating between one texel and its surrounding four neighbors, trilinear filtering attempts to smooth out mipmapping gradients that are most commonly noticed when the camera is at a narrow angle relative to a wall (or any surface where the texture is very oblique to the point of view). Now, I’d really like to provide a screenshot of what this gradient looks like, but with the high resolution used in today’s games the effect becomes hard to see in a still screenshot. However, the effect is very noticeable when the camera is moving forward or backward.

The effect manifests itself as a sharp break between blurriness levels as a polygon recedes into the distance. At first, the polygon’s texture looks normal (the part closest to the camera). Then as you trace along the polygon’s surface, all of a sudden the texture loses half its resolution. If you continue tracing, you notice another break where again the resolution is cut in half. In most games this pattern is noticeable about four or five times before the polygon becomes too small to notice.

Trilinear filtering attempts to eliminate this sharp break by dynamically interpolating between different mipmap resolutions at every pixel. For example, instead of choosing between a mipmap resolution 1:1 or 1:2, it dynamically calculates a smooth gradient of mipmap levels between 1:1 and 1:2.
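In pseudocode terms, the idea looks something like this (my own sketch – the Color type and SampleBilinear() helper are hypothetical, and real hardware does this per pixel):

// 'lambda' is the computed mipmap level as a real number, e.g. 1.3
Color SampleTrilinear(float u, float v, float lambda)
{
   int   level = (int)lambda;    // the two nearest mipmap levels...
   float frac  = lambda - level; // ...and how far between them we are

   Color nearLevel = SampleBilinear(u, v, level);     // e.g. the 1:2 level
   Color farLevel  = SampleBilinear(u, v, level + 1); // e.g. the 1:4 level

   return nearLevel*(1.0f - frac) + farLevel*frac;    // blend the two levels
}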

In the end trilinear filtering does a very good job eliminating abrupt changes in texture resolution. In some cases it is hard to tell mipmapping is even being used.

Sometime later in the week or next week, I’m going to cover something known as anisotropic filtering – which is meant to be a step better than trilinear filtering.

If you want to be updated on new posts automatically you can subscribe via RSS here, or if you prefer email updates click here.

-Greg Dolley

Posted in 3D Graphics, DirectX, OpenGL | Leave a Comment »

How to Turn Off Bilinear Filtering in OpenGL

Posted by gregd1024 on January 14, 2008

I don’t know which video game popularized bilinear filtering (or trilinear filtering for that matter) for close-up objects but I personally hate the effect. The only time I can accept the look of filtering textures in this manner is if the texture’s resolution is massively high (so I don’t see the blur). Note: for this post when I say “bilinear filtering” I’m also referring to “trilinear filtering” – both have the same blurring effects.

Let me show you what I mean and then I’ll explain how to turn it off. The effects of bilinear filtering are best seen on textures that contain straight lines. Therefore, I put a simple checkerboard-like texture on a wall and moved the camera close to it:

[Screenshot: close-up of the wall with no filtering]

Notice how the edges of the alternating dark-green / light-green squares are clear and crisp. Now, let’s look at the exact same thing but with bilinear filtering turned on:

[Screenshot: close-up of the wall with bilinear filtering]

The edges are not clear and crisp anymore. The whole texture looks a bit blurry. If we move closer to the wall, it gets even worse:

[Screenshot: extreme close-up of the wall with bilinear filtering]

The screenshot above looks much worse in its original size; click on the picture to enlarge. Now if we didn’t use bilinear filtering it would look like this:

[Screenshot: extreme close-up of the wall with no filtering]

Hence the reason why I don’t like bilinear filtering. It just makes things too blurry. However, I have yet to see any game programming tutorial or OpenGL tutorial explain how to turn it off.

Actually it’s very simple to turn off. After loading a texture you need to make the following two function calls:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Most code examples already have these calls, except the values for parameter three are different. The value of that parameter depends on parameter two. Here are the possible combinations:

  1. When the second parameter is GL_TEXTURE_MIN_FILTER, parameter three can be:
    1. GL_NEAREST_MIPMAP_NEAREST
    2. GL_LINEAR_MIPMAP_NEAREST
    3. GL_NEAREST_MIPMAP_LINEAR
    4. GL_LINEAR_MIPMAP_LINEAR
    5. GL_NEAREST
    6. GL_LINEAR
  2. When the second parameter is GL_TEXTURE_MAG_FILTER, parameter three can be:
    1. GL_LINEAR
    2. GL_NEAREST

I don’t want to stray off topic and explain each parameter combination, but don’t worry, I’ll write another post on that topic soon. For now, just know that most examples use GL_LINEAR_MIPMAP_NEAREST for min filter and GL_LINEAR for mag filter. This combination sets up typical bilinear filtering used in most games. Trilinear filtering is almost the same except GL_LINEAR_MIPMAP_LINEAR is used instead of GL_LINEAR_MIPMAP_NEAREST.

When you set both parameters to GL_NEAREST it tells OpenGL to not interpolate any of the texel color values. It takes the calculated (u, v) coordinates, finds the texel nearest to that point in the source bitmap, and uses that color. So essentially it is just like old software rendering engines before everybody started using bilinear filtering.

-Greg Dolley

*Get new posts automatically! Subscribe via RSS here . Want email updates instead? Click here.

Posted in 3D Graphics, OpenGL | 14 Comments »

There’s Nothing Wrong with OpenGL’s Specular Lighting!

Posted by gregd1024 on December 21, 2007

Most beginners to OpenGL make a common mistake implementing specular lighting when the camera can move around freely in a first-person perspective. What happens is this: a light beam illuminating some surface will move across that surface in the exact opposite direction of the camera’s viewing vector. I’ve written this OpenGL app to illustrate:

[Screenshot]

Figure 1: green light beam shines across middle of left wall. Camera is looking straight ahead. Click for larger image.

[Screenshot]

Figure 2: now, when the camera is tilted up, the green light beam shines across the bottom of the left wall. Top beam on ceiling gets brighter while floor beam disappears. Click for larger image.

[Screenshot]

Figure 3: now the camera is tilted down and the green light beam shines across the top of left wall. Ceiling beam disappears and floor beam reappears brighter than in Figure 1. Click for larger image.

Keep in mind, the camera’s x, y, and z coordinates did not change in any of these shots, only the viewing angles changed. Therefore it’s quite obvious that this is not how things should work. The light beam should not move regardless of where I’m looking in the 3D world.

So why does this happen? Well, think about it – am I really “looking” around in the 3D world? Is the camera’s position changing when I go forward, backward, left, or right as far as OpenGL is concerned? If you answered yes, think again. Remember, OpenGL has no concept of a “camera” – we just track the camera’s location internally in our code and then transform every polygon in the opposite direction. If the camera moves forward, we transform all polygons backward. If the camera looks 20 degrees to the right, we transform all polygons 20 degrees to the left. In other words, the camera really stays at (0, 0, 0) while the polygons are transformed around it.

Now here’s the kicker – OpenGL transforms the polygons and light positions around the camera, but by default it does not rotate the specular reflection angles to match the current view transform (i.e., the light vectors are not transformed into camera/eye coordinates like everything else). When this happens the light vectors (not light positions) are just like the camera – they don’t move with the rest of the world; the world moves around them.

It’s like having a flashlight floating in the middle of a room pointing in one direction while the room rotates around it; naturally, the light’s beam would slide across the wall surfaces depending on how they were moving. On the other hand, if the flashlight rotates with the room then the beam of light would always point at the same spot. It’s the first scenario (where the light can’t rotate) that makes this OpenGL effect occur.
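To make that concrete, here’s the usual shape of a fixed-function OpenGL “camera” (a sketch assuming simple yaw/pitch camera variables of my own naming): at the start of every frame you apply the inverse of the camera’s motion to the modelview matrix, so the world moves around a camera that never leaves the origin.

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// apply the camera's motion in reverse - the world moves, not the camera
glRotatef(-camera_pitch, 1.0f, 0.0f, 0.0f);
glRotatef(-camera_yaw,   0.0f, 1.0f, 0.0f);
glTranslatef(-camera_x, -camera_y, -camera_z);

// ...then draw the scene's polygons as usual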

Why specular angles are not transformed by default, I don’t know, but if you do, please leave a comment on this post. Although I suspect it has something to do with the fact that after OpenGL’s inception in 1992 and up until the late 90’s, all of the OpenGL programs I saw were demos of object modeling where the camera’s viewpoint never changed. In this case, if you transformed the light vectors to match the object’s orientation you’re going to get the wrong effect – you want those vectors to stay static looking straight ahead (directly down the -z axis).

So how do you tell OpenGL to transform the specular angles along with everything else? Simple – you add another lighting-model setting by calling one of the glLightModel functions. I prefer glLightModelf() simply because floating point is OpenGL’s default data type. Set the first parameter to GL_LIGHT_MODEL_LOCAL_VIEWER and the second parameter to “1” (or anything non-zero), like this:

glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);

I place this call whenever I need to initialize or reinitialize lighting in the engine. For the screenshots in this article I used the following lighting models:

GLfloat dim_light[] = {0.0f, 0.0f, 0.0f, 1.0f};

glLightModelfv(GL_LIGHT_MODEL_AMBIENT, dim_light);
glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);
glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR);

The following screenshots show how the rendering looks with the new lighting model added.

[Screenshot]

Figure 4: Now when looking upward notice how the green light beam on the left wall stays centered. Click for larger image.

[Screenshot]

Figure 5: just like Figure 4 but looking downward. The green light beam is still centered on the left wall. Click for larger image.

That’s it! One small change does it all. 🙂

However, there’s one big disadvantage of this method that you may have noticed from the previous two screenshots. The polygon tessellation becomes much more noticeable depending on what angle a light hits a surface – even if you’re not that close to a polygon. And believe me, these last two screens hide the problem pretty well. Solution? Write a shader so that we can have complete control of the rendering pipeline. That’s one of the things I’ve been wanting to improve on this engine but haven’t got around to it yet. I took a detour doing the Quake 2 Port which took a lot of time and have recently been doing intense research on graphics programming for the Windows Mobile 6 OS (i.e. Pocket PC and SmartPhone development). So I’ll write a separate article on the shader solution later on.

As always, you can get automatic updates whenever I post new articles via the RSS feed (subscribe here), or if you prefer email updates, click here.

Thanks for reading! 😉

-Greg Dolley

*Special thanks goes out to Luke Ahearn for those nice looking textures! 🙂

Posted in 3D Graphics | 9 Comments »

New 3D Graphics Programming Category

Posted by gregd1024 on November 20, 2007

Since my blog service, WordPress, doesn’t show newly created categories until there is at least one post in them, I’m creating this dummy post as a test.

-Greg Dolley

Posted in 3D Graphics | Leave a Comment »