2014-03-21




In this article, I will introduce the reader to DirectX 11. We will create a simple demo application that can be used to create more complex DirectX examples and demos. After reading this article, you should be able to create a DirectX application and render geometry using a simple vertex shader and pixel shader.

Table of Contents

Introduction

DirectX 11 Components

Direct2D

Direct3D

DirectWrite

DirectXMath

XAudio2

XInput

DXGI

DirectX 11 Pipeline

Input-Assembler Stage

Vertex Shader Stage

Hull Shader Stage

Tessellator Stage

Domain Shader Stage

Geometry Shader Stage

Stream Output Stage

Rasterizer Stage

Pixel Shader Stage

Output-Merger Stage

DirectX Demo

DirectX Project

Project Configuration

Precompiled Header

Global Header

Preamble

The Main Window

Windows Procedure Function

The Run Method

The Main Function

Initialize DirectX

Create Device and Swap Chain

Create a RenderTargetView

Create a Depth-Stencil Buffer

Create a Depth-Stencil View

Create a Depth-Stencil State Object

Create a Rasterizer State Object

Initialize the Viewport

Update the Main Function

Shaders

Vertex Shader

Pixel Shader

Compiling Shaders

Precompiled Shaders

Runtime Compiled Shader

Get the Latest Profile

Create a Shader Object

Load a Shader

DirectX Demo Cont...

Load Demo Content

Vertex Buffer

Index Buffer

Constant Buffers

Load Shaders

Load and Compile at Runtime

Load a Precompiled Shader Object

Load from Byte Array

Input Layout

Load Pixel Shader

Projection Matrix

The Update Function

Clear

Present

Render

Clear the Screen

Setup the Input Assembler Stage

Setup the Vertex Shader Stage

Setup the Rasterizer Stage

Setup the Pixel Shader Stage

Setup the Output Merger Stage

Draw the Cube

Present

Cleanup

UnloadContent

Cleanup

Update the Run Function

Update the Main Function

Run the Demo

Download the Demo

References


Introduction

DirectX is a collection of application programming interfaces (APIs). The components of the DirectX API provide low-level access to the hardware running on a Windows-based operating system [1].

Prior to the release of Windows 95, application programmers had direct access to low-level hardware devices such as video, mouse, and keyboards. In Windows 95, access to these low-level hardware devices was restricted [2]. The developers at Microsoft realized that in order to facilitate access to these devices, APIs needed to be developed to provide an abstract way to access them [1].

The first version of DirectX was released in September 1995 shortly after the release of Windows 95 under the name Windows Game SDK [1]. Through the period of 1995-1997, the DirectX library went through several version changes to reach version 5. Subsequent major revisions saw a release on an annual schedule until DirectX 9 which wasn’t introduced until two years after DirectX 8 [1].

Prior to DirectX 8.0, the graphics programmer was restricted to a fixed-function rendering pipeline. This meant that the implementation of the rendering algorithms was fixed in the graphics hardware. DirectX 8 introduced the first version of a programmable shading language with Shader Model 1 [4]. Shader Model 1 featured a single shader profile for creating a very simple vertex shader and did not provide a shader profile for pixel shading.

DirectX 9.0 was released in December 2002 [1] and introduced Shader Model 2.0. Shader Model 2.0 introduced a new vertex shader profile as well as a pixel shader profile. Pixel shaders provided the ability to create per-pixel lighting.

DirectX 9.0c was released in August 2004 [1] together with the introduction of Shader Model 3.0. Shader Model 3.0 extended the existing vertex shader and pixel shader profiles increasing the number of instructions and allowing for more complex shaders.

In November 2006, DirectX 10 was released [1] which introduced Shader Model 4.0. Shader Model 4.0 extended the functionality of the vertex shader and the pixel shader as well as introduced a new shader profile called the geometry shader. Shader Model 4.0 also introduced the Effect Framework which allowed the graphics programmer to create effect files (.fx) that combined vertex, pixel, and geometry shaders in a single file. Support for the effect files was subsequently dropped in later versions of the Direct3D API.

DirectX 11 was released in October 2009 [1] together with Shader Model 5.0. Shader Model 5.0 extended the vertex, pixel, and geometry shaders of Shader Model 4.0 and introduced tessellation and compute shader profiles. Tessellation shaders provide the functionality to progressively refine the detail of a mesh at run-time, while compute shaders provide a general-purpose compute language that is executed on the GPU instead of the CPU.

On March 20, 2014, Microsoft announced DirectX 12 [5], which will no doubt require me to rewrite this entire article.

The table below shows the various releases of DirectX and the corresponding shader model and shader profiles [1][2][4].

| DirectX      | Release Date      | Shader Model     | Shader Profile(s)                              |
|--------------|-------------------|------------------|------------------------------------------------|
| DirectX 8.0  | November 12, 2000 | Shader Model 1.0 | vs_1_1                                         |
| DirectX 9.0  | December 19, 2002 | Shader Model 2.0 | vs_2_0, vs_2_x, ps_2_0, ps_2_x                 |
| DirectX 9.0c | August 4, 2004    | Shader Model 3.0 | vs_3_0, ps_3_0                                 |
| DirectX 10.0 | November 30, 2006 | Shader Model 4.0 | vs_4_0, ps_4_0, gs_4_0                         |
| DirectX 10.1 | February 4, 2008  | Shader Model 4.1 | vs_4_1, ps_4_1, gs_4_1                         |
| DirectX 11.0 | October 22, 2009  | Shader Model 5.0 | vs_5_0, ps_5_0, gs_5_0, ds_5_0, hs_5_0, cs_5_0 |

DirectX 11 Components

As previously mentioned, the DirectX SDK is actually a collection of programming APIs. The DirectX API that deals with hardware-accelerated 3D graphics is the Direct3D API (and the subject of this article); however, there are several more APIs which make up the DirectX SDK.

Direct2D

Direct2D is a hardware accelerated 2D graphics API which provides high-performance and high-quality rendering for 2D geometry, bitmaps, and text [6].

Direct3D

Direct3D is a hardware accelerated 3D graphics API [7]. This API is the subject of this article.

DirectWrite

The DirectWrite API provides support for high-quality sub-pixel text rendering that can use Direct2D, GDI, or application-specific rendering technology [8].

DirectXMath

The DirectXMath API provides SIMD-friendly C++ types and functions for the linear algebra and graphics math operations that are common to DirectX applications [9]. We will be using this math library for some simple math operations in the application code.

XAudio2

XAudio2 is a low-level audio API that provides a signal processing and mixing foundation for developing high-performance audio engines for games [10].

XInput

The XInput Game Controller API enables applications to receive input from the Xbox 360 Controller for Windows [11].

DXGI

The purpose of the Microsoft DirectX Graphics Infrastructure (DXGI) is to manage low-level tasks that can be independent of the DirectX graphics runtime. You may want to work with DXGI directly if your application needs to enumerate devices or control how data is presented to an output [12]. In this article, we will be using DXGI to enumerate the display devices in order to determine the optimal refresh rate of the screen.

DirectX 11 Pipeline

The DirectX 11 graphics pipeline consists of several stages. The following diagram illustrates the different stages of the DirectX 11 rendering pipeline. The arrows indicate the flow of data through each stage as well as the flow of data from memory resources such as buffers, textures, and constant buffers that are available on the GPU.



DirectX 11 Rendering Pipeline [13]

The image illustrates the 10 stages of the DirectX 11 rendering pipeline. The rectangular blocks are fixed-function stages and cannot be modified programmatically. The rounded-rectangular blocks indicate programmable stages of the pipeline.

Input-Assembler Stage

The first stage of the DirectX graphics pipeline is the Input-Assembler (IA) stage. In this stage, the geometry is specified and the layout of the data which is expected by the vertex shader is configured [13].

Vertex Shader Stage

The Vertex Shader (VS) stage is usually responsible for transforming the vertex position from object space into clip space but it can also be used for performing skinning of skeletal animated meshes or per-vertex lighting [13]. The input to the vertex shader is a single vertex and the minimum output from the vertex shader is a single vertex position in clip-space (the transformation to clip-space can also be performed by the tessellation stage or the geometry shader if either is active).

Hull Shader Stage

The Hull Shader (HS) stage is an optional shader stage and is responsible for determining how much an input control patch should be tessellated by the tessellation stage [14].

Tessellator Stage

The Tessellator Stage is a fixed-function stage that subdivides a patch primitive into smaller primitives according to the tessellation factors specified by the hull shader stage [14].

Domain Shader Stage

The Domain Shader (DS) stage is an optional shader stage and it computes the final vertex attributes based on the output control points from the hull shader and the interpolation coordinates from the tessellator stage [14]. The input to the domain shader is a single output point from the tessellator stage and the output is the computed attributes of the tessellated primitive.

Geometry Shader Stage

The Geometry Shader (GS) stage is an optional shader stage that takes a single geometric primitive (a single vertex for a point primitive, three vertices for a triangle primitive, and two vertices for a line primitive) as input and can either discard the primitive, transform the primitive into another primitive type (for example a point to a quad) or generate additional primitives [13].

Stream Output Stage

The Stream Output (SO) stage is an optional fixed-function stage that can be used to feed primitive data back into GPU memory. This data can be recirculated back to the rendering pipeline to be processed by another set of shaders [13]. This is useful for spawning or terminating particles in a particle effect. The geometry shader can discard particles that should be terminated or generate new particles if particles should be spawned.

Rasterizer Stage

The Rasterizer (RS) stage is a fixed-function stage which will clip primitives against the view frustum and perform primitive culling if either front-face or back-face culling is enabled [13]. The rasterizer stage will also interpolate the per-vertex attributes across the face of each primitive and pass the interpolated values to the pixel shader.

Pixel Shader Stage

The Pixel Shader (PS) stage takes the interpolated per-vertex values from the rasterizer stage and produces one (or more) per-pixel color values [13]. The pixel shader can also optionally output a depth value for the current pixel by writing a single-component 32-bit floating-point value to the SV_Depth semantic, but this is not a requirement of the pixel shader program. The pixel shader is invoked once for each pixel that is covered by a primitive [15].

Output-Merger Stage

The Output-Merger (OM) stage combines the various types of output data (pixel shader output values, depth values, and stencil information) together with the contents of the currently bound render targets to produce the final pipeline result [13].

DirectX Demo

Now that we have a little bit of background information regarding the different stages of DirectX 11, let’s try to put it together to create a simple DirectX application that is capable of rendering 3D geometry using a minimal vertex shader and pixel shader.

In this tutorial I will be using Visual Studio 2012 to create a template project that can be used to create subsequent DirectX 11 demos in the future. Starting with Visual Studio 2012 and the Windows 8 SDK, the DirectX SDK is now part of the Windows SDK, so you do not need to download and install the DirectX SDK separately. See Where is the DirectX SDK? for more information.

Visual Studio 2012 also has the ability to compile your HLSL shader code as part of the regular compilation step, and you can then load the precompiled shader code directly instead of compiling the shader code at runtime. This enables your application to load faster, especially if you have many shaders. In this article I will show how you can set up your project to make use of shader compilation, but I will also show you how you can load and compile your shaders at runtime.

In this demo, I will not be using any 3rd-party dependencies. All included headers and libraries are part of the Windows 8 SDK that comes with Visual Studio 2012; however, you should make sure that you have applied the latest updates to Visual Studio 2012 so that you are working with the newest version of the Windows 8 SDK.

DirectX Project

The first step to creating our DirectX demo is to set up an empty Win32 project in Visual Studio. First, let’s start up Visual Studio.



Visual Studio 2012

Select File > New Project from the main menu to bring up the New Project dialog box.

Visual Studio 2012 (New Project)

In the New Project dialog box, select the Visual C++ > Empty Project template. Choose a Name, Location and optionally a Solution name (or accept the default) for your new project and press the OK button to create the new project.

Visual Studio 2012 (DirectXTemplate)

Before we continue configuring the project, let’s create a single CPP source file.

Select Project > Add New Item… from the main menu.

Visual Studio 2012 (Add New Item)

Select the Visual C++ > C++ File (.cpp) template and specify the name main.cpp and a location for the new source file. I prefer to put my C++ source files in a subdirectory named src relative to the project folder. Press the OK button to create the file and add it to the project.

We need at least one CPP source file in the project in order to configure the project correctly. With the main.cpp file added to the project, we can now configure the project settings.

Project Configuration

Open the project properties dialog by selecting Project > Properties from the main menu.

Visual Studio 2012 (Project Properties)

In the Configuration drop-down box, select All Configurations.

Select Configuration Properties > General and change the Output Directory to bin\.

In the Debug configuration only, change the Target Name to $(ProjectName)d. With this configuration, both the debug and the release builds will go to the same folder. To ensure we don’t replace release builds with debug builds and vice versa, we will append the letter “d” to the end of the debug builds.

Select Configuration Properties > Debugging and change the Working Directory to $(OutDir) for both the Debug and Release configurations. Doing this will ensure that the current working directory will be correctly set to the location of our executable file so that we can express paths in the application relative to the executable (instead of relative to the project folder which usually is a major cause of confusion for beginning programmers).

Visual Studio 2012 (Project Properties)

If you want to place your public include folders in a separate directory then we need to tell the C++ compiler where our include files are located. In the C/C++ > General options add the name of the public include folder to the Additional Include Directories options. In my case, I have a separate folder called inc relative to my project folder where I will keep the header files for the project.

Visual Studio 2012 (Project Properties – C/C++)

You will notice that we do not need to specify the location of the DirectX headers and libraries when using Visual Studio 2012. These paths are automatically included when we create a new project in Visual Studio 2012.

Precompiled Header

Although not absolutely necessary, I find using precompiled headers useful as it reduces the overall compile time of the project. For this small project, it may not be necessary to use precompiled headers but for large projects it is definitely useful to know how to setup precompiled headers.

For more information on creating and using precompiled header files for your Visual Studio project, please refer to Creating Precompiled Header Files in the MSDN documentation.

Create a new header file in your project with the name DirectXTemplatePCH.h or something similar. The PCH postfix indicates that this file will be used to generate the precompiled header file.

Create a new C++ file in your project with the name DirectXTemplatePCH.cpp. This file will be used to create the precompiled header.

The DirectXTemplatePCH.cpp file should contain only a single include statement and nothing else! If you have other include directives in this file, or any C++ code, then you are doing it wrong.
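Assuming the header is named DirectXTemplatePCH.h as described above, the entire contents of DirectXTemplatePCH.cpp would look like this:

```cpp
// DirectXTemplatePCH.cpp : used only to generate the precompiled header.
#include <DirectXTemplatePCH.h>
```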

Now that we’ve added the initial files that will be used for precompiled headers, let’s configure our project to create and use the precompiled headers.

Select Project > Properties from the main menu.

Visual Studio 2012 (Project Properties – Precompiled Headers)

Make sure that All Configurations is selected in the Configuration drop-down box.

Select Configuration Properties > C/C++ > Precompiled Headers and set the Precompiled Header option to Use (/Yu).

Set the Precompiled Header File option to the name of the header file you created in the previous step. In my case the name of the precompiled header file is DirectXTemplatePCH.h.

Apply the settings and without closing the project properties dialog box, select the DirectXTemplatePCH.cpp source file in the Solution Explorer.

Visual Studio 2012 (Project Properties – DirectXTemplatePCH.cpp)

For the DirectXTemplatePCH.cpp source file only, change the Precompiled Header option to Create (/Yc) and the other options should be the same as we specified at the project level.

With these settings configured, let’s start writing some code!

Global Header

The global header file will contain all of the external (non-changing) include files. You should not include project-specific header files in the global header file because they change often. If the contents of the global header file change often, then we can no longer take advantage of precompiled headers.

Since we are creating a Windows application, we first include the ubiquitous Windows header file. This header file contains all of the definitions for creating a Windows based application.

The next set of headers includes the Direct3D API. The d3dcompiler header file is required for loading and compiling HLSL shaders. The DirectXMath header file includes math primitives like vectors, matrices and quaternions as well as the functions to operate on those primitives. The DirectXColors header defines a set of commonly used colors.

This set of statements will cause the library dependencies to be automatically linked in the linker stage. You can also specify these libraries in the Additional Dependencies property in the Linker options if you want but putting them here simplifies the project configuration settings. Also if you were creating a library project, this file could be included in the global header file of another project to perform automatic linking of the required library dependencies.

The SafeRelease function can be used to safely release a COM object and set the COM pointer to NULL. It allows us to release a COM object even if it has already been released before. Since we will be releasing COM objects a lot in this application, this function will also allow us to write neater code.
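Putting this together, the global header might look something like the following sketch. The exact set of linked libraries is an assumption on my part; d3d11.lib, dxgi.lib, and d3dcompiler.lib cover the functionality described in this article.

```cpp
// DirectXTemplatePCH.h : global header used to generate the precompiled header.
#include <Windows.h>

// Direct3D API, shader compiler, math primitives, and common color constants.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <DirectXMath.h>
#include <DirectXColors.h>

// Automatically link the required library dependencies.
#pragma comment( lib, "d3d11.lib" )
#pragma comment( lib, "dxgi.lib" )
#pragma comment( lib, "d3dcompiler.lib" )

// Safely release a COM object and set the pointer to NULL so that
// releasing the same object again is a harmless no-op.
template< typename T >
inline void SafeRelease( T& ptr )
{
    if ( ptr != NULL )
    {
        ptr->Release();
        ptr = NULL;
    }
}
```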

Preamble

Before we get into the application code, we first need to define some global variables that will be used throughout the demo.

We first need to include the global header file that we created in the previous step.

I also import the DirectX namespace into the global namespace. All of the functions and types defined in the DirectXMath API are wrapped in the DirectX namespace. I got really tired of typing out the DirectX namespace every time I wanted to use a vector or a matrix, so instead I just import the namespace.
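The first few lines of main.cpp would then look like this:

```cpp
#include <DirectXTemplatePCH.h>

// Import the DirectXMath types and functions into the global namespace.
using namespace DirectX;
```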

The first set of globals defines some properties for the application window.

The size of the window is defined by the g_WindowWidth and g_WindowHeight variables. The actual window that we will create will be slightly larger than this because these variables actually define the size of the renderable area (or client area) of the window. The actual window size, including the window frame, will be computed before the window is created.

Before we can create a window instance, we need to create a window class. The window class should be unique for our application so we need to define a unique name for the class as well. The unique window class name is defined using the g_WindowClassName global variable.

The g_WindowName variable holds the name of the window that will be created from the window class. The window name will also be displayed in the window’s title bar.

The g_WindowHandle is used to identify the instance of the window that will be created.

A regular LCD or LED computer monitor has a vertical refresh rate of 60 Hz. That means that the image displayed on the screen is presented 60 times per second. When rendering your 3D application, you can choose to let your application display the image at the same rate as the screen’s refresh rate. The advantage of synchronizing your application’s display rate with the refresh rate of the screen is that it eliminates the visible artifact known as screen tearing. If you want to render your scene as fast as possible, you can set the g_EnableVSync variable to FALSE and then your application will not wait for the vertical refresh of the screen to present the scene on the screen.
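The window-related globals could be declared as follows. The initial values shown here are only illustrative, and the narrow string types assume the project uses the Multi-Byte character set; with Unicode you would use LPCWSTR and wide string literals instead.

```cpp
// The size of the renderable (client) area of the window.
const LONG g_WindowWidth = 1280;
const LONG g_WindowHeight = 720;
// Unique name used to register the window class.
LPCSTR g_WindowClassName = "DirectXWindowClass";
// The name (and title bar text) of the window instance.
LPCSTR g_WindowName = "DirectX Template";
// A handle to the created window instance.
HWND g_WindowHandle = 0;
// Synchronize presentation with the vertical refresh of the screen.
const BOOL g_EnableVSync = TRUE;
```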

These are all of the variables that we need to define for the application window. The next set of variables will be DirectX specific.

The g_d3dDevice, g_d3dDeviceContext, and g_d3dSwapChain are the absolute minimum variables required for the most basic DirectX 11 application. An ID3D11Device instance is used for allocating GPU resources such as buffers, textures, shaders, and state objects (to name a few). The ID3D11DeviceContext is used to configure the rendering pipeline and draw geometry. The IDXGISwapChain stores the buffers that are used for rendering data. This interface is also used to determine how the buffers are swapped when the rendered image should be presented to the screen.

The g_d3dRenderTargetView and the g_d3dDepthStencilView variables are used to define the subresource views of the areas of a buffer to which we will draw. A resource view defines an area of a buffer that can be used for rendering. In this case we need two views: the g_d3dRenderTargetView will refer to a subresource of a color buffer while the g_d3dDepthStencilView will refer to a subresource of a depth/stencil buffer.

The IDXGISwapChain instance has only a single color buffer that will be used to store the final color that is to be presented on the screen. In order to store depth information, we must create a separate depth buffer. The g_d3dDepthStencilBuffer will be used to refer to a 2D texture object that will be used to store the depth values so that objects close to the camera do not get overdrawn by objects that are farther away from the camera regardless of their drawing order.

We also need to define a few state variables for configuring the rasterizer and output-merger stages. The g_d3dDepthStencilState will be used to store the depth and stencil states used by the output-merger stage and the g_d3dRasterizerState variable will be used to store rasterizer state used by the rasterizer stage.

The g_Viewport variable defines the size of the viewport rectangle. The viewport rectangle is also used by the rasterizer stage to determine the renderable area on screen. You can use multiple viewports to implement split-screen multiplayer games.
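A minimal set of declarations for these DirectX objects might look like this:

```cpp
// Direct3D device, immediate context, and swap chain.
ID3D11Device* g_d3dDevice = nullptr;
ID3D11DeviceContext* g_d3dDeviceContext = nullptr;
IDXGISwapChain* g_d3dSwapChain = nullptr;

// Render target view for the color buffer and a depth-stencil view
// for the depth buffer.
ID3D11RenderTargetView* g_d3dRenderTargetView = nullptr;
ID3D11DepthStencilView* g_d3dDepthStencilView = nullptr;
// A 2D texture that backs the depth-stencil view.
ID3D11Texture2D* g_d3dDepthStencilBuffer = nullptr;

// State objects for the output-merger and rasterizer stages.
ID3D11DepthStencilState* g_d3dDepthStencilState = nullptr;
ID3D11RasterizerState* g_d3dRasterizerState = nullptr;
// The viewport rectangle used by the rasterizer stage.
D3D11_VIEWPORT g_Viewport = { 0 };
```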

The next set of variables that will be declared are specific to this demo and not generic for DirectX initialization.

The g_d3dInputLayout variable will be used to describe the order and type of data that is expected by the vertex shader.

The g_d3dVertexBuffer and g_d3dIndexBuffer variables will be used to store the vertex data and the index list that defines the geometry which will be rendered. The vertex buffer stores the data for each unique vertex in the geometry. In this demo, each vertex will store its position in 3D space and the color of the vertex. The index buffer stores a list of indices into the vertex buffer. The order of the indices in the index buffer determines the order that vertices in the vertex buffer are sent to the GPU for rendering.

For this simple demo, we will have two shaders, a vertex shader and a pixel shader. The g_d3dVertexShader variable will hold a reference to the vertex shader object and the g_d3dPixelShader will store a reference to the pixel shader.
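These demo-specific resources could be declared like this:

```cpp
// Vertex data for the cube and the input layout expected by the vertex shader.
ID3D11InputLayout* g_d3dInputLayout = nullptr;
ID3D11Buffer* g_d3dVertexBuffer = nullptr;
ID3D11Buffer* g_d3dIndexBuffer = nullptr;

// Shader objects.
ID3D11VertexShader* g_d3dVertexShader = nullptr;
ID3D11PixelShader* g_d3dPixelShader = nullptr;
```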

Next we’ll declare a set of buffers that will be used to update the constant variables that are declared in the vertex shader.

Here we declare three constant buffers. Constant buffers are used to store shader variables that remain constant during the current draw call. An example of a constant shader variable is the camera’s projection matrix. Since the projection matrix will be the same for every vertex of the object, this variable does not need to be passed to the shader using per-vertex data. Instead, we declare a constant buffer that stores the projection matrix of the camera, and this shader variable only needs to be updated when the camera’s projection matrix is modified (which is to say, not often).

Application: The application level constant buffer stores variables that rarely change. The contents of this constant buffer are updated once during application startup and perhaps never again. An example of an application level shader variable is the camera’s projection matrix. Usually the projection matrix is initialized once when the render window is created and only needs to be updated if the dimensions of the render window change (for example, if the window is resized).

Frame: The frame level constant buffer stores variables that change each frame. An example of a frame level shader variable would be the camera’s view matrix which changes whenever the camera moves. This variable only needs to be updated once at the beginning of the render function and generally stays the same for all objects rendered that frame.

Object: The object level constant buffer stores variables that are different for every object being rendered. An example of an object level shader variable is the object’s world matrix. Since each object in the scene will probably have a different world matrix this shader variable needs to be updated for every separate draw call.

This separation of shader variables is arbitrary and you can choose whatever method you would like to separate the constant buffers in your own shaders. Generally, you should split up the constant buffers in your shader based on how frequently the variables need to be updated.
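One convenient way to declare the three constant buffers is an enumeration that indexes an array of buffer pointers, along these lines:

```cpp
// Constant buffers, grouped by the frequency at which they are updated.
enum ConstantBuffer
{
    CB_Application, // Updated rarely (e.g. the projection matrix).
    CB_Frame,       // Updated once per frame (e.g. the view matrix).
    CB_Object,      // Updated once per object (e.g. the world matrix).
    NumConstantBuffers
};

ID3D11Buffer* g_d3dConstantBuffers[NumConstantBuffers];
```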

The next set of variables define the variables that will be updated by the application and used to populate the variables in the constant buffers of the shader.

We will only draw a single object on the screen in this demo. For this reason, we only need to keep track of a single world matrix which will transform the object’s vertices into world space. The g_WorldMatrix is a 4×4 matrix which will be used to store the world matrix of the cube in our scene.

The g_ViewMatrix only needs to be updated once per frame and is used to store the camera’s view matrix that will transform the object’s vertices from world space into view space.

The g_ProjectionMatrix is updated once at the beginning of the application and is used to store the projection matrix of the camera. The projection matrix will transform the object’s vertices from view space into clip space (which is required by the rasterizer).
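The three matrices are simply declared as DirectXMath matrix types:

```cpp
// Demo parameters.
XMMATRIX g_WorldMatrix;      // Object space -> world space.
XMMATRIX g_ViewMatrix;       // World space -> view space.
XMMATRIX g_ProjectionMatrix; // View space -> clip space.
```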

Next, we’ll define the geometry for the single object that will be rendered in our scene.

The VertexPosColor struct defines the properties of a single vertex. In this case the Position member variable will be used to store the position of the vertex in 3D space and the Color member variable will be used to store the red, green, and blue components of the vertex’s color.
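In code, the struct is a simple pair of DirectXMath vectors (assuming XMFLOAT3 for both members):

```cpp
// A single vertex consisting of a position and a color.
struct VertexPosColor
{
    XMFLOAT3 Position;
    XMFLOAT3 Color;
};
```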

The cube geometry that we will render consists of 8 unique vertices (one for each corner of the cube). We cannot simply send the cube geometry directly to the rendering pipeline as-is because the rendering pipeline only knows about points, lines, and triangles (not cubes, spheres, or any other complex shape). In order to create a set of triangles we need to define an index list which determines the order the vertices are sent to the GPU for rendering. In this case, each face of the cube consists of two triangles, an upper and a lower triangle. The first face of the cube consists of six indices: { {0, 1, 2}, {0, 2, 3} }. You will notice that in order to create the face, we duplicate vertices 0 and 2.

When creating the index buffer for our geometry, we must also take the winding order of the vertices into consideration. The winding order of front-facing triangles is determined by the rasterizer state and we can specify that the winding order should be either clock-wise or counter-clockwise. This choice is arbitrary but it will have an impact on the order of the indices in the index buffer. For this demo, we will consider front-facing triangles to be in a clock-wise winding order. The diagram below shows the winding order for the first face of the cube.

Clockwise Winding Order

The lower triangle of the face consists of vertices { 0, 1, 2 } and the upper triangle of the face consists of vertices { 0, 2, 3 }. The gray dashed line represents the triangle subdivision of the face.
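With a clockwise winding order, the cube’s vertex and index data might be defined as follows. The corner positions describe a 2×2×2 cube centered at the origin, the colors are arbitrary, and the exact index order shown is just one valid possibility:

```cpp
// The eight unique corners of the cube.
VertexPosColor g_Vertices[8] =
{
    { XMFLOAT3( -1.0f, -1.0f, -1.0f ), XMFLOAT3( 0.0f, 0.0f, 0.0f ) }, // 0
    { XMFLOAT3( -1.0f,  1.0f, -1.0f ), XMFLOAT3( 0.0f, 1.0f, 0.0f ) }, // 1
    { XMFLOAT3(  1.0f,  1.0f, -1.0f ), XMFLOAT3( 1.0f, 1.0f, 0.0f ) }, // 2
    { XMFLOAT3(  1.0f, -1.0f, -1.0f ), XMFLOAT3( 1.0f, 0.0f, 0.0f ) }, // 3
    { XMFLOAT3( -1.0f, -1.0f,  1.0f ), XMFLOAT3( 0.0f, 0.0f, 1.0f ) }, // 4
    { XMFLOAT3( -1.0f,  1.0f,  1.0f ), XMFLOAT3( 0.0f, 1.0f, 1.0f ) }, // 5
    { XMFLOAT3(  1.0f,  1.0f,  1.0f ), XMFLOAT3( 1.0f, 1.0f, 1.0f ) }, // 6
    { XMFLOAT3(  1.0f, -1.0f,  1.0f ), XMFLOAT3( 1.0f, 0.0f, 1.0f ) }  // 7
};

// Two triangles per face, wound clockwise when viewed from outside the cube.
WORD g_Indices[36] =
{
    0, 1, 2, 0, 2, 3, // front
    4, 6, 5, 4, 7, 6, // back
    4, 5, 1, 4, 1, 0, // left
    3, 2, 6, 3, 6, 7, // right
    1, 5, 6, 1, 6, 2, // top
    4, 0, 3, 4, 3, 7  // bottom
};
```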

The last part of the preamble is the function declarations.

The WndProc function is the function that will handle any mouse, keyboard, and window events that are sent to our application window.

The LoadShader template function will be used to load and compile a shader at runtime. It’s templated on the type of shader that is being loaded.

The LoadContent and UnloadContent functions will load and unload the demo-specific resources such as the vertex buffer and index buffer GPU resources for our cube geometry.

The Update function will be used to update any logic required by our demo.

The Render function will render the scene.

The Cleanup function is used to release any DirectX specific resources like the device, device context, and swap chain.
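The corresponding forward declarations might look like the following sketch. The exact LoadShader signature is an assumption here (it anticipates the parameters described in the shader-loading sections later in the article) and requires the <string> header:

```cpp
// Forward declarations.
LRESULT CALLBACK WndProc( HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam );

template< class ShaderClass >
ShaderClass* LoadShader( const std::wstring& fileName,
                         const std::string& entryPoint,
                         const std::string& profile );

bool LoadContent();
void UnloadContent();

void Update( float deltaTime );
void Render();
void Cleanup();
```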

The Main Window

The first thing our application will do is initialize and create the window. We will create a function called InitApplication for this purpose. First we’ll register a window class and then we’ll create a window using that window class.

The window class defines a set of attributes which serve as a template for creating windows. Each window your application creates must be based on a window class that has been registered.

The WNDCLASSEX structure has the following definition [16]:
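```cpp
typedef struct tagWNDCLASSEX {
    UINT        cbSize;
    UINT        style;
    WNDPROC     lpfnWndProc;
    int         cbClsExtra;
    int         cbWndExtra;
    HINSTANCE   hInstance;
    HICON       hIcon;
    HCURSOR     hCursor;
    HBRUSH      hbrBackground;
    LPCTSTR     lpszMenuName;
    LPCTSTR     lpszClassName;
    HICON       hIconSm;
} WNDCLASSEX, *PWNDCLASSEX;
```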

The members have the following definitions:

UINT cbSize: The size, in bytes, of this structure. Set this member to sizeof(WNDCLASSEX).

UINT style: The class style. In this case we use the CS_HREDRAW class style which causes the entire window to redraw if a movement or size adjustment changes the width of the client area and the CS_VREDRAW class style which causes the entire window to redraw if a movement or a size adjustment changes the height of the client area.

WNDPROC lpfnWndProc: A pointer to the windows procedure that will handle window events for any windows created using this class. In this case we specify the yet undefined WndProc function which was declared earlier.

int cbClsExtra: The number of extra bytes to allocate following the window-class structure. This parameter is not used here and should be set to 0.

int cbWndExtra: The number of extra bytes to allocate following the window instance. This parameter is not used here and should be set to 0.

HINSTANCE hInstance: A handle to the instance of the module that owns this window class. This module instance handle is passed to the WinMain function which will be shown later.

HICON hIcon: A handle to the class icon. This icon will be used to represent a window created with this class in the task bar and in the top-left corner of the window’s title bar. You can load an icon from a resource file using the LoadIcon function. If this value is NULL (or nullptr) then the default application icon is used.

HCURSOR hCursor: A handle to the class cursor. This must be a handle to a valid cursor resource. For this demo, we will use the default arrow cursor by specifying LoadCursor( nullptr, IDC_ARROW ).

HBRUSH hbrBackground: A handle to the class background brush. This member can be a handle to the brush to be used for painting the background, or it can be a color value. A color value must be one of the following standard system colors (the value 1 must be added to the chosen color and the result converted to an HBRUSH):

COLOR_ACTIVEBORDER

COLOR_ACTIVECAPTION

COLOR_APPWORKSPACE

COLOR_BACKGROUND

COLOR_BTNFACE

COLOR_BTNSHADOW

COLOR_BTNTEXT

COLOR_CAPTIONTEXT

COLOR_GRAYTEXT

COLOR_HIGHLIGHT

COLOR_HIGHLIGHTTEXT

COLOR_INACTIVEBORDER

COLOR_INACTIVECAPTION

COLOR_MENU

COLOR_MENUTEXT

COLOR_SCROLLBAR

COLOR_WINDOW

COLOR_WINDOWFRAME

COLOR_WINDOWTEXT

LPCTSTR lpszMenuName: Pointer to a null-terminated character string that specifies the resource name of the class menu, as the name appears in the resource file. If this member is NULL, windows belonging to this class have no default menu.

LPCTSTR lpszClassName: A pointer to a null-terminated const string which is used to uniquely identify this window class. This class name will be used to create the window instance.

HICON hIconSm: A handle to a small icon that is associated with the window class. If this member is NULL (or nullptr), the system searches the icon resource specified by the hIcon member for an icon of the appropriate size to use as the small icon.

With the window class structure initialized, the window class is registered using the RegisterClassEx function.
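The class registration portion of the InitApplication function might look like this fragment, using the globals defined in the preamble (InitApplication is this article’s name for the window initialization function):

```cpp
// Register a window class for creating our render window with.
WNDCLASSEX wndClass = { 0 };
wndClass.cbSize = sizeof( WNDCLASSEX );
wndClass.style = CS_HREDRAW | CS_VREDRAW;
wndClass.lpfnWndProc = &WndProc;
wndClass.hInstance = hInstance;
wndClass.hCursor = LoadCursor( nullptr, IDC_ARROW );
wndClass.hbrBackground = (HBRUSH)( COLOR_WINDOW + 1 );
wndClass.lpszMenuName = nullptr;
wndClass.lpszClassName = g_WindowClassName;

if ( !RegisterClassEx( &wndClass ) )
{
    return -1;
}
```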

With the window class registered, we can create a window instance using this class.

We want to create a window with a client area of g_WindowWidth by g_WindowHeight, but if we create a window with those dimensions, the client area will be slightly smaller. In order to get a window with a client area of the size we want, we can use the AdjustWindowRect function to adjust the initial window rectangle to account for the window style.
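For example, assuming the WS_OVERLAPPEDWINDOW window style:

```cpp
// Compute the size of a window rectangle whose client area has the
// requested dimensions, given the window style.
RECT windowRect = { 0, 0, g_WindowWidth, g_WindowHeight };
AdjustWindowRect( &windowRect, WS_OVERLAPPEDWINDOW, FALSE );
```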

The window instance is created using the CreateWindow function. This function has the following signature [18]:
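```cpp
HWND WINAPI CreateWindow(
    _In_opt_ LPCTSTR   lpClassName,
    _In_opt_ LPCTSTR   lpWindowName,
    _In_     DWORD     dwStyle,
    _In_     int       x,
    _In_     int       y,
    _In_     int       nWidth,
    _In_     int       nHeight,
    _In_opt_ HWND      hWndParent,
    _In_opt_ HMENU     hMenu,
    _In_opt_ HINSTANCE hInstance,
    _In_opt_ LPVOID    lpParam
);
```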

The _In_, _Out_, _Outptr_, _Inout_, etc. macros are part of Microsoft’s Source-code Annotation Language (SAL) [17] and are primarily used to describe how a function uses its parameters. Any annotation which includes _opt_ indicates that the parameter is optional and can be NULL.

The parameters of this function have the following definitions:

LPCTSTR lpClassName: The name of the window class to use as a template to create the window instance. The class name must match one of the classes that were previously registered using RegisterClass or RegisterClassEx and associated to the hInstance module.

LPCTSTR lpWindowName: The name of the window instance. When creating a window with a title bar, the window name will be displayed in the title bar.

DWORD dwStyle: The style of the window being created. This parameter can be a combination of any of the window styles.

int x: The initial horizontal position of the window. For an overlapped or pop-up window, the x parameter is the initial x-coordinate of the window’s upper-left corner, in screen coordinates. If this parameter is set to CW_USEDEFAULT, the system selects the default position for the window’s upper-left corner and ignores the y parameter.

int y: The initial vertical position of the window. For an overlapped or pop-up window, the y parameter is the initial y-coordinate of the window’s upper-left corner, in screen coordinates. If an overlapped window is created with the WS_VISIBLE style bit set and the x parameter is set to CW_USEDEFAULT, then the y parameter determines how the window is shown. If the y parameter is CW_USEDEFAULT, then the window manager calls ShowWindow with the SW_SHOW flag after the window has been created. If the y parameter is some other value, then the window manager calls ShowWindow with that value as the nCmdShow parameter.

int nWidth: The width, in device units, of the window. For overlapped windows, nWidth is either the window’s width, in screen coordinates, or CW_USEDEFAULT. If nWidth is CW_USEDEFAULT, the system selects a default width and height for the window.
