OpenGL 4.5


This demo describes how to set up an OpenGL 4.5 context in C#, using VS 2015 and OpenTK. We will draw two triangles by passing VBOs containing indexed coordinates to the shader.

I mainly used Dreamstate Coding Tutorial 5 as a starting point for this demo. All the rendering code has been reshuffled into a dedicated class and I added in the index buffer and matrices under my own steam.

Source code is provided at the bottom of the page, but first I'll try to give a quick description of each bit without going into too much detail. For a more in-depth set of tutorials, do check out the excellent site above.

Not that it's evident, but this is actually rendered in a 3D perspective, so I'll replace this with a more interesting screenshot soon...

After creating a new C# project, the first step is to import the OpenTK package via NuGet, which provides the necessary OpenGL bindings:

Older versions of OpenGL used to provide a set of very convenient functions which handled matrices and primitive rendering. This "immediate mode" was easy to use but was slow, inefficient and not very flexible. So it has now been removed, unfortunately raising OpenGL's learning curve. We now have to use Vertex Buffer Objects (VBOs) and pass in matrices manually. Thankfully OpenTK handles the annoying bits, like matrix multiplication, for us.


The basic approach to drawing primitives is to explicitly specify the x, y and z positions of each vertex one after another, every time a vertex is needed for a primitive in the model. However in most 3D models there is a lot of redundancy, where each vertex may be used for several triangles.

The efficient way to handle this is to pass in all of the unique vertices up front, and then refer to them by index whenever they are needed.

The following code defines all the data for a single triangle. Of course this is only worthwhile once you are reusing vertices, which we are not doing in this demo yet.

Color4 col = new Color4(0, 191, 0, 127);

Vertex[] vertices =
{
    new Vertex(new Vector4(  0.0f,   0.0f, 0.0f, 1.0f), col),
    new Vertex(new Vector4(100.0f,   0.0f, 0.0f, 1.0f), col),
    new Vertex(new Vector4(100.0f, 100.0f, 0.0f, 1.0f), col),
};

uint[] indices = { 0, 1, 2 };
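Index reuse pays off as soon as triangles share corners. As a hypothetical extension of the data above, a 100x100 square needs only four vertices rather than six, with the shared corners referenced twice in the index array:

```csharp
// Four unique corners of a square (a hypothetical extension of the demo data).
Vertex[] quadVertices =
{
    new Vertex(new Vector4(  0.0f,   0.0f, 0.0f, 1.0f), col),   // 0: bottom-left
    new Vertex(new Vector4(100.0f,   0.0f, 0.0f, 1.0f), col),   // 1: bottom-right
    new Vertex(new Vector4(100.0f, 100.0f, 0.0f, 1.0f), col),   // 2: top-right
    new Vertex(new Vector4(  0.0f, 100.0f, 0.0f, 1.0f), col),   // 3: top-left
};

// Two triangles, six indices - vertices 0 and 2 are each used twice.
uint[] quadIndices = { 0, 1, 2,   2, 3, 0 };
```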

The actual instructions telling OpenGL to draw a set of indices boil down to binding the relevant VBOs followed by a single call to:

GL.DrawElements(PrimitiveType.Triangles, t.Indices.Length, DrawElementsType.UnsignedInt, 0);

Finally, once we have decided the frame is ready to render, we call Window.SwapBuffers(); to actually display the rendered frame on screen.
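Put together, a minimal per-frame render method might look like the following sketch. The names shaderProgram, t.VAO, t.Indices and Window are stand-ins for whatever your own classes expose:

```csharp
// Sketch of a per-frame render pass; t and Window are placeholders
// for your own triangle class and OpenTK GameWindow.
private void RenderFrame()
{
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    GL.UseProgram(shaderProgram);     // the compiled GLSL program
    GL.BindVertexArray(t.VAO);        // binds the vertex and index VBOs with it

    GL.DrawElements(PrimitiveType.Triangles, t.Indices.Length,
                    DrawElementsType.UnsignedInt, 0);

    Window.SwapBuffers();             // display the finished frame
}
```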


Vertex Buffer Objects (VBOs) contain any data which we need to pass directly into the GPU in lumps. This is optimised for the rendering of large models and complex 3D scenes. It is a much better alternative to giving OpenGL a drip feed of one primitive at a time, which you could do before with immediate mode.

In this demo, for each triangle we use two VBOs: One for the vertex data and one for the indices.

The first VBO actually contains both position and colour information interleaved in a single array: a position followed by a colour, followed by another position, and so on. OpenGL therefore needs to know exactly where each piece of data sits within this layout, which is what the vertex attribute calls below specify.

This line sets up the first attribute, which is used for positions. The 4 indicates that there are four elements in the vectors we are passing in: X, Y, Z, and W.

GL.VertexArrayAttribFormat(t.VAO, 0, 4, VertexAttribType.Float, false, 0);

This line sets up the second attribute, which is used for colours. Again there are 4 elements: floating point values for R, G, B and A. At the end we use the Marshal class in C# to get the in-memory size of a Vector4 (for the position), and then we tell OpenGL to access an element in the array offset by this size.

GL.VertexArrayAttribFormat(t.VAO, 1, 4, VertexAttribType.Float, false, Marshal.SizeOf<Vector4>());

Note the second parameter in each line, the attrib index, which is 0 and then 1. This refers to locations in the vertex shader, which we will come to soon.

We also set up a Vertex Array Object (VAO). This records which VBOs are bound and how their attributes are laid out, so it effectively stores all of the state required to draw a 3D object.
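As a sketch of how the VAO and its two buffers might be created using the OpenGL 4.5 direct-state-access functions (the field names on t are my own inventions):

```csharp
// Create the VAO and its two buffers with DSA calls (OpenGL 4.5).
GL.CreateVertexArrays(1, out t.VAO);
GL.CreateBuffers(1, out t.VertexBuffer);
GL.CreateBuffers(1, out t.IndexBuffer);

// Upload the interleaved vertex data and the indices.
int vertexSize = Marshal.SizeOf<Vertex>();
GL.NamedBufferStorage(t.VertexBuffer, vertexSize * vertices.Length,
                      vertices, BufferStorageFlags.MapWriteBit);
GL.NamedBufferStorage(t.IndexBuffer, sizeof(uint) * indices.Length,
                      indices, BufferStorageFlags.MapWriteBit);

// Attach both buffers to the VAO: the attribute source and the index buffer.
GL.VertexArrayVertexBuffer(t.VAO, 0, t.VertexBuffer, IntPtr.Zero, vertexSize);
GL.VertexArrayElementBuffer(t.VAO, t.IndexBuffer);

// Enable the two attributes and tie them both to binding point 0.
GL.EnableVertexArrayAttrib(t.VAO, 0);
GL.EnableVertexArrayAttrib(t.VAO, 1);
GL.VertexArrayAttribBinding(t.VAO, 0, 0);
GL.VertexArrayAttribBinding(t.VAO, 1, 0);
```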


Here's a quick run-down of matrices in 3D graphics: they are square arrays of real numbers representing 3D transformations, including translations, rotations and scaling.

For example, the following matrix will perform a rotation 40 degrees around the Z axis:
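Written out in the 4x4 homogeneous form used later on, this is the standard Z-rotation matrix with the angle set to 40 degrees:

```latex
R_z(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 & 0 \\
\sin\theta & \cos\theta  & 0 & 0 \\
0          & 0           & 1 & 0 \\
0          & 0           & 0 & 1
\end{pmatrix},
\qquad \theta = 40^\circ
```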

Provided they are of compatible dimensions, a matrix can be multiplied by a vector to transform that vector, and two matrices can be multiplied together to produce a new matrix which is equivalent to applying both in turn. The order in which two matrices are multiplied matters because it affects which order the transformations are applied in.

Common practice when rendering a 3D scene is to use three separate 4x4 matrices:

  • The model matrix handles the position and orientation of a 3D model.
  • The view matrix takes into account the position and orientation of your viewpoint, or camera.
  • The projection matrix converts 3D space into 2D screen space and makes further away objects look smaller.
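As a sketch, OpenTK's Matrix4 helpers can build all three. The positions and angles here are made up purely for illustration:

```csharp
// Model: place and orient the object in the world.
Matrix4 modelMatrix = Matrix4.CreateRotationZ(MathHelper.DegreesToRadians(40.0f));

// View: a camera at (0, 0, 500) looking at the origin, with Y as up.
Matrix4 viewMatrix = Matrix4.LookAt(
    new Vector3(0.0f, 0.0f, 500.0f),  // eye position
    Vector3.Zero,                     // target
    Vector3.UnitY);                   // up direction

// Projection: 60-degree vertical field of view, with near/far clip planes.
Matrix4 projMatrix = Matrix4.CreatePerspectiveFieldOfView(
    MathHelper.DegreesToRadians(60.0f),
    (float)Window.Width / Window.Height,  // aspect ratio
    0.1f, 1000.0f);
```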

Using the Matrix4 struct provided by OpenTK we multiply the three together:

Matrix4 mvp = modelMatrix * viewMatrix * projMatrix;

The resulting MVP matrix is capable of transforming any point in the 3D scene into screen coordinates.

We pass the mvp matrix directly into the shader with the following line:

GL.UniformMatrix4(MatrixShaderLocation, false, ref mvp);
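MatrixShaderLocation is simply the location of the uniform in the compiled program, which can be queried once after linking. As a sketch (the name "mvp" matches the uniform declared in the vertex shader):

```csharp
// Query the location of the "mvp" uniform once, after the program is linked.
int MatrixShaderLocation = GL.GetUniformLocation(shaderProgram, "mvp");

// Each frame, upload the combined matrix before drawing.
GL.UniformMatrix4(MatrixShaderLocation, false, ref mvp);
```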


We have to use a GLSL shader program to draw anything on the screen. GLSL code looks a lot like C, and is passed as a string directly into OpenGL.

Without further ado, here is what a basic vertex shader looks like:


#version 450 core

layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
uniform mat4 mvp = mat4(1.0);
out vec4 frag_color;

void main(void)
{
    gl_Position = mvp * position;
    frag_color = color;
}

The vertex shader is run for every vertex. You may have already spotted the vertex position and colour (locations 0 and 1) and the mvp matrix we just passed in, which is declared as a uniform variable.


#version 450 core
in vec4 frag_color;
out vec4 color;

void main(void)
{
    color = frag_color;
}

The output from the vertex shader (in this case frag_color) is interpolated to every point on the screen, roughly on a per-pixel basis. It then becomes the input for the fragment shader, which outputs colours which you eventually see on the screen.

This is a massive oversimplification of the rasterisation process though. I have personally written a software rasteriser which I can go through in another post.

In OpenGL though, as opposed to any software implementation, both the vertex and the fragment shaders are run massively in parallel because we are using the almighty threading capabilities of the GPU.


As a brief addendum, blending is pretty easy to set up using these few lines during initialisation:

GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);

Source code available here.