The screen origin is in the middle, X is on the right, as usual, and Y is up. This is something you can't change; it's built into your graphics card. So (-1,-1) is the bottom-left corner of your screen, (1,-1) is the bottom-right, and (0,1) is the middle of the top edge. So the triangle we are about to define, with corners at these three points, will take up most of the screen.
Screen Coordinates
A triangle is defined by three points. When talking about "points" in 3D graphics, we usually use the word "vertex" ("vertices" in the plural). A vertex has 3 coordinates: X, Y and Z. You can think about these three coordinates in the following way:
- X is on your right
- Y is up
- Z is towards your back (yes, behind, not in front of you)
But here is a better way to visualize this : use the Right Hand Rule
- X is your thumb
- Y is your index
- Z is your middle finger. If you put your thumb to the right and your index to the sky, it will point to your back, too.
Having Z in this direction is weird, so why is it so? Short answer: because 100 years of Right Hand Rule math will give you lots of useful tools. The only downside is an unintuitive Z.
So, to draw this triangle, we first declare its three vertices in an array:
// An array of 3 vectors which represents 3 vertices
static const GLfloat g_vertex_buffer_data[] = {
   -1.0f, -1.0f, 0.0f,
    1.0f, -1.0f, 0.0f,
    0.0f,  1.0f, 0.0f,
};
The next step is to give this triangle to OpenGL. We do this by creating a buffer:
// This will identify our vertex buffer
GLuint vertexbuffer;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexbuffer);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
Then, in your main loop, tell OpenGL how to interpret this buffer and draw the triangle:
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
If you still have doubts about this part, you can refer to the following piece (the same setup done in Swift with GLKit):
import GLKit

// One interleaved vertex: a position (x, y, z) followed by an RGBA colour
struct Vertex {
    var x: GLfloat
    var y: GLfloat
    var z: GLfloat
    var r: GLfloat
    var g: GLfloat
    var b: GLfloat
    var a: GLfloat
}

// Four corners of a quad, each with its own colour
var Vertices = [
    Vertex(x:  1, y: -1, z: 0, r: 1, g: 0, b: 0, a: 1),
    Vertex(x:  1, y:  1, z: 0, r: 0, g: 1, b: 0, a: 1),
    Vertex(x: -1, y:  1, z: 0, r: 0, g: 0, b: 1, a: 1),
    Vertex(x: -1, y: -1, z: 0, r: 0, g: 0, b: 0, a: 1),
]

// Two triangles that reuse the four vertices above
var Indices: [GLubyte] = [
    0, 1, 2,
    2, 3, 0
]

extension Array {
    // Total size of the array's contents in bytes, for glBufferData
    func size() -> Int {
        return MemoryLayout<Element>.stride * self.count
    }
}

private var ebo = GLuint() // index buffer object (would be filled with Indices via GL_ELEMENT_ARRAY_BUFFER)
private var vbo = GLuint()
private var vao = GLuint()

// Attribute index GLKit reserves for colours
let vertexAttribColor = GLuint(GLKVertexAttrib.color.rawValue)
// Attribute index GLKit reserves for positions
let vertexAttribPosition = GLuint(GLKVertexAttrib.position.rawValue)
// Size in bytes of one Vertex: the stride between consecutive vertices
let vertexSize = MemoryLayout<Vertex>.stride
// The colour components start right after the three position floats
let colorOffset = MemoryLayout<GLfloat>.stride * 3
// The same offset, expressed as the pointer glVertexAttribPointer expects
let colorOffsetPointer = UnsafeRawPointer(bitPattern: colorOffset)

// Create a vertex array object and make it current
glGenVertexArraysOES(1, &vao)
glBindVertexArrayOES(vao)

// Create a vertex buffer and upload the vertex data
glGenBuffers(1, &vbo)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), vbo)
glBufferData(GLenum(GL_ARRAY_BUFFER),   // target
             Vertices.size(),           // size of the data in bytes
             Vertices,                  // the data itself
             GLenum(GL_STATIC_DRAW))    // usage hint

// Describe the position attribute: 3 floats at the start of each Vertex
glEnableVertexAttribArray(vertexAttribPosition)
glVertexAttribPointer(vertexAttribPosition,       // attribute index
                      3,                          // number of components
                      GLenum(GL_FLOAT),           // component type
                      GLboolean(UInt8(GL_FALSE)), // not normalized
                      GLsizei(vertexSize),        // stride
                      nil)                        // offset: position comes first

// Describe the colour attribute: 4 floats starting at colorOffset
glEnableVertexAttribArray(vertexAttribColor)
glVertexAttribPointer(vertexAttribColor,
                      4,
                      GLenum(GL_FLOAT),
                      GLboolean(UInt8(GL_FALSE)),
                      GLsizei(vertexSize),
                      colorOffsetPointer)
Shader Compilation
In the simplest possible configuration, you will need two shaders:
- one called the Vertex Shader, which will be executed for each vertex, and one called the Fragment Shader, which will be executed for each sample. And since we use 4x antialiasing, we have 4 samples in each pixel.
- Shaders are programmed in a language called GLSL: GL Shader Language, which is part of OpenGL. Unlike C or Java, GLSL has to be compiled at run time, which means that each and every time you launch your application, all your shaders are recompiled.
- The extension is irrelevant; it could be .txt or .glsl.
Notice that, just as with buffers, shaders are not directly accessible: we just have an ID. The actual implementation is hidden inside the driver.
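To make that concrete, here is a minimal sketch of what this run-time compilation looks like in C++. The names vertexSource, fragmentSource, vertexShaderID, fragmentShaderID and programID are made up for the example, and error checking is omitted:

// Hypothetical GLSL sources; in practice you would read them from your shader files
const char* vertexSource =
    "#version 330 core\n"
    "layout(location = 0) in vec3 vertexPosition_modelspace;\n"
    "void main(){ gl_Position = vec4(vertexPosition_modelspace, 1.0); }";
const char* fragmentSource =
    "#version 330 core\n"
    "out vec3 color;\n"
    "void main(){ color = vec3(1, 0, 0); }";

// Compile each source into a shader object; all we ever get back is an ID
GLuint vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShaderID, 1, &vertexSource, NULL);
glCompileShader(vertexShaderID);

GLuint fragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShaderID, 1, &fragmentSource, NULL);
glCompileShader(fragmentShaderID);

// Link the two shaders into a program, which is what you bind with glUseProgram when drawing
GLuint programID = glCreateProgram();
glAttachShader(programID, vertexShaderID);
glAttachShader(programID, fragmentShaderID);
glLinkProgram(programID);
// (Real code would check glGetShaderiv / glGetProgramiv for compile and link errors.)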
Let's now look at the vertex shader itself, starting with its first line:
#version 330 core
The first line tells the compiler that we will use OpenGL 3’s syntax.
The second line declares the input data:
layout(location = 0) in vec3 vertexPosition_modelspace;
- “vec3” is a vector of 3 components in GLSL. It is similar in spirit to the three GLfloat values we used per vertex when declaring our triangle. The important thing is that if we use 3 components in C++, we use 3 components in GLSL too.
- “layout(location = 0)” refers to the buffer we use to feed the vertexPosition_modelspace attribute. Each vertex can have numerous attributes: a position, one or several colours, one or several texture coordinates, and lots of other things. OpenGL doesn't know what a colour is: it just sees a vec3. So we have to tell it which buffer corresponds to which input. We do that by setting the layout to the same value as the first parameter to glVertexAttribPointer. The value “0” is not important, it could be 12 (but no more than the limit reported by glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &v)); the important thing is that it's the same number on both sides.
- “vertexPosition_modelspace” could have any other name. It will contain the position of the vertex for each run of the vertex shader.
- “in” means that this is some input data. Soon we’ll see the “out” keyword.
The function that is called for each vertex is called main, just as in C :
Our main function will merely set the vertex's position to whatever was in the buffer. So if we gave (1,1), the triangle would have one of its vertices at the top right corner of the screen. We'll see in the next tutorial how to do some more interesting computations on the input position.
void main(){
gl_Position.xyz = vertexPosition_modelspace;
gl_Position.w = 1.0;
}
Now for the fragment shader:
#version 330 core
out vec3 color;
void main(){
color = vec3(1,0,0);
}
For our first fragment shader, we will do something really simple: set the color of each fragment to red. (Remember, there are 4 fragments per pixel because we use 4x AA.)
- Translation matrices
A translation matrix looks like this:

1 0 0 X
0 1 0 Y
0 0 1 Z
0 0 0 1

where X, Y, Z are the values that you want to add to your position.
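If you use GLM (as the later snippets in this section do), the same translation can be built and applied like this; myTranslationMatrix and myVector are made-up names for the sketch:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// A matrix that adds (10, 0, 0) to any position it multiplies
glm::mat4 myTranslationMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f));

// Positions use w == 1, so the translation column actually gets added
glm::vec4 myVector(10.0f, 10.0f, 10.0f, 1.0f);
glm::vec4 transformedVector = myTranslationMatrix * myVector; // (20, 10, 10, 1)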
- Scaling matrices
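A scaling matrix works the same way, with the per-axis scale factors on its diagonal. A quick GLM sketch (myScalingMatrix is again a made-up name):

// A matrix that scales every axis by 2
glm::mat4 myScalingMatrix = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f, 2.0f, 2.0f));
glm::vec4 scaledVector = myScalingMatrix * glm::vec4(1.0f, 2.0f, 3.0f, 1.0f); // (2, 4, 6, 1)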
- Cumulating transformations
TransformedVector = TranslationMatrix * RotationMatrix * ScaleMatrix * OriginalVector;
!!! BEWARE !!! This line actually performs the scaling FIRST, THEN the rotation, and THEN the translation. This is how matrix multiplication works.
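In GLM, cumulating them looks like this (a sketch reusing the made-up matrices from the snippets above, plus a 90° rotation around Y built with glm::rotate):

// A rotation to cumulate with the matrices above
glm::mat4 myRotationMatrix = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f));

// Read right to left: scale first, then rotate, then translate
glm::mat4 myModelMatrix = myTranslationMatrix * myRotationMatrix * myScalingMatrix;
glm::vec4 myTransformedVector = myModelMatrix * glm::vec4(1.0f, 2.0f, 3.0f, 1.0f);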
The Model, View and Projection matrices
The Model, View and Projection matrices are a handy tool to separate transformations cleanly.
The Model matrix
A model, just like our beloved red triangle, is defined by a set of vertices. The X, Y, Z coordinates of these vertices are defined relative to the object's center: that is, if a vertex is at (0,0,0), it is at the center of the object.
We'd like to be able to move this model, maybe because the player controls it with the keyboard and the mouse. Easy: you just learnt how to do so: translation * rotation * scale, and done. You apply this matrix to all your vertices at each frame (in GLSL, not in C++!) and everything moves.
Something that doesn’t move will be at the center of the world.
The View matrix
To quote Futurama:
The engines don’t move the ship at all. The ship stays where it is and the engines move the universe around it.
When you think about it, the same applies to cameras. If you want to view a mountain from another angle, you can either move the camera… or move the mountain. While not practical in real life, this is really simple and handy in Computer Graphics.
So initially your camera is at the origin of World Space. In order to move the world, you simply introduce another matrix. Let's say you want to move your camera 3 units to the right (+X). This is equivalent to moving your whole world (meshes included) 3 units to the LEFT! (-X).
While your brain melts, let's do it:
// Use #include <glm/gtc/matrix_transform.hpp> and #include <glm/gtx/transform.hpp>
glm::mat4 ViewMatrix = glm::translate(glm::mat4(), glm::vec3(-3.0f, 0.0f ,0.0f));
In practice you will usually build the View matrix with GLM's glm::lookAt, which takes a camera position, a target point and an up vector:
glm::mat4 CameraMatrix = glm::lookAt(
cameraPosition, // the position of your camera, in world space
cameraTarget, // where you want to look at, in world space
upVector // probably glm::vec3(0,1,0), but (0,-1,0) would make you look upside-down, which can be great too
);
The Projection matrix
We're now in Camera Space. This means that after all these transformations, a vertex that happens to have x==0 and y==0 should be rendered at the center of the screen. But we can't use only the x and y coordinates to determine where an object should be put on the screen: its distance to the camera (z) counts, too! For two vertices with similar x and y coordinates, the vertex with the biggest z coordinate will be closer to the center of the screen than the other.
Luckily, this perspective projection can be represented by yet another 4x4 matrix, which GLM generates for us:
// Generates a really hard-to-read matrix, but a normal, standard 4x4 matrix nonetheless
glm::mat4 projectionMatrix = glm::perspective(
glm::radians(FoV), // The vertical Field of View, in radians: the amount of "zoom". Think "camera lens". Usually between 90° (extra wide) and 30° (quite zoomed in)
4.0f / 3.0f, // Aspect Ratio. Depends on the size of your window. Notice that 4/3 == 800/600 == 1280/960, sounds familiar ?
0.1f, // Near clipping plane. Keep as big as possible, or you'll get precision issues.
100.0f // Far clipping plane. Keep as little as possible.
);
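Finally, the three matrices are combined into one ModelViewProjection matrix and handed to the vertex shader as a uniform. Here is a hedged sketch, assuming the shader declares a "uniform mat4 MVP" and reusing programID from the compilation sketch earlier (or whatever your linked program is called):

// Model matrix: identity, so the model sits at the origin of world space
glm::mat4 ModelMatrix = glm::mat4(1.0f);

// Remember: read right to left (model -> world -> camera -> screen)
glm::mat4 MVP = projectionMatrix * CameraMatrix * ModelMatrix;

// Upload it to the shader program currently in use
GLint MatrixID = glGetUniformLocation(programID, "MVP"); // "MVP" must match the uniform name in GLSL
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);

// In the vertex shader, each vertex would then be transformed with:
//   gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);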