In this lesson:

– working with the camera

Let’s recap the material of the last lesson.

1) Our screen is two-dimensional, so the system uses only the x and y coordinates to display images. The z coordinate determines which point to draw when several points share the same x and y; by itself it does not add three-dimensionality. To produce a three-dimensional look, w is used, with which the system emulates perspective.

2) In our application we want to describe the three-dimensional world and use the three-dimensional XYZ coordinate system for this purpose.

3) The perspective matrix lets us implement the wish from point 2. It converts virtual 3D coordinates to two-dimensional ones (point 1) so that the system can draw an image that looks three-dimensional.

4) The visible part of the 3D scene is bounded on all sides by a figure called the frustum: a prism or, in our case, a truncated pyramid. The camera, that is, the point from which we “look” at the image, sits at the apex of the pyramid, and its gaze passes right through the frustum.

5) By default, the camera is at a point (0,0,0) and looks down the z axis.
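As a small aside on item 1, the role of w can be sketched in a few lines of plain Java: after the vertex shader runs, the system divides x, y and z by w (the “perspective divide”), so a larger w pulls a point toward the center of the screen, which is what makes distant objects look smaller. This is an illustration I wrote, not actual GL code:

```java
// Illustrative sketch of the perspective divide (class name is mine).
// Clip-space coordinates (x, y, z, w) become normalized device
// coordinates (x/w, y/w, z/w); points with a larger w end up closer
// to the screen center, i.e. they look smaller / farther away.
class PerspectiveDivide {
    static float[] toNdc(float x, float y, float z, float w) {
        return new float[]{x / w, y / w, z / w};
    }
}
```

For example, the same clip-space x = 2 lands at NDC x = 1 when w = 2, but only at x = 0.5 when w = 4.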

In this lesson, we will discuss item 5, that is, how we can change the position and direction of the camera.

If you have a complex 3D scene with many small objects and you want the user, for example, to be able to view it from all sides, then instead of rotating all the objects in front of the camera we can simply move the camera itself. The system will, of course, still recompute all the points of all the objects, but that causes us no extra difficulty: we simply specify the position and direction of the camera.

Two points and a vector are used to set the direction and position of the camera.

The first point is the position of the camera. The default is the point with coordinates (0,0,0). Here we can specify any point and, in this way, move the camera anywhere.

The second point specifies the direction of the camera, that is, the point the camera “looks” at. Here, too, we can specify any point and the camera will point at it.

That leaves the vector. It is not as easy to explain as the two points. Imagine that your camera has an antenna, like a radio or an old cell phone. The antenna points upward relative to the camera. The direction indicated by the antenna is a vector. To distinguish it from the general concept of a vector, I will call it the up-vector.

To understand this better, imagine that you pick up the camera and put it in the position set by the first point. Then you point it at the second point. That’s it: your camera is locked. You can no longer move it or turn it in another direction without leaving the first and second points. The only thing you can still do without disturbing either the position or the direction is rotate the camera clockwise or counterclockwise around its axis of view. The camera stays at the first point and keeps looking at the second; only the final picture rotates clockwise or counterclockwise. And it is this rotation that is governed by the vector which, like the antenna, always points up relative to the camera.

As a result, the two points and the vector uniquely define the position and orientation of the camera in space.
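To make this concrete, here is a minimal plain-Java sketch (my own class and method names, no Android dependencies) of the vector math a lookAt-style function performs: from the position point, the direction point and the up-vector it derives three mutually perpendicular camera axes.

```java
// Sketch of the axis construction behind a lookAt-style view matrix.
class LookAtSketch {
    // forward: from the camera position toward the target point
    static double[] forward(double[] eye, double[] center) {
        return normalize(new double[]{
                center[0] - eye[0], center[1] - eye[1], center[2] - eye[2]});
    }
    // the camera's "right" axis: perpendicular to forward and to the up-vector
    static double[] side(double[] f, double[] up) {
        return normalize(cross(f, up));
    }
    // the camera's true "up" axis: perpendicular to side and forward
    static double[] trueUp(double[] s, double[] f) {
        return cross(s, f);
    }
    static double[] cross(double[] a, double[] b) {
        return new double[]{
                a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]};
    }
    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    static double[] normalize(double[] v) {
        double len = Math.sqrt(dot(v, v));
        return new double[]{v[0] / len, v[1] / len, v[2] / len};
    }
}
```

Note that the up-vector you pass in only needs to roughly indicate “up”: the cross products turn it into an axis that is exactly perpendicular to the viewing direction.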

Let’s move on to practice. We will look at some examples where we will change both points and the vector and see what this leads to.

Download the source code and open the module **lesson173_view**

As always, look at the class **OpenGLRenderer**.

Note that we have three matrices:

– **mProjectionMatrix** – this matrix is already familiar to us from the last lesson. It is responsible for creating the three-dimensional world.

– **mViewMatrix** – this matrix will contain the position and direction of the camera. In this lesson we will work mainly with it.

– **mMatrix** – the final matrix, obtained by multiplying mProjectionMatrix and mViewMatrix. As a result, mMatrix will contain both the perspective and the camera data. We will pass this combined matrix to the shader; the shader will run the points of all our objects through it, and as a result we will get a three-dimensional picture that looks the way we set up the camera.

In the method **prepareData** we have an array **vertices**. It describes several primitives:

– four identical **triangles** located around the Y axis. The base of each triangle is 4s, the height is 2s, and the distance from the Y axis is d. By changing these parameters you can resize and reposition the triangles in this example if you ever need to.

– three **lines** indicating the axes X, Y and Z. The length of each line is 2l.

That is, in three dimensions it will look something like this:

Next, look at the method **createViewMatrix**. It is the key method in this lesson. It creates the matrix that contains the camera data. Recall that this data consists of two points and a vector. As you can see, we specify them here:

eyeX, eyeY, eyeZ – coordinates of the camera’s **position** point, that is, where the camera is

centerX, centerY, centerZ – coordinates of the camera’s **direction** point, that is, where the camera looks

upX, upY, upZ – coordinates of the **up-vector**, that is, the vector that lets you rotate the camera around the axis of “view”

All these parameters go to the setLookAtM method, which fills the mViewMatrix matrix for us.

In the method **bindMatrix** we multiply the matrices mProjectionMatrix and mViewMatrix. The result will be placed in the mMatrix matrix. And we pass this matrix to the shader using the glUniformMatrix4fv method.
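For illustration, here is a plain-Java sketch of what such a multiplication does. Matrix.multiplyMM computes result = lhs × rhs for 4x4 matrices stored column-major in float[16]; the class below is my own stand-in, written so it runs outside Android:

```java
// Sketch of a column-major 4x4 matrix multiply, result = lhs * rhs.
class MatMul {
    static float[] multiplyMM(float[] lhs, float[] rhs) {
        float[] result = new float[16];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                float sum = 0;
                for (int k = 0; k < 4; k++) {
                    // column-major: element (row, col) lives at col * 4 + row
                    sum += lhs[k * 4 + row] * rhs[col * 4 + k];
                }
                result[col * 4 + row] = sum;
            }
        }
        return result;
    }
    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1;
        return m;
    }
}
```

The order matters: projection × view applies the camera transform first and the perspective second, which is exactly what we want.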

All the other code is familiar from past lessons, so I will not dwell on it. Let’s run the example instead and see what it shows us.

See what we set there in the createViewMatrix method.

The position of the camera is at the point (0,0,3), ie the camera is on the Z axis.

Direction – to the point (0,0,0), that is, the camera looks at the center of our coordinate system.

up-vector – (0,1,0), that is, it is directed upwards along the Y axis.

The resulting image matches the given parameters. We do not see the far (blue) triangle because it is completely covered by the near (green) triangle. The green triangle also hides the point of intersection of the axes, because that point is behind it. In general, the Z-buffer works; everything is OK.

The blue line is the X axis, the purple line is the Y axis.

### position

Let’s change the camera settings, in the createViewMatrix method we change:

eyeZ = 2

That is, move the camera a little closer to the point (0,0,0)

Run

The green triangle is gone. Why? Its coordinate on the Z axis is 0.9. The camera on the Z axis is now at 2 and looks toward 0, so it would seem that it should see the point at 0.9.

The cause is in the **frustum** parameters, which we specify in the method **createProjectionMatrix**. We set the near boundary to 2, which means the camera only begins to “see” objects that are at least 2 away from it. The distance between the camera and the green triangle is now 2 – 0.9 = 1.1, which is less than 2, so the camera does not see it.

For comparison, when eyeZ was 3, the camera was at point 3 on the Z axis, and the distance between it and the green triangle was 3 – 0.9 = 2.1. That is, the green triangle was inside the frustum and the camera saw it.

Let’s change the camera settings:

eyeZ = 9

Nothing is visible on the screen. Again, look at the frustum parameters: the far boundary is 8. That is, the camera does not see anything farther than 8 away.

In our example the camera is at point 9 on the Z axis, looking along the Z axis at the point (0,0,0), so the farthest it can see is 9 – 8 = 1 on the Z axis. The triangle closest to the camera is at 0.9, so the camera does not see it. The other triangles are farther still, so they are not visible either.

Let’s move the camera a little closer

eyeZ = 7

The triangles fall into the frustum area and the camera sees them.
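The near/far reasoning from the examples above can be captured in a small sketch I wrote for illustration (simplified: the camera sits on the Z axis looking at the origin, and only the depth range is checked, ignoring the frustum’s side planes):

```java
// Simplified visibility check for a camera on the Z axis looking at
// the origin: an object at z = objZ is at distance eyeZ - objZ from
// the camera, and only distances within [near, far] are inside the
// frustum's depth range. Side planes are deliberately ignored.
class FrustumCheck {
    static boolean inDepthRange(float eyeZ, float objZ, float near, float far) {
        float distance = eyeZ - objZ;
        return distance >= near && distance <= far;
    }
}
```

With near = 2 and far = 8 this reproduces the results above: the green triangle at z = 0.9 is invisible for eyeZ = 2 and eyeZ = 9, but visible for eyeZ = 3 and eyeZ = 7.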

Now the camera is on one side of the Z axis; let’s move it to the other side.

eyeZ = -4

You can see that the camera now looks at the triangles from the other side. We moved the camera to the other side along the Z axis, but it still looks at the point (0,0,0) – we didn’t change anything there. And the up-vector remained the same.

Move the camera to the X axis

eyeX = 3

eyeY = 0

eyeZ = 0

The red triangle is now the closest one. And the Z axis has become visible; it is orange.

Place the camera between the X and Z axes

eyeX = 2

eyeY = 0

eyeZ = 4

Lift the camera along the Y axis

eyeX = 2

eyeY = 3

eyeZ = 4

We look at the triangles from above. All three axes and their point of intersection are clearly visible.

Lower the camera

eyeX = 2

eyeY = -2

eyeZ = 4

Now the camera looks from below

### direction

We have tested the camera’s position; now let’s try changing its direction.

Put the camera back on the Z axis

eyeX = 0

eyeY = 0

eyeZ = 4

The camera looks at the point (0,0,0). Let’s change that:

centerX = 1;

That is, change the direction of the camera slightly to the right along the X axis.

Now to the left

centerX = -1;

Above

centerY = 2;

Down

centerY = -3;

I did not change centerZ; try changing it yourself to see that the camera’s direction changes even when X and Y stay the same.

### up-vector

It remains to consider the vector that sets the camera’s rotation.

Reset camera direction to (0,0,0)

centerX = 0;

centerY = 0;

centerZ = 0;

The vector is currently (0,1,0), that is, the camera is rotated so that its up-vector points up along the Y axis.

Let’s turn the camera slightly to the right

upX = 1;

That is, the up-vector is now (1,1,0): it no longer points up along the Y axis but up and to the right, between the Y and X axes.

You can also tilt less strongly toward the X axis:

upX = 0.2f;

And you can rotate a full 90 degrees, so that the up-vector points in the same direction as the X axis:

upX = 1;

upY = 0;

Continue the turn and point the up-vector down along the Y axis:

upX = 0;

upY = -1;

The camera has turned upside down.

### animation

Let’s add a little movement to our still image. We continue to perform all actions in the OpenGLRenderer class.

We add a constant:

private final static long TIME = 10000;

We change the onDrawFrame method, adding calls to the createViewMatrix and bindMatrix methods at its beginning:

```java
@Override
public void onDrawFrame(GL10 arg0) {
    createViewMatrix();
    bindMatrix();
    ...
}
```

We need this so that the view matrix is rebuilt every frame and passed to the shader.

Rewrite the createViewMatrix method

```java
private void createViewMatrix() {
    float time = (float) (SystemClock.uptimeMillis() % TIME) / TIME;
    float angle = time * 2 * 3.1415926f;

    // camera position point
    float eyeX = (float) (Math.cos(angle) * 4f);
    float eyeY = 1f;
    float eyeZ = 4f;

    // camera direction point
    float centerX = 0;
    float centerY = 0;
    float centerZ = 0;

    // up-vector
    float upX = 0;
    float upY = 1;
    float upZ = 0;

    Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ,
            centerX, centerY, centerZ, upX, upY, upZ);
}
```

This method will now be called every frame. We use this to create animation. To do this, we perform several calculations:

1) Using the current time (SystemClock.uptimeMillis) and the TIME constant, we compute a float value from 0 to 1 in the time variable. That is, the time variable grows from 0 to 1, then resets to 0 and grows to 1 again, and so on.

2) Multiplying this value by 2 and by the number Pi, we get a range of values from 0 to 2 * Pi, which is an angle expressed in radians.

3) Passing the resulting angle to cos or sin, we get values in the range from -1 to 1. That is, the value smoothly oscillates between -1 and 1, back and forth. If this is not clear to you, it makes sense to brush up on basic trigonometry.

4) It remains to multiply the result by, for example, 4, and we get a value that runs between -4 and 4.

We put this value into eyeX, so the camera will move along the X axis from -4 to 4.
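The four steps above can be collected into a small self-contained snippet (SystemClock.uptimeMillis() is replaced by an ordinary long parameter so the code runs outside Android; the class name is mine):

```java
// The animation math from steps 1-4, written so it runs as plain Java.
class CameraAnimation {
    static final long TIME = 10000;

    static float eyeX(long uptimeMillis) {
        // 1) fraction of the current TIME period, in [0, 1)
        float time = (float) (uptimeMillis % TIME) / TIME;
        // 2) angle in radians, in [0, 2 * Pi)
        float angle = time * 2 * 3.1415926f;
        // 3) + 4) cosine gives [-1, 1], scaled to [-4, 4]
        return (float) (Math.cos(angle) * 4f);
    }
}
```

At the start of a period eyeX is 4, half a period later it is -4, and a quarter of the way through it passes near 0.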

Run it, and you get roughly the following result

If we apply the cosine to eyeY instead, we get a camera that moves up and down

```java
eyeX = 2f;
eyeY = (float) (Math.cos(angle) * 3f);
eyeZ = 4f;
```

Having done so:

```java
eyeX = (float) (Math.cos(angle) * 4f);
eyeY = 1f;
eyeZ = (float) (Math.sin(angle) * 4f);
```

we get a camera that rotates around the triangles.
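A quick way to see why this combination produces an orbit: for any angle, the point (eyeX, eyeZ) = (4·cos(angle), 4·sin(angle)) lies on a circle of radius 4 around the Y axis, so the camera circles the triangles at a constant distance while always looking at (0,0,0). A small sketch (names are mine):

```java
// The orbiting eye position and a check of its distance from the Y axis.
class OrbitSketch {
    static float[] eyePosition(float angle) {
        float eyeX = (float) (Math.cos(angle) * 4f);
        float eyeY = 1f;
        float eyeZ = (float) (Math.sin(angle) * 4f);
        return new float[]{eyeX, eyeY, eyeZ};
    }
    // distance from the Y axis in the XZ plane
    static float radiusXZ(float[] eye) {
        return (float) Math.sqrt(eye[0] * eye[0] + eye[2] * eye[2]);
    }
}
```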

Accordingly, you can experiment, applying these values to different coordinates to get different camera trajectories, directions and rotations.

### A small addition to the up-vector

Let’s go back to one of the examples we considered, where we positioned the camera this way:

```java
eyeX = 2
eyeY = 3
eyeZ = 4

centerX = 0
centerY = 0
centerZ = 0

upX = 0
upY = 1
upZ = 0
```

And we got this result

If you picture this three-dimensional scene, it is obvious that from its position the camera looks slightly downward: it is at a height of 3 (along the Y axis) and looks at a point at height 0. So the camera must be tilted slightly downward.

But our up-vector is (0,1,0), that is, strictly parallel to the Y axis. Now recall the description of the vector: it should point strictly up relative to the camera. But our camera is tilted relative to the Y axis, so the camera’s up-vector cannot actually be parallel to the Y axis – it must be tilted as well.

We get a contradiction. We set the up-vector in the code, but in reality it cannot be this way. The reason for the contradiction is the slightly simplified description of the up-vector I gave: I explained it as simply as possible so as not to overcomplicate things at the start. Now we need to absorb a bit more information. I don’t know how to explain it all in strict geometric terms, so I will describe in my own words the simplest way to understand the up-vector.

So, we have the point at which the camera looks. From this point we draw the up-vector – not from the camera, but from the point the camera looks at. Depending on the direction of this vector, the camera (which looks at this point) rotates so that in the final two-dimensional image this vector points up.

That is, in 3D the vector can be tilted toward or away from the camera, but in the final two-dimensional image it will always point upwards.

Just in case, a reminder from geometry: if we draw the vector (a, b, c) from the point (x, y, z), we get a directed segment between the points (x, y, z) and (x + a, y + b, z + c).

In our example the camera looks at the point (0,0,0) and the up-vector is (0,1,0). If we draw the vector from this point, we get the segment between (0,0,0) and (0,1,0). Clearly, it coincides in direction with the Y axis. The camera must therefore be oriented so that this segment (and hence the Y axis) points up in the image. And in the final picture we see that the Y axis really does point up.
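The point-plus-vector construction can be written as a trivial helper (made up for illustration); it is exactly how the white up-vector segment is built in prepareData further below.

```java
// End point of a segment obtained by drawing vector (a, b, c)
// from point (x, y, z).
class VectorSegment {
    static float[] endPoint(float x, float y, float z,
                            float a, float b, float c) {
        return new float[]{x + a, y + b, z + c};
    }
}
```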

Words are difficult here, so let’s look at an example. We will change the code so that the up-vector, drawn from the point the camera looks at, is always displayed on screen. We’ll make it white.

I will give the code of the whole class, so as not to describe the changes one by one.

```java
public class OpenGLRenderer implements Renderer {

    private final static int POSITION_COUNT = 3;

    private Context context;

    private FloatBuffer vertexData;

    private int uColorLocation;
    private int aPositionLocation;
    private int uMatrixLocation;

    private int programId;

    private float[] mProjectionMatrix = new float[16];
    private float[] mViewMatrix = new float[16];
    private float[] mMatrix = new float[16];

    float centerX;
    float centerY;
    float centerZ;

    float upX;
    float upY;
    float upZ;

    public OpenGLRenderer(Context context) {
        this.context = context;
    }

    @Override
    public void onSurfaceCreated(GL10 arg0, EGLConfig arg1) {
        glClearColor(0f, 0f, 0f, 1f);
        glEnable(GL_DEPTH_TEST);
        int vertexShaderId = ShaderUtils.createShader(context, GL_VERTEX_SHADER, R.raw.vertex_shader);
        int fragmentShaderId = ShaderUtils.createShader(context, GL_FRAGMENT_SHADER, R.raw.fragment_shader);
        programId = ShaderUtils.createProgram(vertexShaderId, fragmentShaderId);
        glUseProgram(programId);
        createViewMatrix();
        prepareData();
        bindData();
    }

    @Override
    public void onSurfaceChanged(GL10 arg0, int width, int height) {
        glViewport(0, 0, width, height);
        createProjectionMatrix(width, height);
        bindMatrix();
    }

    private void prepareData() {

        float s = 0.4f;
        float d = 0.9f;
        float l = 3;

        float[] vertices = {
                // first triangle
                -2 * s, -s, d,
                2 * s, -s, d,
                0, s, d,

                // second triangle
                -2 * s, -s, -d,
                2 * s, -s, -d,
                0, s, -d,

                // third triangle
                d, -s, -2 * s,
                d, -s, 2 * s,
                d, s, 0,

                // fourth triangle
                -d, -s, -2 * s,
                -d, -s, 2 * s,
                -d, s, 0,

                // X axis
                -l, 0, 0,
                l, 0, 0,

                // Y axis
                0, -l, 0,
                0, l, 0,

                // Z axis
                0, 0, -l,
                0, 0, l,

                // up-vector
                centerX, centerY, centerZ,
                centerX + upX, centerY + upY, centerZ + upZ,
        };

        vertexData = ByteBuffer
                .allocateDirect(vertices.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertexData.put(vertices);
    }

    private void bindData() {
        // coordinates
        aPositionLocation = glGetAttribLocation(programId, "a_Position");
        vertexData.position(0);
        glVertexAttribPointer(aPositionLocation, POSITION_COUNT, GL_FLOAT,
                false, 0, vertexData);
        glEnableVertexAttribArray(aPositionLocation);

        // color
        uColorLocation = glGetUniformLocation(programId, "u_Color");

        // matrix
        uMatrixLocation = glGetUniformLocation(programId, "u_Matrix");
    }

    private void createProjectionMatrix(int width, int height) {
        float ratio = 1;
        float left = -1;
        float right = 1;
        float bottom = -1;
        float top = 1;
        float near = 2;
        float far = 8;
        if (width > height) {
            ratio = (float) width / height;
            left *= ratio;
            right *= ratio;
        } else {
            ratio = (float) height / width;
            bottom *= ratio;
            top *= ratio;
        }

        Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
    }

    private void createViewMatrix() {
        // camera position point
        float eyeX = 2;
        float eyeY = 3;
        float eyeZ = 4;

        // camera direction point
        centerX = 0;
        centerY = 0;
        centerZ = 0;

        // up-vector
        upX = 0;
        upY = 1;
        upZ = 0;

        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ,
                centerX, centerY, centerZ, upX, upY, upZ);
    }

    private void bindMatrix() {
        Matrix.multiplyMM(mMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
        glUniformMatrix4fv(uMatrixLocation, 1, false, mMatrix, 0);
    }

    @Override
    public void onDrawFrame(GL10 arg0) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // triangles
        glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glUniform4f(uColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 3, 3);

        glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 6, 3);

        glUniform4f(uColorLocation, 1.0f, 1.0f, 0.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 9, 3);

        // axes
        glLineWidth(1);
        glUniform4f(uColorLocation, 0.0f, 1.0f, 1.0f, 1.0f);
        glDrawArrays(GL_LINES, 12, 2);
        glUniform4f(uColorLocation, 1.0f, 0.0f, 1.0f, 1.0f);
        glDrawArrays(GL_LINES, 14, 2);
        glUniform4f(uColorLocation, 1.0f, 0.5f, 0.0f, 1.0f);
        glDrawArrays(GL_LINES, 16, 2);

        // up-vector
        glLineWidth(3);
        glUniform4f(uColorLocation, 1.0f, 1.0f, 1.0f, 1.0f);
        glDrawArrays(GL_LINES, 18, 2);
    }
}
```

The changes are quite small. The variables that hold the camera direction and the up-vector data were moved from local variables in createViewMatrix to class fields. They are used in the prepareData method to set the coordinates of the segment that shows the up-vector drawn from the point the camera looks at. And the onDrawFrame method now also draws this segment on screen.

Run

We see a white segment showing the up-vector. You can set any camera position point, direction point and up-vector, and in the resulting image the up-vector will always point upwards.

A couple more example results

I hope that after this the up-vector has become clear to you.