Lesson 180. ConstraintLayout. Foundations


Android Studio suggests ConstraintLayout by default when you create a screen layout. Let's see what kind of thing it is and how to work with it.

Although this is lesson 180, it is aimed at beginners, because the very first lessons will link to it. So I ask experienced developers not to be surprised by the style of presentation; you can skip the beginning.

A little theory

Let's start with the very basics. To place components (buttons, input fields, checkboxes, etc.) on the screen, you need a special container that will hold them. In Android, components are called Views, and the container is a ViewGroup.

There are several types of ViewGroup: LinearLayout, RelativeLayout, FrameLayout, TableLayout, ConstraintLayout, etc.

They differ in how they arrange the components within themselves. LinearLayout, for example, builds them horizontally or vertically. And TableLayout – as a table. You can read more about this in Lesson 6.

In this lesson, we will understand how the components in the ConstraintLayout container will behave.

The word Constraint usually translates as restriction or limitation, but to me that does not accurately convey the idea. The word that fits best here is binding, and that is the term I will use.

Practice

To help you practice yourself, I recommend creating a module for this lesson. We already talked about how to create a module in Lesson 3.

In the Studio menu: File > New > New Module

Application / Library name: ConstraintLayoutIntro
Module name: p0180constraintlayoutintro
Package name: ru.startandroid.p0180constraintlayoutintro

So, we have a module in Android Studio, and it contains the file res > layout > activity_main.xml.

We will open this file with a double click. It looks like this

Make sure the Design tab is selected at the bottom left, and that the view mode is Design, not Blueprint.

You now see Hello World text on the screen. This text is displayed using a View called TextView.

You can see it in the Component Tree (bottom left).

Note that the TextView is nested inside the ConstraintLayout. This is what I was talking about at the beginning: ConstraintLayout is a container, and inside it there are various Views, in our case a TextView. You can also say that ConstraintLayout is the parent (the parent ViewGroup) of the TextView.

Let’s remove TextView from the screen. To do this simply select it on the screen or in the Component Tree and press the Del button on the keyboard.

Now ConstraintLayout is blank and the screen does not display anything.

If you suddenly deleted something by mistake, you can always restore it by pressing Ctrl + Z.

And if you did something wrong there and cannot undo it, open the Text tab (bottom left) and paste this code there:
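The snippet itself did not survive the page export. A reasonable reconstruction of the default empty layout looks roughly like this (assuming the pre-AndroidX constraint-layout artifact used at the time of writing; with AndroidX the class is androidx.constraintlayout.widget.ConstraintLayout, and the tools:context value depends on your activity name):

<?xml version="1.0" encoding="utf-8"?>
<!-- A bare ConstraintLayout with no children: the screen shows nothing. -->
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="ru.startandroid.p0180constraintlayoutintro.MainActivity">

</android.support.constraint.ConstraintLayout>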

Your screen will return to its original state.

Why bindings are needed

Let’s add some component to the screen, for example, TextView again. To do this, simply drag the component with the mouse from the Palette to the screen.

TextView then appeared on the screen and in Component Tree.

Let’s launch the application and see what this text will look like.

We see that TextView has gone left and up. Something clearly went wrong.

If you open a text view of your screen (the Text tab on the bottom left), you will see that the TextView element is highlighted with a red line.

Hovering over it will show an error:
This view is not constrained, it only has designtime positions, so it will jump to (0,0) unless you add constraints.

The Studio is telling us that the View has no bindings. Its current position on the screen is only relevant at design time (i.e. only in the Studio). When the application runs, this position is ignored and the View goes to the point (0,0), that is, to the top left (which is exactly what we observed at startup).

How do we make a View in ConstraintLayout stay in place and not fly off into a corner? We must add bindings (constraints). They define the View's position on the screen relative to other elements or to its parent.

How to Add Bindings

Let’s add bindings to our TextView.

If you select the TextView on the screen, you can see 4 circles on its sides.

These circles are used to create bindings.

There are two types of bindings: one puts the View horizontally and the other vertically.

Let’s create a horizontal binding. We tie the TextView position to the left edge of its parent. Let me remind you that the parent of TextView is ConstraintLayout, which in our case occupies the entire screen. Therefore, the edges of ConstraintLayout coincide with the edges of the screen.

To create a binding, click the TextView to select it, then grab the left circle with the left mouse button and drag it to the left edge of the screen.

The TextView moved to the left as well. It is now bound to its parent's left edge.

But they do not have to sit right next to each other; we can add an indent. To do this, simply grab the TextView with the left mouse button, drag it to the right, and release.

Note the number that is changing. This is the value of TextView indentation from the object to which it is attached (in our case, from the parent’s left border).

Let's launch the application.

Previously the TextView jumped to the top left; now it only jumped to the top. It did not go to the left, because we created a horizontal binding for it, so the TextView now knows that horizontally it must sit at a certain indent from the left edge.

Let's create a vertical binding to fix the TextView vertically.

We take the top circle and drag it to the top edge. The TextView is now bound vertically to the parent's top edge. You can then drag the TextView wherever you need to adjust the horizontal and vertical indents; while dragging you can see the indent values.

The TextView is now bound both horizontally and vertically, so it knows exactly where it should be on the screen when the application is running.
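In XML these two bindings look roughly like this (a sketch placed inside the ConstraintLayout from the layout file above; the id and indent values are illustrative, the constraint attribute names are the real ones):

<TextView
    android:id="@+id/textView"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Hello World!"
    android:layout_marginLeft="60dp"
    android:layout_marginTop="40dp"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintTop_toTopOf="parent" />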

Let's run it to check.

The TextView no longer jumps anywhere; it is exactly where we placed it using the bindings.

Let's add another View, for example a Button.

If you launch the application now, the button will jump to the top left because it is not bound to anything.

We can bind not only to the parent's edges, but also to other Views. Let's bind the button to the TextView.

We bind the button to the TextView by creating two bindings:

1) A vertical binding. The top edge of the button is bound to the bottom edge of the TextView, with an indent of 82.
That is, on the vertical axis:
top edge of button = bottom edge of TextView + 82

2) A horizontal binding. The left edge of the button is bound to the right edge of the TextView, with an indent of 103.
On the horizontal axis:
left edge of button = right edge of TextView + 103
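For reference, the same two bindings in XML might look like this (a sketch; the textView id and the indents are illustrative):

<Button
    android:id="@+id/button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Button"
    android:layout_marginTop="82dp"
    android:layout_marginLeft="103dp"
    app:layout_constraintTop_toBottomOf="@+id/textView"
    app:layout_constraintLeft_toRightOf="@+id/textView" />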

Since the button is tied to TextView, if we now move TextView, then the button will also move.

Let's add one more View, for example a CheckBox.

Let's align it horizontally with the TextView. To do this, we bind the left edge of the CheckBox to the left edge of the TextView with a zero indent, and vertically we bind it to the parent's bottom edge.

Now the checkbox and TextView are left-aligned.

How to remove a binding

To remove a binding, just click the corresponding circle. Let's remove the button's bindings.

There is also a special button for removing all bindings at once.

Binding on both sides

So far we have looked at examples where a View was bound on only one side of each axis: only left or right horizontally, and only top or bottom vertically. But we can bind a View on both sides of an axis.

For now let's look only at the horizontal case; of course, all of this works for vertical bindings as well.

For example, let's bind a View's left edge to the parent's left border and its right edge to the parent's right border.

Let's clear the screen of all Views and add a new TextView with no bindings. Now we bind it to the parent's left and right edges.

At first the TextView moved to the left, because it was bound to the left edge, but after we created the binding to the right edge it centered itself. The two bindings balance each other, and the View ends up exactly in the middle between the object it is bound to on the left and the object it is bound to on the right. In our case it sits midway between the parent's left and right edges.

Note that such two-sided bindings appear as springs, not lines.

Let’s complicate the example a bit by adding a button and adjusting the bindings

The button is tied to the right edge. And TextView is tied to the left edge and to the button.

If we now move the button, then TextView will remain exactly midway between the left edge and the button.

We can adjust a two-sided binding so that the View sits not in the middle but closer to the left edge or to the button. It is convenient to use the special slider in the Properties panel for this.

This slider sets the proportion. The default value is 50, i.e. half of 100, so the View sits halfway between the objects it is bound to. In our case, with a value of 50, the TextView is in the middle between the left edge and the button.

If you set, say, 25, the TextView will sit at a quarter of the distance between the left edge and the button, measured from the left edge. If you set 75, it will sit at 3/4 of that distance from the left edge.

And no matter how the distance between the left edge and the button changes, this proportion is always maintained.
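In XML this proportion is the horizontal bias attribute. A sketch with illustrative values (the button id is assumed; 0.25 corresponds to the slider value 25):

<TextView
    android:id="@+id/textView"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Hello World!"
    app:layout_constraintTop_toTopOf="parent"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintRight_toLeftOf="@+id/button"
    app:layout_constraintHorizontal_bias="0.25" />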

In the next lesson, we will continue to explore the possibilities of ConstraintLayout.

P.S.
If you came in this lesson from the first lesson, you can now go back and continue your studies. This information will be enough for you.





Lesson 176. OpenGL. Indices, textures for the cube.


In this lesson:

– we use indices and a texture for the cube

In the last lesson we learned how to apply a texture to a triangle and saw how two triangles can form one face of a cube. The example from that lesson could easily be extended into a full cube by adding the other 5 faces and mapping texture coordinates onto them.

In this lesson we will draw a cube, but we will use a couple of new techniques to make it easier.

First, we will use indices. This lets us shorten and simplify the code a bit. Right now, to draw a cube, we would have to list each vertex in the vertex array several times, because each vertex belongs to several of the triangles that make up the cube's faces. As a result the vertex array becomes complex and cumbersome, and if we ever need to change the coordinates of one vertex, we have to find all of its occurrences in the array and change each one.

Instead, we can list all 8 vertices of the cube once in the vertex array, and in a separate index array specify only the numbers (array indices) of the vertices used to build the triangles. That is, it is no longer the vertices that are repeated, but their numbers. If we need to change the coordinates of a vertex, we only have to find it in the vertex array and change it there once.

We will pass the vertex array to the shader as an attribute. And we pass the array of indexes to the glDrawElements method, which we call in onDrawFrame instead of the usual glDrawArrays.
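As a rough sketch of the idea (simplified; the names vertices, indexArray and indexBuffer are mine, only two faces are spelled out, and the winding order in the real module may differ):

// 8 cube vertices, listed once (x, y, z).
float[] vertices = {
        -1,  1,  1,   // 0: left  top    near
         1,  1,  1,   // 1: right top    near
        -1, -1,  1,   // 2: left  bottom near
         1, -1,  1,   // 3: right bottom near
        -1,  1, -1,   // 4: left  top    far
         1,  1, -1,   // 5: right top    far
        -1, -1, -1,   // 6: left  bottom far
         1, -1, -1,   // 7: right bottom far
};

// Each face = 2 triangles = 6 indices; vertices are referenced by number.
short[] indexArray = {
        0, 2, 1,  1, 2, 3,   // near face
        1, 3, 5,  5, 3, 7,   // right face
        // ... the remaining 4 faces are built the same way
};

// In onDrawFrame: draw via the index buffer instead of glDrawArrays.
java.nio.ShortBuffer indexBuffer = java.nio.ByteBuffer
        .allocateDirect(indexArray.length * 2)
        .order(java.nio.ByteOrder.nativeOrder())
        .asShortBuffer()
        .put(indexArray);
indexBuffer.position(0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexArray.length,
        GLES20.GL_UNSIGNED_SHORT, indexBuffer);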

Second, in this lesson we use a CUBE_MAP texture target rather than a 2D one. This is a special type of texture that holds a set of 6 images. If we are drawing a cube and use this texture type in the fragment shader, its 6 images will be laid onto the 6 faces of the cube.

This type of texture, by the way, frees us from having to map texture coordinates to vertices. We just build a cube from 8 vertices, and the system lays the textures onto it itself.

Let's look at the code. Download the source code and open the module lesson176_texture_cube.

Let's start with the TextureUtils class. It has a loadTextureCube method, which receives not a single resource id but an array. In this array we pass six image resource ids, one for each face.

Creating the texture object is unchanged. Then we create an array of Bitmaps and load the images from the resource ids. Next, in unit 0 and target GL_TEXTURE_CUBE_MAP we place the texture object we just created. Now we can configure this texture object and put the images into it.

First, using the glTexParameteri method, we set GL_LINEAR filtering for both modes (see Lesson 175 for details).

Then, using the texImage2D method, we place the images into the texture. But here we do not use the GL_TEXTURE_CUBE_MAP target itself; we use the targets derived from it. Let me explain briefly what they mean.

The center of the cube will be at (0,0,0), and its faces will be perpendicular to the X, Y and Z axes. That is, each axis passes through two faces, and those two faces lie on opposite sides of zero. If you take the X axis, for example, it intersects the faces at two points, -1 and 1: one face crosses the axis at -1 and the other at 1. Accordingly, these faces can be called NEGATIVE_X and POSITIVE_X.

Similarly, the Y and Z axes cross their faces at -1 and 1, and those faces can be denoted NEGATIVE_Y and POSITIVE_Y, NEGATIVE_Z and POSITIVE_Z.

These face names are used as targets when filling the texture. We specify which image corresponds to which face, and when drawing, the corresponding image is laid onto each face.

As a result, the loadTextureCube method takes from us an array of 6 images, creates a texture object, and fills it with these images in target GL_TEXTURE_CUBE_MAP.
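The core of such a method might look roughly like this (a sketch; bitmaps is the Bitmap array built from the six resource ids, and the order in which faces are assigned in the real module may differ):

int[] ids = new int[1];
GLES20.glGenTextures(1, ids, 0);

GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, ids[0]);

GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// One image per face: the target name says which face gets which bitmap.
GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, bitmaps[0], 0);
GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, bitmaps[1], 0);
GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, bitmaps[2], 0);
GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, bitmaps[3], 0);
GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, bitmaps[4], 0);
GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, bitmaps[5], 0);

GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, 0);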

Let's look at the OpenGLRenderer class.

The prepareData method. The vertices array contains only 8 vertices, and the indexArray array describes the triangles that make up the cube's faces, but instead of vertex coordinates it uses vertex indices into the vertices array.

In the texture variable we store the id of the cube texture created from the 6 images by the loadTextureCube method discussed above.

Note that there are no texture coordinates and no mapping to vertices as in the previous lesson. With a cube texture, the shader handles that itself.

In bindData we pass the vertex data to the vertex shader attribute. Note that we bind the previously created texture to unit 0, target GL_TEXTURE_CUBE_MAP.

In onDrawFrame, instead of glDrawArrays we use the glDrawElements method. It requires us to specify:
– what type of primitives to draw
– how many elements to take from the index array for drawing
– the data type used in the index array
– the index array itself

That is, with glDrawArrays we only specified how many vertices to use, and the shader simply took the vertices in order from the attribute. With glDrawElements we pass an index array that explicitly says which vertices to take from the attribute and in what order.

The vertex shader, vertex_shader.glsl.

The cube's vertices arrive in a_Position (of type vec4). We need to pass this data to the fragment shader in interpolated form and as a vec3. For this we use the varying vec3 variable v_Position, into which we put only the first three components (xyz) of the vec4.

Into gl_Position we write the vertices transformed by the matrix.

That is, we pass the vertices to the fragment shader before they are processed by the matrix: the fragment shader needs the vertices exactly as we put them into the vertex array.
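A sketch of what such a vertex shader can look like (variable names match the description above; u_Matrix is assumed to be the matrix uniform, and the actual file in the module may differ slightly):

attribute vec4 a_Position;
uniform mat4 u_Matrix;
varying vec3 v_Position;

void main() {
    // pass the raw (un-transformed) vertex on to the fragment shader
    v_Position = a_Position.xyz;
    // the matrix is applied only to the position used for drawing
    gl_Position = u_Matrix * a_Position;
}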

The fragment shader, fragment_shader.glsl.

The uniform variable into which we put the unit number has the type samplerCube. This (as far as I understand) means that the shader will look for a cube texture in the unit with the given number.

We call the textureCube method, passing it the variable that holds the texture unit and the interpolated cube-vertex data. textureCube determines which face is currently being drawn and which of the 6 images stored in the texture should be used, and maps the vertices to texture coordinates.
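A sketch of the corresponding fragment shader (again, names as described above, assumed rather than copied from the module):

precision mediump float;

uniform samplerCube u_TextureUnit;
varying vec3 v_Position;

void main() {
    // textureCube picks the right face image from the cube texture
    gl_FragColor = textureCube(u_TextureUnit, v_Position);
}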

Let's run it.

We can see that image 3 is on top. That is because we put the box3 image into target GL_TEXTURE_CUBE_MAP_POSITIVE_Y, so it lies on the face of the cube that intersects the positive (upper) part of the Y axis. Similarly, image 5 lies on the face that intersects the positive (nearest to us) part of the Z axis, because we put box5 into target GL_TEXTURE_CUBE_MAP_POSITIVE_Z.

Let's add rotation so we can look at the cube from all sides.

You can check the other faces against their targets; everything should match what we specified in the loadTextureCube method.

A few remarks

For the shader to lay the texture on the cube properly, the center of the cube must be at the point (0,0,0) and the faces of the cube must be perpendicular to the coordinate axes.

I tried to break these conditions and set, for example, such vertices in an array of vertices:

-0, 0.7f, 1,
1, 0.7f, 0,
-0, -0.7f, 1,
1, -0.7f, 0,
-1, 0.7f, -0,
0, 0.7f, -1,
-1, -0.7f, -0,
0, -0.7f, -1

That is, the vertices are set so that the cube is slightly rotated and its faces are no longer perpendicular to the axes.

The result:

The shader tried to apply the textures to the faces according to their positions relative to the axes, but since the cube was placed incorrectly, the textures ended up lying across the corners.

I also tried to shift the center of the cube, for example, by -2 along the Z axis.

-1, 1, -1,
1, 1, -1,
-1, -1, -1,
1, -1, -1,
-1, 1, -3,
1, 1, -3,
-1, -1, -3,
1, -1, -3

The result:

You can see that the textures again came out wrong.

In other words, the textureCube method in the fragment shader works correctly only if the center of the cube is at (0,0,0) and its faces are perpendicular to the axes.

In this lesson's example I used 6 different images for clarity. Of course, you can use just one image if all faces of the cube should look the same. In that case loadTextureCube can be given a single id instead of an array, create one Bitmap, and assign it to all 6 targets.





Lesson 175. OpenGL. Textures.


In this lesson:

– we use textures

In the previous lessons we drew colored triangles. In this lesson we will replace the color with a texture: we can take any image and tell the system to “stretch” that image over a triangle instead of simply filling it with a color.

Before we get into practice, we will need to discuss two main points when working with textures:
– How to get a texture ready to use with OpenGL from a regular image
– how to apply a texture to a triangle

Creating a texture from an image

Let's start with getting an image into OpenGL. For this we need to understand three concepts: texture unit, texture target and texture object.

A texture object stores the texture data and some of its parameters. A peculiarity of OpenGL is that you cannot just take this object and edit it or use it for drawing directly; you first have to put it into a specific slot, and only then can you modify the texture object or use it in your image.

The slots look something like this

Each large rectangle labeled GL_TEXTUREN (where N = 0, 1, 2, ...) is a texture unit. GL_TEXTUREN is the name of a constant you can refer to. I drew only three units, but there are more.

Each small rectangle inside a large one is a texture target; it can also be called a texture type. As far as I understand, there are only two of them in OpenGL ES:
GL_TEXTURE_2D – a regular two-dimensional texture
GL_TEXTURE_CUBE_MAP – an unfolded cube texture, i.e. a thing consisting of 6 squares

We will use GL_TEXTURE_2D in this lesson.

To work with a texture object, it must be placed into a target of some unit. From then on we work with that target, and the target in turn modifies the texture object.

So, to use any 2D image as a texture on screen, we need to go through the following steps.

1) Read the picture in Bitmap

2) Create a texture object

3) Make some unit active. The system will perform all further actions on textures in this unit. By default, the active unit is GL_TEXTURE0.

4) Place the texture object created in step 2 into some texture target. In our examples this will usually be GL_TEXTURE_2D, and I will use this target in the text below. We place the object into the target so that we can work with it: from now on, any operation we want to perform on the object is addressed to the target.

5) The texture object has been created but not yet configured. We need to do two things: put in the Bitmap (from step 1) and set up filtering. Filtering determines which algorithms are used if the texture has to be shrunk or stretched for display.

Remember that we do not work with the texture object directly: the object sits in the target, we work with the target, and the target passes everything on to the texture object.

So for GL_TEXTURE_2D we specify the required filtering modes and hand it the Bitmap. After that our texture object is ready, and we can move on to the shader.

6) The fragment shader will work with the texture. It is responsible for filling the shapes with pixels, and now, instead of a plain color, it will determine for each point of the triangle which point of the texture should be displayed.

For the shader to know which texture to use, we have to give it that information. Logically, you would expect to just pass it the texture object, but unfortunately things are a bit more complicated than we would like: we pass the shader not the texture object, but the number of the unit the texture is currently sitting in.

But remember that the texture sits not just in a unit, but in a target within that unit. How will the shader know which target of the given unit to look in? That depends on the type of the variable we use in the shader to represent the texture. In our example we will use the sampler2D type, and thanks to that type the shader knows it should take the texture from the GL_TEXTURE_2D target.

We will go through this with an example later. For now, the most important idea to understand is that we do not work with the texture object directly: we put it into a specific target of a specific unit, then we can change it there via the target, and the fragment shader can then take it from there for drawing.

Using texture

Now the second important piece of theory: we need to understand how the texture is “stretched” over an object. Consider the simplest example: a square that we draw using two triangles.

For simplicity I use only the X and Y coordinates here; Z is not important for this.

So we used 4 vertices to draw the square. To lay a texture onto this square, we need to map the square's vertices to texture coordinates.

The texture can be represented as follows

That is, each side of the texture is taken to be 1 (even if the sides are not equal in pixels), and using these S and T coordinates we can address any point of the texture.

If we want to put the texture onto our square, we just need to match the square's corners with the texture's corners. That is, for each vertex of the square we specify the texture point that corresponds to that vertex.

In our example we match the square's vertex coordinates with texture points as follows:

top left vertex (-1,1) -> top left texture point (0,0)

bottom left vertex (-1,-1) -> bottom left texture point (0,1)

top right vertex (1,1) -> top right texture point (1,0)

bottom right vertex (1,-1) -> bottom right texture point (1,1)

So we have mapped the square's vertices to the corners of the texture, and as a result the texture will lie evenly on the square and fill it completely.

Keep in mind that the texture is laid not onto the square but onto two triangles; after all, we build images from triangles. One piece of the texture is laid onto one triangle and another piece onto the other, and together the two pieces on the two triangles look like a whole square texture.

This is what one of the triangles looks like:

Since it is the shaders that match the triangle's vertices to texture coordinates, it is logical that we have to pass them this data. We already pass the vertex coordinates, and in this lesson we will add texture coordinates to them. When the shader receives this data, it knows which texture point corresponds to which vertex, and for all other points of the triangle (between the vertices) the corresponding texture points are computed by interpolation.

This mechanism is similar to what we discussed in Lesson 171 when drawing a gradient. There we specified color for each vertex, and the Fragment shader interpolated them between the vertices, and we got a gradient. In the case of texture, the Fragment Shader will calculate the texture coordinates, not the color.

Let’s look at the code that will implement everything we’ve discussed. Download the source code and open the module lesson175_texture.

First, let's look at the TextureUtils class. It has a loadTexture method, which takes an image resource id as input and returns the id of the created texture object containing that image. Let's go through this method in detail.

The glGenTextures method creates an empty texture object. Its parameters are:
– how many objects to create; we need one texture, so we pass 1
– an int array into which the method will put the ids of the created objects
– the array offset (the index of the element from which the method starts filling the array); here, as usual, we pass 0

We check the id: if it is 0, something went wrong and no texture object was created, so we return 0.

Following are the methods for getting Bitmap from a resource. You can read more about this in Lessons 157-159.

If the Bitmap could not be created, we delete the texture object using the glDeleteTextures method. Its parameters are:
– how many objects to delete; we need to delete 1
– the array of object ids
– the array offset (the index of the element from which the method starts reading the array); again 0

Next, the work with units and targets begins. With the glActiveTexture method we make unit GL_TEXTURE0 active, i.e. the unit with number 0. From now on all further operations are addressed to this unit, but the target has to be specified in each operation.

With the glBindTexture method we place our texture object into the GL_TEXTURE_2D target by passing its id. Note that we specify only the target, not the unit: we set the unit one line earlier, and the system, given only the target, works with that target in the active unit.

The glTexParameteri method lets us set parameters of the texture object. It takes three arguments:
– the target
– which parameter we are changing
– the value we want to assign to that parameter

In our example, we use the glTexParameteri method to specify filtering parameters. Let me remind you that filtering is used when the size of the triangle does not match the size of the texture, and the texture has to be compressed or stretched so that it evenly sits on the triangle.

There are two filtering options we need to specify:
GL_TEXTURE_MIN_FILTER – which filtering mode will be applied when compressing the image
GL_TEXTURE_MAG_FILTER – which filtering mode will be applied when stretching the image

Both of these parameters are set to GL_LINEAR mode. What this mode means and what other modes are, I will briefly describe at the end of this tutorial so as not to be distracted right now.

With the texImage2D method we pass the Bitmap to the texture object. Here we specify the target and the previously created Bitmap; the other two parameters are left as 0, they are not important to us yet.

With the recycle method we tell the system that we no longer need the Bitmap.

Finally, we call glBindTexture again, this time passing 0 for the GL_TEXTURE_2D target. This unbinds our texture object from the target.

So we first placed the texture object into the target, performed all the operations on it, and then freed the target. As a result, our texture object is now configured, ready to use, and not bound to any target.
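Putting the steps together, the loadTexture method looks roughly like this (a sketch assembled from the description above; it may differ in details from the module's actual code):

public static int loadTexture(Context context, int resourceId) {
    // 1. create an empty texture object
    final int[] ids = new int[1];
    GLES20.glGenTextures(1, ids, 0);
    if (ids[0] == 0) {
        return 0;
    }

    // 2. read the image into a Bitmap (without pre-scaling)
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inScaled = false;
    final Bitmap bitmap = BitmapFactory.decodeResource(
            context.getResources(), resourceId, options);
    if (bitmap == null) {
        GLES20.glDeleteTextures(1, ids, 0);
        return 0;
    }

    // 3-4. make unit 0 active and bind the object to the 2D target
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);

    // 5. filtering + the bitmap itself
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();

    // unbind the object from the target
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
    return ids[0];
}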

Let's look at the OpenGLRenderer class. Compared with past lessons there are a few changes unrelated to textures: I moved the code that creates the shaders and the program into a separate createAndUseProgram method, and gathered the calls that return the positions of shader variables into a getLocations method.

Now let's look at what is new with respect to textures, that is, what we do in order to use a texture. A brief reminder of what is required:

1) Create a texture object from the image.
2) Map the triangle's vertices to texture coordinates and pass this data to the shader so that it knows how to lay the texture onto the triangle.
3) Place the texture object into a target of some unit.
4) Pass to the shader the number of the unit that currently holds the texture object.

Look at the prepareData method. In the vertices array we specify 4 vertices for drawing a square. For each vertex we set 5 numbers: the first three are the vertex coordinates, and the last two are the corresponding point of the texture.

In the texture variable we store the id of the texture object created from the box image.

In the getLocations method, note two new shader variables:
a_Texture – an attribute in the vertex shader; we will pass texture coordinates to it.
u_TextureUnit – a uniform variable; we will pass it the number of the unit into which we placed the texture.

In the bindData method we first pass the vertex coordinates to aPositionLocation, then the texture coordinates to aTextureLocation. That is, we feed two attributes from a single array. We already did this in Lesson 171; if you have forgotten, take a look there, everything is described in detail.

With the glActiveTexture method we make unit 0 active. It is active by default anyway, but somewhere in the code we might have made another unit active, so we do this just in case.

With glBindTexture we place the texture object into the GL_TEXTURE_2D target.

With glUniform1i we tell the shader that it can find the texture in unit 0.
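These three calls, as a sketch (uTextureUnitLocation is assumed to be the location of u_TextureUnit obtained in getLocations):

// unit 0 is active, the texture sits in its 2D target,
// and the shader is told to look in unit 0
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
GLES20.glUniform1i(uTextureUnitLocation, 0);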

In the method onDrawFrame we ask the system to draw us triangles of 4 vertices. As a result, a square will be drawn and a texture will be superimposed on it.

Now we look at the shaders.

First the vertex shader, vertex_shader.glsl. Here we still compute the final coordinates (gl_Position) for each vertex using the matrix. The texture coordinates arrive in the a_Texture attribute, and we immediately write them into the varying variable v_Texture. This lets the fragment shader receive interpolated texture coordinates.

The fragment shader, fragment_shader.glsl. It has a uniform variable u_TextureUnit that receives the number of the unit holding the texture we need. Note the type of this variable: we passed a plain integer 0 into it, yet here it has the more exotic type sampler2D. This confused me a bit at first and I had to dig into it. In the end I concluded that when the system passes 0 to a shader variable of type sampler2D, the shader looks into unit 0 and takes into the sampler2D the texture contents from the GL_TEXTURE_2D target.

That is, the number passed to the shader (0 in our case) says which unit to look in, and the type of the variable it is passed into (sampler2D in our case) says which target to take the texture from (the 2D target). Of course, this only works if you have actually put a texture there with glActiveTexture and glBindTexture.

The varying variable v_Texture receives interpolated texture coordinates from the vertex shader, so the shader knows which texture point should be displayed at the current point of the triangle.

It remains to use the texture coordinates and the texture itself to obtain the final fragment. The texture2D method does this, and gl_FragColor receives the color of the required texture point.
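The whole fragment shader can therefore be as short as this (a sketch matching the variable names above, not a verbatim copy of the module's file):

precision mediump float;

uniform sampler2D u_TextureUnit;
varying vec2 v_Texture;

void main() {
    // take the color of the texture point that corresponds
    // to the current fragment
    gl_FragColor = texture2D(u_TextureUnit, v_Texture);
}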

Let's run the program.

The texture lies evenly on the square. More precisely, parts of the texture lie on the triangles, and together we see a textured square.

Partial use of texture

In the example we used the whole texture from (0,0) to (1,1). But this is not necessary. We can only use part of it.

Let’s look at this picture

It contains two images, and for the square we only need one, say the left one, which occupies the texture from 0 to 0.5 horizontally. To put it onto a square, we just change the mapping between vertices and texture points: the square's right vertices are now matched not with the right corners of the image, but with its middle.

Let’s display another square with this left half of the texture

Let's extend the vertices array:

float[] vertices = {
        -1,  1, 1,   0, 0,
        -1, -1, 1,   0, 1,
         1,  1, 1,   1, 0,
         1, -1, 1,   1, 1,
 
        -1,  4, 1,   0, 0,
        -1,  2, 1,   0, 1,
         1,  4, 1,   0.5f, 0,
         1,  2, 1,   0.5f, 1,
};

We added 4 more vertices to the original 4. They describe another square, which will be drawn above the first one, and its texture coordinates correspond to the left half of the texture.

Since we are going to use another texture, we need to create another variable in the OpenGLRenderer class:

private int texture2; 

In the method prepareData add the code to create the second texture object

texture2 = TextureUtils.loadTexture(context, R.drawable.boxes); 

rewrite onDrawFrame

public void onDrawFrame(GL10 arg0) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 
    glBindTexture(GL_TEXTURE_2D, texture);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
 
    glBindTexture(GL_TEXTURE_2D, texture2);
    glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
}

Here we ask the system to draw triangles first from the first four vertices, then from the second four; each group of four gives us a square. Before drawing each square we place the corresponding texture into the target, so the shader uses the first texture for the first square and the second texture for the second square.

Let's run it.

We see a second square. The shader used not the whole texture but its left half, because that is what we specified with the texture coordinates in the vertices array.

Finally, a little more theory

Several units

Why might several units be needed? Sometimes a fragment shader has to use several textures at once to produce the final fragment. Then it cannot do without several units, whose targets hold the different textures.

You can get the number of units available to you using the glGetIntegerv method

glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, cnt, 0);

where cnt is an int[] array with one element. After the call, cnt[0] holds the number of units.

Filtering modes

Let's talk a bit more about filtering: what it is, when it is applied, and what filtering modes exist.

So, we need to “stretch” a texture over a triangle. A point of a triangle is called a fragment (hence the name of the fragment shader, which has to draw every fragment of the triangle), and a point of a texture is called a texel. When the texture is laid onto a triangle their sizes may not match, and the system has to fit the texture to the triangle: either squeeze several texels into one fragment (minification) if the texture is larger than the triangle, or stretch one texel over several fragments (magnification) if the texture is smaller. In these cases filtering is applied to obtain the final fragment.

There are two basic filtering modes:
NEAREST – simply take the nearest texel for each fragment. Fast, but lower quality.
LINEAR – take the 4 nearest texels for each fragment and average them. Slower, but better quality.

In addition to filtering, mipmapping can be used. It creates several copies of the texture at different sizes, from the original down to a very small one. During filtering, the copy whose size best matches the triangle is used. This improves quality and speeds up the process, but increases memory consumption, because several smaller copies of the texture have to be kept in memory. More details can be found here.

The following code is required to enable mipmapping:

glGenerateMipmap(GL_TEXTURE_2D); 

Call it right after you put the Bitmap into the texture. For a guaranteed result, your texture should have POT (power of two) dimensions, i.e. its width and height must each be a power of two: 1, 2, 4, 8, 16, 32, etc. The maximum size is 2048. The texture does not have to be square (the width may differ from the height); the main thing is that both values are POT.

There are two ways mipmapping can be used in filtering:
MIPMAP_NEAREST – the texture copy closest in size is chosen, and filtering is applied to it to obtain the final fragment from its texels
MIPMAP_LINEAR – the two texture copies closest in size are chosen and filtering is applied to both; filtering each copy yields a fragment, and the final fragment is the average of the two

The second method gives better quality, but the first is faster.

These two ways of choosing copies, combined with the two filtering modes discussed earlier, give us 4 mipmap filtering modes:

GL_NEAREST_MIPMAP_NEAREST – NEAREST filtering, MIPMAP_NEAREST copy selection. That is, the texture copy closest in size is chosen and NEAREST filtering is applied to it.

GL_NEAREST_MIPMAP_LINEAR – NEAREST filtering, MIPMAP_LINEAR copy selection. That is, the two texture copies closest in size are chosen and NEAREST filtering is applied to each. The final result is the average of the two resulting fragments.

GL_LINEAR_MIPMAP_NEAREST – LINEAR filtering, MIPMAP_NEAREST copy selection. That is, the texture copy closest in size is chosen and LINEAR filtering is applied to it.

GL_LINEAR_MIPMAP_LINEAR – LINEAR filtering, MIPMAP_LINEAR copy selection. That is, the two texture copies closest in size are chosen and LINEAR filtering is applied to each. The final result is the average of the two resulting fragments.

In total we get 6 possible filtering modes:
GL_NEAREST
GL_LINEAR
GL_NEAREST_MIPMAP_NEAREST
GL_LINEAR_MIPMAP_NEAREST
GL_NEAREST_MIPMAP_LINEAR
GL_LINEAR_MIPMAP_LINEAR

The first two modes can be used both for minification and for magnification. The remaining four are only for minification.

If we look again at our code in the loadTexture method of the TextureUtils class:

GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

We configure two parameters:

GL_TEXTURE_MIN_FILTER – sets the filtering mode used for minification
GL_TEXTURE_MAG_FILTER – sets the filtering mode used for magnification

In both cases we set LINEAR filtering.

How to make a cube

In the site's Telegram channel I posted a picture of a spinning crate and wrote that after this lesson you would be able to make one yourself.

To do that, you will need to rework the current code a little.

First, undo all the changes we made during this lesson.

In my example there is only one face of the cube, made of two triangles. You will need to draw the remaining 5 faces, i.e. add 10 more triangles to the vertex array and correctly map them to texture coordinates. And, of course, add their drawing to onDrawFrame.

You can use the same texture for all faces. But if you like, you can find more textures on the internet and make a cube with a different texture for each face. In that case you will need to create a texture object for each texture and place it into the target before calling the method that draws the triangles of the corresponding cube face.

And if you want rotation, add a model matrix and set up a rotation around the Y axis. We covered how to do that in the previous lesson.





Lesson 174. OpenGL. Model


In this lesson:

– moving an individual object

In the last two lessons, we looked at two matrices.

The first (projection) allowed us to use three-dimensional coordinates to build the image. It took on all the work of translating 3D into a flat on-screen picture.

The second (view) gave us camera control: we could look at the scene from any point and at any angle.

In this lesson we look at a third matrix (model) that lets us move, rotate, shrink and stretch individual objects in our scene.

So the task is to move an object in the final image, that is, to move just one object and not touch the others. We could, of course, dynamically change that object's vertices in the array we pass to the shader, but this is not a good approach: if the object needs to be rotated we would have to compute the result ourselves, and we also lose performance if we push all the coordinates to the shader every frame.

It is much more convenient to use the model matrix. We only need to tell it which transformations we want, and it will compute everything itself.

Let's recall how everything works. From several matrices we obtain one final matrix and pass it to the vertex shader. We also have an array of vertices; we push these vertices through the shader, which transforms them with the final matrix.

In the last lesson we used two matrices, projection and view; from them we obtained the final matrix and passed it to the shader. As a result, all the objects in our image (the axes and the triangle) went through the shader and were converted to 2D (by the projection part of the final matrix) and shown from a certain point and at a certain angle (by the view part of the final matrix).

The point I am making is that we applied those two matrices to all objects. In this lesson we will apply a third matrix (in addition to those two) to just one of the objects, in order to affect the location / size / rotation of only that object and not touch the others.

That is, projection and view will process our object along with other objects, it will be part of the overall image, and the third matrix will further transform (eg, shift or rotate) it and only it. Other objects will not be affected.

Let’s look at the code, it will become clearer with it. Download the source code and open the lesson174_model module

Let's look at the OpenGLRenderer class. The code as a whole is familiar from past lessons; I will comment only on the changes.

We have 4 matrices described:

mProjectionMatrix – projection matrix

mViewMatrix – view matrix

mModelMatrix – model matrix

mMatrix – the final matrix

In the prepareData method we define a vertex array describing three axes and one triangle.

In the bindMatrix method we have added the model matrix to the calculation of the final matrix: we multiply the view and model matrices, then multiply the result by the projection matrix, obtain the final matrix and pass it to the shader.
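A sketch of that calculation (uMatrixLocation is assumed to be the location of the matrix uniform in the shader):

private void bindMatrix() {
    // final = projection * (view * model)
    float[] viewModel = new float[16];
    Matrix.multiplyMM(viewModel, 0, mViewMatrix, 0, mModelMatrix, 0);
    Matrix.multiplyMM(mMatrix, 0, mProjectionMatrix, 0, viewModel, 0);
    GLES20.glUniformMatrix4fv(uMatrixLocation, 1, false, mMatrix, 0);
}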

For convenience and clarity I split the main code of onDrawFrame into two methods, drawAxes and drawTriangle.

In drawAxes we reset the model matrix using the setIdentityM method, then compute the final matrix and pass it to the shader using bindMatrix. Since the model matrix has been reset to identity, multiplying it by the view matrix leaves the view matrix unchanged. That is, the reset model matrix has no effect on the final matrix, which contains only the view and projection data. This is exactly what we need for drawing the axes.

Subsequent calls to the glDrawArrays method will push the vertices through the shader, and the shader will use the final matrix to obtain the end result.

The drawTriangle method draws the triangle. First we reset the model matrix just in case, because before configuring it we need it clean. Then, in the setModelMatrix method, we specify the transformations we need in the model matrix, and in bindMatrix we build the final matrix and pass it to the shader. Now the subsequent glDrawArrays calls push the vertices through a shader whose matrix was built from our configured model matrix. Accordingly, the transformations we put into the model matrix are applied to the object the shader draws, in our case the triangle.

This is probably the key point of the lesson and worth understanding: for drawing the axes we compute the final matrix with an empty (identity) model matrix, and for the triangle with a model matrix in which we configure the transformations. As a result, the axes are drawn as before, and the triangle is shifted / rotated / shrunk / stretched depending on which transformations we set in the model matrix.

The setModelMatrix method is empty for now, i.e. we are not configuring the model matrix yet, and the triangle is drawn without any transformations.

Let's run it.

Translate

Now let’s move the triangle to the right by 1. Let’s rewrite setModelMatrix

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, 1, 0, 0);
}

The translateM method configures the matrix for a translation. We pass it the model matrix and a zero offset into the array. The last three parameters are the shift values along the X, Y and Z axes; we set a shift of 1 along the X axis. Since the camera looks at the scene from the Z axis, shifting the triangle by 1 along X moves it to the right.

Now, when running bindMatrix, which goes before drawing a triangle, the resulting matrix will be calculated taking into account the configured model matrix, and this will affect how the triangle is drawn – it will be shifted to the right.

Let's run it.

The triangle has shifted to the right. Note that we did not change the triangle's original vertices in the array: the shift is done by the matrix, and it was applied only to the triangle, while the axes stayed in place. That happened because different matrices were used when drawing the axes and the triangle.

Let's experiment further.

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, -1, 0, 0);
}

Offset by -1 on X

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, 0, 2, 0);
}

Y axis offset by 2

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, 0, 0, 2);
}

Z-axis offset by 2.

Since our camera is at the point (0,0,5), after shifting along the Z axis the triangle is closer to the camera and looks larger.

Now let's move it farther away.

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, 0, 0, -1);
}

Z-axis offset by -1.

The triangle is now farther away, and you can see that it has gone behind the intersection of the axes.

Of course, you can specify offset on multiple axes at once. Try it yourself.

Scale

Let's look at shrinking / stretching. The scaleM method is similar to translateM, but the last three parameters specify not an offset but a scale factor for each axis.

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 1, 1, 1);
}

We set three ones, so the object does not change along any axis.

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 2, 1, 1);
}

We set a factor of 2 for the X axis, that is, the object will be enlarged twice along the X axis. In our example, this will be an increase in width.

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 1, 3, 1);
}

Coefficient 3 on the Y axis. In our example, this will be three times the height.

If you set a factor less than one, the object will be compressed. Let’s squeeze it along the X axis.

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 0.5f, 1, 1);
}

If you set a negative coefficient, the object will be mirrored.

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 1, -2, 1);
}

We set a factor of -2 for the Y axis. The object will be magnified twice in height and mirrored along the Y axis.

You can set values for multiple axes at once:

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 2, 0.5f, 1);
}

Stretched on the X-axis and compressed on the Y

Rotate

It remains to consider the turn. To do this, we use the rotateM method, in which we specify the angle of rotation and the axis of rotation.

private void setModelMatrix() {
    Matrix.rotateM(mModelMatrix, 0, 45, 0, 0, 1);
}

Here we set an angle of 45 degrees and the rotation axis (0,0,1). That is, an axis is drawn from the origin of the coordinate system (the point (0,0,0)) through the point we specified, (0,0,1), and the rotation is performed around this axis.

The triangle turned 45 degrees counterclockwise. Why counterclockwise?

The axis here goes from (0,0,0) to (0,0,1), i.e. it points straight at the camera. The triangle rotates 45 degrees clockwise when viewed along the direction of this axis; since the camera looks in the direction opposite to the rotation axis, from the camera's point of view the triangle has turned 45 degrees counterclockwise.

Let's change the axis direction so that it coincides with the camera's viewing direction. To do this, we simply make the Z value negative.

private void setModelMatrix() {
    Matrix.rotateM(mModelMatrix, 0, 45, 0, 0, -1);
}

Now the axis from the point (0,0,0) to the point (0,0,-1) coincides with the camera's viewing direction, and the camera sees a clockwise rotation.

Set an angle of 180 degrees

private void setModelMatrix() {
    Matrix.rotateM(mModelMatrix, 0, 180, 0, 0, -1);
}

The triangle turned upside down.

A little later we will look at rotations around the X and Y axes and add animation, but first let's examine one important point.

You can apply more than one transformation to the matrix, and the transformations can be of the same type or of different types.

For example, two scalings:

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 2, 1, 1);
    Matrix.scaleM(mModelMatrix, 0, 1, 0.5f, 1);
}

The results combine.

The triangle is stretched twice along the X axis and shrunk to 0.5 along the Y axis. Of course, the result is the same if we specify both transformations in a single method call:

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 2, 0.5f, 1);
}

Example with two displacements

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, 1, 0, 0);
    Matrix.translateM(mModelMatrix, 0, 0, 2, 0);
}

Both displacements will be applied

The triangle will be shifted by 1 along the X axis and 2 by the Y axis.

They can also be combined into one

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, 1, 2, 0);
}

Translation and scaling:

private void setModelMatrix() {
    Matrix.scaleM(mModelMatrix, 0, 1, 2, 1);
    Matrix.translateM(mModelMatrix, 0, 1, 0, 0);
}

The triangle is moved by 1 along the X axis and stretched twice along the Y axis

So far everything is straightforward, but be careful: in some cases the order of the transformations matters.

Consider an example: rotation + displacement

To get started, let’s move the camera a little farther from the triangle to get a bigger view. To do this, change the value in the createViewMatrix method

float eyeZ = 8;

Next we set the transformation

private void setModelMatrix() {
    Matrix.translateM(mModelMatrix, 0, 2f, 0, 0);
    Matrix.rotateM(mModelMatrix, 0, 45, 0, 0, 1);
}

We see that the triangle is rotated 45 degrees around the Z axis and shifted by 2 along the X axis.

That is, the rotation was applied first, then the translation.

Let's swap the transformations:

private void setModelMatrix() {
    Matrix.rotateM(mModelMatrix, 0, 45, 0, 0, 1);
    Matrix.translateM(mModelMatrix, 0, 2f, 0, 0);
}

Seemingly the same operations, yet the result is different. That is because the order of operations changed: first the triangle was shifted by 2 along the X axis, and then, from that position, it was rotated around the Z axis and accordingly moved upward.

That is, the operations appear to be performed in the reverse of the order in which we write them in the code. Keep this in mind when you combine several transformations.

Let’s add animations for clarity

private void setModelMatrix() {
    float angle = (float)(SystemClock.uptimeMillis() % TIME) / TIME * 360;
    Matrix.rotateM(mModelMatrix, 0, angle, 0, 0, 1);
    Matrix.translateM(mModelMatrix, 0, 2f, 0, 0);
}

The angle angle will change from 0 to 360 every 10 seconds.

The animation clearly shows that the triangle moves each frame first along the X axis and then rotates around the Z axis.

We swap the conversion operations again to see what it will look like in the animation

private void setModelMatrix() {
    float angle = (float)(SystemClock.uptimeMillis() % TIME) / TIME * 360;
    Matrix.translateM(mModelMatrix, 0, 2f, 0, 0);
    Matrix.rotateM(mModelMatrix, 0, angle, 0, 0, 1);
}

Now you can see that each frame the triangle first rotates around the Z axis and is then shifted to the right along the X axis.

Just in case, let me describe once more what is actually happening.

1) In the prepareData method we prepared the vertex data in an array, and in the bindData method we passed it to the vertex shader. We call these methods once at the start, in onSurfaceCreated. That is, we pass the vertices only once, not every frame!

2) For each frame the system calls the onDrawFrame method. In it we call glDrawArrays, specifying which of the vertices (from those we passed in step 1) the shader should take and what to draw from them.

3) Also in onDrawFrame we recompute the final matrix and pass it to the shader. That is, unlike the vertices, we do this not once at the start but every frame. And since each frame we configure the model matrix with rotateM and a constantly changing angle variable, the final matrix in each new frame contains data different from what it contained in the previous frame.

4) As a result, each frame the shader takes the data we passed to it in step 1, applies the current frame's final matrix to it, and produces the computed vertex coordinates. So in each new frame the object is drawn in a different place, and that is what gives us the animation.

Now you can go back to the turn and look at it in more detail.

Let's move the camera slightly so that we see the triangle at an angle; this makes the rotations around all the axes easier to see. To do this, change the camera position in the createViewMatrix method:

float eyeX = 2;
float eyeY = 2;
float eyeZ = 3;

And in the transformation we will set a turn

private void setModelMatrix() {
    float angle = (float)(SystemClock.uptimeMillis() % TIME) / TIME * 360;
    Matrix.rotateM(mModelMatrix, 0, angle, 0, 0, -1);
}

We see a rotation around the Z axis (yellow). Let’s also try the X and Y axes.

In the setModelMatrix method, change the parameters of the rotateM method:

Matrix.rotateM(mModelMatrix, 0, angle, 1, 0, 0);

The triangle will now rotate around the X axis (red)

Y-axis (blue)

Matrix.rotateM(mModelMatrix, 0, angle, 0, 1, 0);

Now the axis drawn to the point (0,1,1)

Matrix.rotateM(mModelMatrix, 0, angle, 0, 1, 1);

Try to imagine an axis going from point (0,0,0) to point (0,1,1) – that is, it will lie between the axes Y and Z. And a triangle will rotate around it.

As a small exercise, try drawing the rotation axis itself, just as we drew the up-vector in the last lesson. It will make the rotation clearer.





Lesson 173. OpenGL. Camera


In this lesson:

– working with the camera

Let’s recapitulate the material of the last lesson.

1) Our screen is two-dimensional, so to display an image the system uses only the x and y coordinates. The z coordinate is used to decide which point to draw when several points share the same x and y; it does not by itself add three-dimensionality. To make the image look three-dimensional, w is used, with which the system emulates perspective.

2) In our application we want to describe the three-dimensional world and use the three-dimensional XYZ coordinate system for this purpose.

3) The perspective matrix allows us to implement the wish list of claim 2. It converts virtual 3D coordinates to two-dimensional (p. 1) so that the system can draw an image that looks like three-dimensional.

4) The visible 3D scene is bounded on all sides by a shape – a prism or a truncated pyramid – called the frustum. At the apex of the pyramid is the camera, that is, the point from which we "look" at the image. The camera's line of sight passes right through the frustum.

5) By default, the camera is at a point (0,0,0) and looks down the z axis.

In this lesson we will dig into item 5, that is, how we can control the position and direction of the camera.

If you have a complex 3D scene with a bunch of small objects and you want, for example, the user to be able to view it from all sides, then instead of rotating all the objects in front of the camera we can simply move the camera itself. The system will, of course, still recompute all the points of all the objects, but that causes us no difficulty: we simply specify the position and direction of the camera.

Two points and a vector are used to set the direction and position of the camera.

The first point is the position of the camera. The default is the point with coordinates (0,0,0). Here we can specify any point and, in this way, move the camera anywhere.

The second point specifies the direction of the camera. That is, at this point the camera “looks”. Here we can also specify any point and the camera will point to it.

That leaves the vector. It is not as easy to explain as the two points. Imagine that your camera has an antenna, like a radio or an old cell phone. The antenna points upward relative to the camera. The direction indicated by the antenna is a vector. To distinguish it from the general concept of a vector, I will call it the up-vector.

To understand this better, imagine that you pick up the camera and place it at the position set by the first point. Then you point it at the second point. That's it, your camera is fixed: you can no longer move it or point it elsewhere without leaving the first and second points. The only thing you can do without disturbing either the position or the direction is to rotate the camera clockwise or counterclockwise around the viewing axis. The camera stays at the first point and its direction does not change, it keeps looking at the second point; only the final picture rotates clockwise or counterclockwise. And it is exactly this rotation that is governed by the vector that, like an antenna, always points up relative to the camera.

As a result, the two points and the vector uniquely set the position of the camera in space.

Let’s move on to practice. We will look at some examples where we will change both points and the vector and see what this leads to.

Download the source code and open the module lesson173_view

As always, look at the class OpenGLRenderer.

Note that we have three matrices:
mProjectionMatrix – this matrix is already familiar to us from the last lesson. It is responsible for creating the three-dimensional look
mViewMatrix – this matrix will contain the position and direction of the camera. In this lesson we will work mainly with it
mMatrix – the final matrix obtained by multiplying mProjectionMatrix and mViewMatrix. As a result, mMatrix contains both the perspective data and the camera data. It is this combined matrix that we pass to the shader. The shader will run the points of all our objects through it, and as a result we get a three-dimensional picture that looks the way we set up the camera.

In the prepareData method we have the vertices array. It describes several primitives:
– 4 identical triangles located around the Y axis. The base of each triangle is 4s, the height is 2s, and the distance from the Y axis is d. By changing these parameters you can resize and reposition the triangles in this example if you ever need to.
– three lines indicating the axes X, Y and Z. The length of each line is 2l.

That is, in 3D it will look something like this:

Next, look at the createViewMatrix method. It is the key method of this lesson. It creates the matrix containing the camera data. Recall that this data consists of two points and a vector, and as you can see, we set them here:

eyeX, eyeY, eyeZ – the coordinates of the camera position point, i.e. where the camera is
centerX, centerY, centerZ – the coordinates of the camera target point, i.e. where the camera looks
upX, upY, upZ – the coordinates of the up-vector, i.e. the vector that lets you rotate the camera around the viewing axis

All these parameters are passed to the setLookAtM method, which fills in the mViewMatrix matrix for us.

In the bindMatrix method we multiply the mProjectionMatrix and mViewMatrix matrices. The result is placed in mMatrix, and we pass this matrix to the shader using the glUniformMatrix4fv method, as shown in the excerpt below.
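
Concretely, bindMatrix looks like this (the same code appears in the full class listing later in this lesson):

private void bindMatrix() {
    // combined matrix = projection * view
    Matrix.multiplyMM(mMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
    // pass the combined matrix to the shader
    glUniformMatrix4fv(uMatrixLocation, 1, false, mMatrix, 0);
}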

All the other code is familiar from past lessons, so I will not dwell on it. Instead, let's run the example and see what it shows us

See what we set there in the createViewMatrix method.

The position of the camera is at the point (0,0,3), ie the camera is on the Z axis.

Direction – to the point (0,0,0), that is, the camera looks at the center of our coordinate system.

up-vector – (0,1,0), that is, it is directed upwards along the Y axis.

The resulting image is quite consistent with the given parameters. We do not see the (blue) triangle far from us because it is completely covered by the (green) triangle close to us. Also, the green triangle hides the point of intersection of the axes. Because this point is behind it. In general, the Z-buffer works, everything is ok.

The blue line is the X axis, the purple axis is Y.

position

Let’s change the camera settings, in the createViewMatrix method we change:

eyeZ = 2

That is, move the camera a little closer to the point (0,0,0)

run

The green triangle is gone. Why? Its coordinate on the Z axis is 0.9. The camera is now at 2 on the Z axis and looks toward 0, so it would seem that it should see the point at 0.9.

The cause is in the frustum parameters, which we specify in the createProjectionMatrix method. We set the near boundary to 2, i.e. the camera begins to "see" objects only at a distance of at least 2 from it. The distance between the green triangle and the camera is now 2 – 0.9 = 1.1, so the camera does not see it.

For comparison: when eyeZ was 3, the camera was at point 3 on the Z axis, and the distance between it and the green triangle was 3 – 0.9 = 2.1. That is, the green triangle was inside the frustum and the camera saw it.

Let’s change the camera settings:

eyeZ = 9

Nothing is visible on the screen. Again, look at the frustum parameters: the far boundary is 8. That is, the camera does not see anything farther than 8 away from it.

In our example the camera is at point 9 on the Z axis and looks along the Z axis toward the point (0,0,0), so the farthest it can see is 9 – 8 = 1 on the Z axis. The triangle closest to the camera is at 0.9, so the camera does not see it. The other triangles are even farther away, so they are not visible either.

Let’s move the camera a little closer

eyeZ = 7

The triangles fall into the frustum area and the camera sees them.

Now the camera is on one side of the Z axis, let’s move to the other side.

eyeZ = -4

You can see that the camera now looks at the triangles from the other side. We moved the camera to the other side of the Z axis, but it keeps looking at the point (0,0,0) – we changed nothing there. And the up-vector remained the same.

Move the camera to the X axis

eyeX = 3
eyeY = 0
eyeZ = 0

The red triangle is now the closest one. And the Z axis has become visible; it is orange.

Place the camera between the X and Z axes
eyeX = 2
eyeY = 0
eyeZ = 4

Lift the camera along the Y axis

eyeX = 2
eyeY = 3
eyeZ = 4

We look at the triangles from above. All three axes and their point of intersection are clearly visible.

lower the camera

eyeX = 2
eyeY = -2
eyeZ = 4

Now the camera looks from below

direction

We tested the position of the camera, now try to change the direction of the camera.

Put the camera back on the Z axis

eyeX = 0
eyeY = 0
eyeZ = 4

The camera looks at the point (0,0,0). let’s change that

centerX = 1;

That is, change the direction of the camera slightly to the right along the X axis.

now to the left

centerX = -1;

above

centerY = 2;

down

centerY = -3;

I did not change centerZ; try changing it yourself and make sure the camera direction changes even when X and Y stay the same.

up-vector

It remains to consider the vector that sets the camera’s rotation.

Reset camera direction to (0,0,0)
centerX = 0;
centerY = 0;
centerZ = 0;

The vector is now set as (0,1,0), that is, the camera is rotated so that its up-vector looks upwards along the Y axis.

Let’s turn the camera slightly to the right

upX = 1;

That is, the up-vector is now (1,1,0). That is, it no longer looks up along the Y axis, but up and to the right – between the Y and X axes.

You can tilt it toward the X axis by a smaller amount

upX = 0.2f;

And you can rotate 90 degrees so that the up-vector is in the same direction as the X axis.

upX = 1;
upY = 0;

Let's keep turning and point the up-vector downward along the Y axis

upX = 0;
upY = -1;

The camera has turned upside down.

animation

Let’s add a little movement to our still image. We continue to perform all actions in the OpenGLRenderer class.

we add a constant

 private final static long TIME = 10000; 

We change the onDrawFrame method. Let’s add in its beginning the call to createViewMatrix and bindMatrix methods.

@Override
public void onDrawFrame(GL10 arg0) {
    createViewMatrix();
    bindMatrix();
 
    ...
}

We need this so that the view matrix is rebuilt every frame and passed to the shader.

Rewrite the createViewMatrix method

private void createViewMatrix() {
 
    float time = (float)(SystemClock.uptimeMillis() % TIME) / TIME;
    float angle = time  *  2 * 3.1415926f;
 
    // camera position point
    float eyeX = (float) (Math.cos(angle) * 4f);
    float eyeY = 1f;
    float eyeZ = 4f;
 
    // camera target point
    float centerX = 0;
    float centerY = 0;
    float centerZ = 0;
 
    // up-vector
    float upX = 0;
    float upY = 1;
    float upZ = 0;
 
    Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
}

This method is now called every frame. We use this to create the animation. To do so, we perform a number of calculations:

1) Using the current time (SystemClock.uptimeMillis) and the TIME constant, we compute a float value from 0 to 1 in the time variable. That is, the time variable grows from 0 to 1, resets to 0, grows to 1 again, and so on.

2) Then, multiplying this value by 2 and by Pi, we get values in the range from 0 to 2 * Pi. This is an angle expressed in radians.

3) Passing the resulting angle to cos or sin, we get values in the range from -1 to 1. That is, the value smoothly runs back and forth between -1 and 1. If this is not clear to you, it is worth reviewing the basics of trigonometry.

4) It remains to multiply the result by, for example, 4, and we get a value that runs between -4 and 4.

We put this value into eyeX. That is, the camera will move along the X axis between -4 and 4.
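
To check with concrete numbers: at uptimeMillis % TIME = 2500 with TIME = 10000 we get time = 0.25, angle = Pi/2, cos(angle) = 0 and therefore eyeX = 0; a quarter of a period later time = 0.5, angle = Pi, cos(angle) = -1 and eyeX = -4.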

Run and get about the following result

If we attach the animated value to eyeY instead, we get a camera that moves up and down

eyeX = 2f;
eyeY = (float) (Math.cos(angle) * 3f);
eyeZ = 4f;

Having done so:

eyeX = (float) (Math.cos(angle) * 4f);
eyeY = 1f;
eyeZ = (float) (Math.sin(angle) * 4f);

we get a camera that rotates around triangles.

Accordingly, you can experiment by attaching such values to different coordinates and get different camera trajectories, directions and rotations.

A small addition to the up-vector

Let’s go back to one of the examples we considered when we positioned the camera this way

eyeX = 2
eyeY = 3
eyeZ = 4
centerX = 0
centerY = 0
centerZ = 0
upX = 0
upY = 1
upZ = 0

And we got this result

If you picture this 3D scene, it is obvious that from its position the camera is looking slightly downward: it sits at a height of 3 (along the Y axis) and looks at a point at height 0, so it needs a slight downward tilt.

Yet our up-vector is (0,1,0), i.e. strictly parallel to the Y axis. Now recall the description of the up-vector: it should point strictly upward relative to the camera. But our camera is tilted relative to the Y axis, so its up-vector cannot be parallel to the Y axis – it should be tilted as well.

We get a contradiction: we set this up-vector in the code, but in reality it cannot be like that. The reason is the slightly simplified description of the up-vector I gave. I explained it as simply as possible so as not to overcomplicate things right away; now we need to take in a bit more information. I do not know how to explain all of it strictly in terms of geometry, so I will describe in my own words the simplest way to understand the up-vector.

So, we have the point the camera looks at. From this point we draw the up-vector – not from the camera, but from the point the camera looks at. And depending on the direction of this vector, the camera (which is looking at this point) rotates so that in the final two-dimensional image this vector points up.

That is, in 3D it can be tilted toward or away from the camera itself, but in the final two-dimensional image it will always point upwards.

Just in case, a reminder from geometry: if we draw the vector (a, b, c) from the point (x, y, z), we get a directed segment between the points (x, y, z) and (x + a, y + b, z + c).

In our example the camera looks at the point (0,0,0), and the up-vector is (0,1,0). If you draw this vector from that point, you get the segment between (0,0,0) and (0,1,0). It obviously coincides in direction with the Y axis. The camera must be oriented so that this segment (and therefore the Y axis) points up, and in the final picture we do see that the Y axis points up.

This is hard to explain in words, so let's look at an example. We will change the code so that the up-vector, drawn from the point the camera looks at, is always displayed. Let's make it white.

I will give the code of the whole class rather than listing the changes one by one

public class OpenGLRenderer implements Renderer {
 
    private final static int POSITION_COUNT = 3;
 
    private Context context;
 
    private FloatBuffer vertexData;
    private int uColorLocation;
    private int aPositionLocation;
    private int uMatrixLocation;
    private int programId;
 
    private float[] mProjectionMatrix = new float[16];
    private float[] mViewMatrix = new float[16];
    private float[] mMatrix = new float[16];
 
 
    float centerX;
    float centerY;
    float centerZ;
 
    float upX;
    float upY;
    float upZ;
 
    public OpenGLRenderer(Context context) {
        this.context = context;
    }
 
    @Override
    public void onSurfaceCreated(GL10 arg0, EGLConfig arg1) {
        glClearColor(0f, 0f, 0f, 1f);
        glEnable(GL_DEPTH_TEST);
        int vertexShaderId = ShaderUtils.createShader(context, GL_VERTEX_SHADER, R.raw.vertex_shader);
        int fragmentShaderId = ShaderUtils.createShader(context, GL_FRAGMENT_SHADER, R.raw.fragment_shader);
        programId = ShaderUtils.createProgram(vertexShaderId, fragmentShaderId);
        glUseProgram(programId);
        createViewMatrix();
        prepareData();
        bindData();
    }
 
    @Override
    public void onSurfaceChanged(GL10 arg0, int width, int height) {
        glViewport(0, 0, width, height);
        createProjectionMatrix(width, height);
        bindMatrix();
    }
 
    private void prepareData() {
 
        float s = 0.4f;
        float d = 0.9f;
        float l = 3;
 
        float[] vertices = {
 
                // first triangle
                -2 * s, -s, d,
                2 * s, -s, d,
                0, s, d,
 
                // second triangle
                -2 * s, -s, -d,
                2 * s, -s, -d,
                0, s, -d,
 
                // third triangle
                d, -s, -2 * s,
                d, -s, 2 * s,
                d, s, 0,
 
                // fourth triangle
                -d, -s, -2 * s,
                -d, -s, 2 * s,
                -d, s, 0,
 
                // X axis
                -l, 0, 0,
                l, 0, 0,
 
                // Y axis
                0, -l, 0,
                0, l, 0,
 
                // Z axis
                0, 0, -l,
                0, 0, l,
 
                // up-vector
                centerX, centerY, centerZ,
                centerX + upX, centerY + upY, centerZ + upZ,
        };
 
        vertexData = ByteBuffer
                .allocateDirect(vertices.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertexData.put(vertices);
    }
 
    private void bindData() {
        // coordinates
        aPositionLocation = glGetAttribLocation(programId, "a_Position");
        vertexData.position(0);
        glVertexAttribPointer(aPositionLocation, POSITION_COUNT, GL_FLOAT,
                false, 0, vertexData);
        glEnableVertexAttribArray(aPositionLocation);
 
        // color
        uColorLocation = glGetUniformLocation(programId, "u_Color");
 
        // matrix
        uMatrixLocation = glGetUniformLocation(programId, "u_Matrix");
    }
 
    private void createProjectionMatrix(int width, int height) {
        float ratio = 1;
        float left = -1;
        float right = 1;
        float bottom = -1;
        float top = 1;
        float near = 2;
        float far = 8;
        if (width > height) {
            ratio = (float) width / height;
            left *= ratio;
            right *= ratio;
        } else {
            ratio = (float) height / width;
            bottom *= ratio;
            top *= ratio;
        }
 
        Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
    }
 
    private void createViewMatrix() {
        // camera position point
        float eyeX = 2;
        float eyeY = 3;
        float eyeZ = 4;
 
        // camera target point
        centerX = 0;
        centerY = 0;
        centerZ = 0;
 
        // up-vector
        upX = 0;
        upY = 1;
        upZ = 0;
 
        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
    }
 
 
    private void bindMatrix() {
        Matrix.multiplyMM(mMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
        glUniformMatrix4fv(uMatrixLocation, 1, false, mMatrix, 0);
    }
 
    @Override
    public void onDrawFrame(GL10 arg0) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 
        // triangles
        glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 0, 3);
 
        glUniform4f(uColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 3, 3);
 
        glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 6, 3);
 
        glUniform4f(uColorLocation, 1.0f, 1.0f, 0.0f, 1.0f);
        glDrawArrays(GL_TRIANGLES, 9, 3);
 
        // axes
        glLineWidth(1);
 
        glUniform4f(uColorLocation, 0.0f, 1.0f, 1.0f, 1.0f);
        glDrawArrays(GL_LINES, 12, 2);
 
        glUniform4f(uColorLocation, 1.0f, 0.0f, 1.0f, 1.0f);
        glDrawArrays(GL_LINES, 14, 2);
 
        glUniform4f(uColorLocation, 1.0f, 0.5f, 0.0f, 1.0f);
        glDrawArrays(GL_LINES, 16, 2);
 
        // up-vector
        glLineWidth(3);
        glUniform4f(uColorLocation, 1.0f, 1.0f, 1.0f, 1.0f);
        glDrawArrays(GL_LINES, 18, 2);
 
    }
 
}

The changes are small. The variables that held the camera direction and up-vector data have been moved from the method into class fields. They are used in the prepareData method to set the coordinates of a segment that shows the up-vector drawn from the point the camera looks at. And in onDrawFrame we added drawing of this segment to the screen.

run

We see a white segment showing the up-vector. You can set any camera position and target point and any up-vector, and in the resulting image the up-vector will point upward.

A couple more example results

I hope that after that the up-vector will become clear to you.





Lesson 172. OpenGL. Perspective. Frustum. Ortho.


In this lesson:

– use perspective mode
– describe frustum
– we use ortho-mode

We move on to 3D. First, let's figure out how to implement perspective, i.e. make objects smaller as they move away from us and bigger as they get closer.

Download the source code and open the module lesson172_perspective.

We look at the code, the class OpenGLRenderer. In the method prepareData two vertices are given.

float x1 = -0.5f, y1 = -0.8f, x2 = 0.5f, y2 = -0.8f;
 
float[] vertices = {
        x1, y1, 0.0f, 1.0f,
        x2, y2, 0.0f, 1.0f,
};

Note that we use 4 values for each vertex: x, y, z and w. With x, y and z everything is clear – they are simply the coordinates along the three axes – but the 4th value, w, is new to us; we have not used it before. If it is not explicitly included in the vertex data, its default value in the shader is 1. We will also set it equal to one.

The onDrawFrame method states that we need to draw two points from these vertices

glDrawArrays(GL_POINTS, 0, 2);

let’s launch the application

There are two green dots on the screen

Now let's find out why we need this 4th vertex value, w. The system uses it to create perspective. When a vertex (x, y, z, w) comes in, the system divides the x, y, z coordinates by w and ends up with a vertex at the coordinates (x/w, y/w, z/w), and it is this division that gives the perspective effect. Let's verify this. Rewrite the array in prepareData:

float[] vertices = {
        x1, y1, 0.0f, 1.0f,
        x1, y1, 0.0f, 1.5f,
        x1, y1, 0.0f, 2.0f,
        x1, y1, 0.0f, 2.5f,
        x1, y1, 0.0f, 3.0f,
        x1, y1, 0.0f, 3.5f,
 
        x2, y2, 0.0f, 1.0f,
        x2, y2, 0.0f, 1.5f,
        x2, y2, 0.0f, 2.0f,
        x2, y2, 0.0f, 2.5f,
        x2, y2, 0.0f, 3.0f,
        x2, y2, 0.0f, 3.5f,
};

We continue to use only two points (x1, y1, 0) and (x2, y2, 0), but now we output each of them 6 times, changing the value of w from 1 to 3.5.

It is logical to assume that if you draw the same point 6 times, the result on the screen will be a single point. But we use a different w for each of the 6 points. That is, when each point is drawn, its coordinates are divided by its own w and therefore differ from the others. For example, for the point (x1, y1, 0) we get the set of points:
(x1 / 1, y1 / 1, 0 / 1)
(x1 / 1.5, y1 / 1.5, 0 / 1.5)
(x1 / 2, y1 / 2, 0 / 2)
etc.

That is, they will be completely different points and, accordingly, they will be drawn in different places, not in one.

In the onDrawFrame method, don't forget to specify in glDrawArrays that we now need to draw not 2 but 12 vertices.

glDrawArrays(GL_POINTS, 0, 12);

run

Each of the two points has now turned into 6 points. Notice how they are positioned relative to each other: an illusion of perspective is created, that is, the points seem to recede from us. True, the size of the points does not change, so look specifically at their placement. The greater the value of w, the "farther" from us the point appears in the final image.
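
A quick check with the numbers: for x1 = -0.5, the drawn x is -0.5 / 1 = -0.5 at w = 1, but -0.5 / 3.5 ≈ -0.14 at w = 3.5. The point shifts toward the center of the screen, which is exactly what we read as "farther away".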

When I was studying this topic myself, I had some kind of “pattern break”. I once expected that I would simply set the z-coordinate and, thus, indicate to the system how far or near my point would be. And here is some w.

Let’s try to forget about w and use z.

Rewrite the array to prepareData:

float[] vertices = {
        x1, y1, -1.0f,
        x1, y1, -1.5f,
        x1, y1, -2.0f,
        x1, y1, -2.5f,
        x1, y1, -3.0f,
        x1, y1, -3.5f,
 
        x2, y2, -1.0f,
        x2, y2, -1.5f,
        x2, y2, -2.0f,
        x2, y2, -2.5f,
        x2, y2, -3.0f,
        x2, y2, -3.5f,
};

And replace the POSITION_COUNT constant from 4 to 3.

private final static int POSITION_COUNT = 3;

This constant is used in the glVertexAttribPointer method and denotes the number of components we use to pass vertex location data. In the previous example we used 4 components (XYZW) and now we will only have 3 (XYZ).

We removed the w data from the vertices (it will automatically be 1 when the data is passed to the shader) and now use different z coordinates. Intuitively it seems that we should get roughly the same result: the points should line up in perspective, farther and farther away, because the z coordinate moves them away from us.

run

We see only two points; the trick did not work. And here is one very important thing to understand. Our screen is a two-dimensional image: it has only two axes, X and Y, and it takes only these coordinates into account when placing objects on the screen. So if we want to create the illusion that an object is receding, i.e. make it smaller and shift it along the perspective, we have to change its X and Y values.

Here is an analogy with a sheet of paper. You took a sheet and drew, say, a house. Then you were asked to draw exactly the same house, but standing a little "deeper" into the sheet. You simply draw the same house a little smaller, because it is located a little "farther" from you, and your brain knows that distance can be emulated simply by making an object smaller. You did not use any z coordinates: you drew everything on a 2D sheet using only the X and Y axes.

The situation with OpenGL is similar. The system expects x and y coordinates from you in order to draw on a two-dimensional screen, and it can depict any perspective of an object using only those coordinates. We saw in the example with the points how the w value helps here: it changes x and y and gives us perspective in the final image.

But then a reasonable question arises: why is the z coordinate needed at all? There is a job for it too. In our two-dimensional image it is used by the depth buffer (also called the z-buffer). Consider an example: at the drawing stage the system has two points with the same (x, y) coordinates, say (1,2,0) and (1,2,-1). Both have x = 1 and y = 2 and differ only in z. Suppose one of these points is red and the other blue. Which one should the system draw on the screen?

By default, the one drawn last is shown: it is simply painted over the previous one. But from the point of view of the 3D scene this is not always correct. We may send a near object for drawing first and a far object after it, and in a proper 3D scene, if both objects lie on the same line of sight, the near object must cover the far one. Yet by default the far one would be visible, because it was drawn after the near one and simply overwrote it. This is where the z coordinate comes in: using it, the depth buffer determines which of the points is closer to you and which is farther, and displays the near one; the far one, accordingly, is not drawn.

It should also be noted here that the z-coordinate is limited to 1 in each direction. That is, all points that have a z-coordinate greater than 1 or less than -1 will simply not be drawn. That is, similar to the coordinates of x and y. If you remember, we talked about it in lesson 169.

That is, all summary points must lie between -1 and +1 on each of the three axes.

When I read up to this point, I was a little upset because it all looks kind of awkward, weird and unclear. Especially the w-value, which has to be calculated in some way in order to set this or that distance of removal of the object.

But! Everything turned out to be not as sad as it might have seemed. OpenGL kindly provides mechanisms that let us know nothing about the w-value and use the same z-coordinate to specify the distance to the object. To do this, we just need to create a matrix and use it.

The main point is that there are two coordinate systems:
1) The first one is the 2D we just looked at. Where a pair of x and y sets the location of a point on the screen, z is used by the depth buffer and w is used to adjust xyz to get perspective. It is on this system that we have worked with you until now.
2) The second coordinate system is virtual 3D space. It has three axes of coordinates, and it uses the xyz coordinates to specify the location of the object. It is on this system that we will create our image. And the system, using a matrix, will convert it all to the first system, that is, to the usual coordinates of the 2D screen.

That is, we place a vertex in 3D using three coordinates (x, y, z) and pass it to the shader. We also pass the matrix to the shader. The shader transforms the vertex with this matrix and outputs (x, y, z, w) values in which the perspective is already accounted for, and the system uses these values for drawing. In other words, the matrix works out for us how to obtain w from the z we specified, so that the object is drawn as if it were at that distance. Thus the matrix performs the transition from the virtual three-dimensional world to the two-dimensional screen.

So we need to create this magic matrix, pass it to the shader, and then use it in the shader. Before creating the matrix, let's understand what it will do. Look at the picture.

Camera position is the point where the camera is located. That is from this point we will see the image.

Near plane and Far plane – near and far visibility limits. The camera’s “gaze” direction runs through the center of these borders. Also coming from the camera are four rays that pass through the vertices of these boundaries and eventually form a pyramid. The camera will see everything in this pyramid in between near and far borders (this area is called frustum).

That is, how will the screen image be obtained?

1) First we draw the objects we want to see. To do this, as always, we set up an array of vertices and ask for the needed objects to be drawn – everything we did before in the previous lessons. The only difference is that we now also use the z coordinate. This lets us build a full 3D scene, i.e. move objects closer to and farther from us.

2) We form a frustum matrix, that is, a matrix that will contain data about the pyramid (which we just discussed). To do this, we will specify the distance from the camera to the near and far borders, and the dimensions of the near-border. This will be enough to fully describe the frustum zone.

3) In the shader we apply the matrix from item 2 to the vertices from item 1. This projects the three-dimensional objects onto a two-dimensional surface. That is, the part of the work I talked about at the beginning of the lesson happens here: the volume is mapped onto the plane, the w value is used to create perspective, and z goes to the depth buffer.

In other words, virtual 3D coordinates are converted into real 2D coordinates so that the image can be displayed on a two-dimensional screen while keeping its 3D look.

And as you may recall, on the two-dimensional screen we have limits on each of the axes: points beyond the coordinates -1 and 1 on any of the three axes are not drawn. When converting the 3D scene to the 2D screen, the frustum matrix arranges things so that objects outside the frustum zone end up beyond the -1..1 range after the conversion and, accordingly, are not drawn.

Let’s go from theory to practice and create a frustum matrix

Let's rewrite the vertex shader vertex_shader.glsl:

attribute vec4 a_Position;
uniform mat4 u_Matrix;
 
void main()
{
    gl_Position = u_Matrix * a_Position;
    gl_PointSize = 5.0;
}

Previously, we just passed the coordinates of the vertex (a_Position) to the system (gl_Position). Now we will transform them with a matrix that will transform the 3D scene into a 2D screen. To do this, we add the u_Matrix matrix to the shader as a uniform parameter. And we will multiply this matrix by a_Position.

In the method bindData we add code to access the matrix in the shader

private void bindData(){
    // coordinates
    …
 
    // color
    …
 
    // matrix
    uMatrixLocation = glGetUniformLocation(programId, "u_Matrix");
}

Nothing new for us here. We use the glGetUniformLocation method and specify the program and the variable name from the shader. The uMatrixLocation variable is already declared in the source code.

It remains to create a matrix and transfer it to the shader.

Let's create a bindMatrix method in the same class:

private void bindMatrix(int width, int height) {
    float left = -1.0f;
    float right = 1.0f;
    float bottom = -1.0f;
    float top = 1.0f;
    float near = 1.0f;
    float far = 8.0f;
 
    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
    glUniformMatrix4fv(uMatrixLocation, 1, false, mProjectionMatrix, 0);
}

Here we specify all the parameters of the frustum. near and far are the distances from the camera to the near and far boundaries. The variables left, right, bottom and top are the coordinates of the sides of the near boundary. With left = -1 and right = 1 it is easy to see that the width of the near boundary in our three-dimensional scene is 2; its height is likewise 2, because bottom = -1 and top = 1.

It is important to understand that it does not matter what width and height you give the near boundary: the matrix will in the end convert everything to the range from -1 to +1 on the X and Y axes anyway. It is just that if you make the near width equal to 100, the coordinates of your objects' vertices should be of roughly the same order, and if you make the width 2 (i.e. from -1 to 1, as in our example), the vertex coordinates should lie in the range from -1 to 1. Use whichever you find more convenient.

We pass all these parameters to the Matrix.frustumM method. In addition, we pass it the mProjectionMatrix matrix, into which the result will be written. The second parameter of the method is the offset in the matrix array at which to start writing the data; we specify 0.

The glUniformMatrix4fv method passes the matrix to the shader. To do this we specify the location of the matrix variable, uMatrixLocation, and the matrix data, mProjectionMatrix. For the other parameters we use default values; they are of no interest to us yet.

The bindMatrix method receives width and height as input. We do not use them yet; we will get to them a little later.

We will call the bindMatrix method in onSurfaceChanged and pass surface sizes there.

@Override
public void onSurfaceChanged(GL10 arg0, int width, int height) {
    glViewport(0, 0, width, height);
    bindMatrix(width, height);
}

run the program

Now that we have used the matrix, the z-coordinates we have specified for the vertices have started to work. And we got the points in perspective.

Let’s look at a more interesting example. Instead of points, we will draw triangles.

rewrite prepareData:

private void prepareData() {
    float z1 = -1.0f, z2 = -1.0f;
 
    float[] vertices = {
            // first triangle
            -0.7f, -0.5f, z1,
            0.3f, -0.5f, z1,
            -0.2f, 0.3f, z1,
 
            // second triangle
            -0.3f, -0.4f, z2,
            0.7f, -0.4f, z2,
            0.2f, 0.4f, z2,
    };
 
    vertexData = ByteBuffer
            .allocateDirect(vertices.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    vertexData.put(vertices);
}

We will draw two identical triangles in size. The z-coordinates of the vertices are plotted in the variables z1 and z2 for convenience. By changing the values ​​of these variables, we will change the distance to the triangles in the 3D scene. z1 is the distance to the first triangle and z2 is to the second.

rewrite the method onDrawFrame:

@Override
public void onDrawFrame(GL10 arg0) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 
    // green triangle
    glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f);
    glDrawArrays(GL_TRIANGLES, 0, 3);
 
 
    // blue triangle
    glUniform4f(uColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);
    glDrawArrays(GL_TRIANGLES, 3, 3);
}

In the glClear call we added the GL_DEPTH_BUFFER_BIT flag. It is needed to clear the depth buffer.

Next, for each triangle, we specify a color and ask the system to draw it.

Also, a line must be added at the beginning of the onSurfaceCreated method.

glEnable(GL_DEPTH_TEST);

This line enables the depth buffer. It lets the system determine which of the points is closer to us and display that one. We already discussed this in detail a little earlier.

run

We see two identical triangles.

Now changing the parameters z1 and z2 we can change the distance to the triangles.

Let's move the second triangle farther away

To do this, change the z-coordinate value in the method prepareData

float z1 = -1.0f, z2 = -3.0f;

result

Now let's return the second one to its place and move the first one away

float z1 = -2.0f, z2 = -1.0f;

move both farther away

float z1 = -3.0f, z2 = -3.0f;

The z-coordinate works as it should.

When we specified the distances to the near and far boundaries, we measured them from the point (0,0,0): that is where the camera is by default. In addition, the camera is directed along the Z axis in the negative direction (i.e. the z value decreases with distance from the camera). We will learn how to change the position and direction of the camera in one of the following lessons; for now we take it as a given.

Based on this, and recalling that we set near = 1 and far = 8, we can conclude that the camera sees all objects whose z coordinates lie between -1 and -8.

Let’s try to set the following values

float z1 = -0.5f, z2 = -9.0f;

run

Both triangles are now beyond the frustum and cannot be seen by the camera.

There is one small flaw in our matrix that we need to correct. Let’s see what the bug is about.

Set the parameters z1 and z2

float z1 = -1.0f, z2 = -1.0f;

let’s launch the application

let’s rotate the screen

You can see that the picture is not the same. Let's figure out why. The 3D objects inside the frustum are first projected onto a 2D image at the near boundary, and then this image is stretched onto the real screen of the device. Our near boundary is square, but the device screen is not square: in portrait orientation its height is greater than its width, and in landscape the width is greater than the height. So a square image gets stretched onto a rectangular screen, and we see a distorted picture. Fixing this is easy: we just need to give the near boundary the same aspect ratio as the screen.

That is, if the screen is in portrait mode, say 480 * 800, we divide both values by the smaller one, i.e. by 480, and get 1 * 1.66. That is the screen's aspect ratio, and we use these values for the dimensions of the near boundary: in bindMatrix we set left = -1, right = 1, top = 1.66, bottom = -1.66. As a result, the aspect ratio of the near boundary exactly matches the aspect ratio of the screen, and the final picture is stretched onto the screen evenly, without distortion.

Accordingly, when the screen is rotated to landscape orientation, the resolution becomes 800 * 480, the aspect ratio is 1.66 * 1, and in bindMatrix we set left = -1.66, right = 1.66, top = 1, bottom = -1.

rewrite bindMatrix

private void bindMatrix(int width, int height) {
    float ratio = 1.0f;
    float left = -1.0f;
    float right = 1.0f;
    float bottom = -1.0f;
    float top = 1.0f;
    float near = 1.0f;
    float far = 8.0f;
    if (width > height) {
        ratio = (float) width / height;
        left *= ratio;
        right *= ratio;
    } else {
        ratio = (float) height / width;
        bottom *= ratio;
        top *= ratio;
    }
     
    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
    glUniformMatrix4fv(uMatrixLocation, 1, false, mProjectionMatrix, 0);
}

We use the input width and height parameters to determine the aspect ratio and to determine the screen orientation. And depending on the orientation, we set the height and width of the near-border in proportion to the size of the screen.

run

rotate the screen

Now the result is the same

If for some reason you do not need perspective and full 3D, you can use ortho mode instead of perspective. The difference is that the matrix describes not a pyramid but a parallelepiped (a box).

In this mode an object always stays the same size, no matter how far it is from the camera. If you are creating a 2D game with OpenGL, this mode is a perfect fit.

To use it, you just need to build the matrix in bindMatrix with the orthoM method instead of frustumM; a sketch follows below.
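
For example, a minimal sketch of such a bindMatrix (identical to the perspective version above except for the matrix-building call; orthoM takes the same parameter list as frustumM):

private void bindMatrix(int width, int height) {
    float ratio = 1.0f;
    float left = -1.0f;
    float right = 1.0f;
    float bottom = -1.0f;
    float top = 1.0f;
    float near = 1.0f;
    float far = 8.0f;
    if (width > height) {
        ratio = (float) width / height;
        left *= ratio;
        right *= ratio;
    } else {
        ratio = (float) height / width;
        bottom *= ratio;
        top *= ratio;
    }

    // orthographic projection instead of perspective
    Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
    glUniformMatrix4fv(uMatrixLocation, 1, false, mProjectionMatrix, 0);
}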

Try changing the z-coordinates of the triangles yourself and make sure that the perspective no longer works.

I noticed that in ortho mode, triangles are not displayed when placed near the near-border level. That is, z = -1 in our example. I cannot yet explain why. In perspective mode, there is no such problem.

It was a difficult subject and it was quite normal that understanding would not come immediately. Just re-read this lesson periodically, and gradually everything will fall into place.

I also highly recommend downloading a demo from this page. Look for the file matrixProjection.zip there, download the archive, unpack it and run the exe file in the bin folder.

There you can change the projection type (perspective or ortho) and specify the matrix parameters yourself. We have just covered all of this, so it is a great place to practice and see it clearly.





Lesson 171. OpenGL. Color.

Lesson 171. OpenGL. Color.


In this lesson:

– convey color for the vertices
– use varying variable

In the last lesson we learned how to draw graphic primitives. Now let’s learn to use different colors.

Let me remind you that we set the color as follows:

glUniform4f(uColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);

Here uColorLocation is a variable that holds the location of the u_Color variable responsible for color in the fragment shader (see Lesson 169).

The glUniform4f method call has been moved from the bindData method into onDrawFrame.

We open the project and the lesson171_colors module in it, and look at the OpenGLRenderer class. It is similar to the same class from the previous lesson's example: it draws the same 4 triangles, 2 lines and three points, but now it will do so in different colors.

The vertices of all primitives are specified in an array

float[] vertices = {
        // triangle 1
        -0.9f, 0.8f,
        -0.9f, 0.2f,
        -0.5f, 0.8f,
 
        // triangle 2
        -0.6f, 0.2f,
        -0.2f, 0.2f,
        -0.2f, 0.8f,
 
        // triangle 3
        0.1f, 0.8f,
        0.1f, 0.2f,
        0.5f, 0.8f,
 
        // triangle 4
        0.1f, 0.2f,
        0.5f, 0.2f,
        0.5f, 0.8f,
 
        // line 1
        -0.7f, -0.1f,
        0.7f, -0.1f,
 
        // line 2
        -0.6f, -0.2f,
        0.6f, -0.2f,
 
        // point 1
        -0.5f, -0.3f,
 
        // point 2
        0.0f, -0.3f,
 
        // point 3
        0.5f, -0.3f,
};

And in the method onDrawFrame there are changes:

@Override
public void onDrawFrame(GL10 arg0) {
    glClear(GL_COLOR_BUFFER_BIT);
    glLineWidth(5);
 
    // blue triangles
    glUniform4f(uColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);
    glDrawArrays(GL_TRIANGLES, 0, 12);
 
    // green lines
    glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f);
    glDrawArrays(GL_LINES, 12, 4);
 
    // red points
    glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f);
    glDrawArrays(GL_POINTS, 16, 3);
}

Before each glDrawArrays call there is a glUniform4f call, which sets the color. That is, the triangles will be blue, the lines green and the points red.

run

But what if we want to draw one of the lines in yellow, for example? Then we simply split the glDrawArrays call that draws the two lines into two calls, each of which draws one line, and before each call we set the needed color.

rewrite onDrawFrame:

public void onDrawFrame(GL10 arg0) {
    glClear(GL_COLOR_BUFFER_BIT);
    glLineWidth(5);
 
    // blue triangles
    glUniform4f(uColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);
    glDrawArrays(GL_TRIANGLES, 0, 12);
 
    // green line
    glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f);
    glDrawArrays(GL_LINES, 12, 2);
 
    // yellow line
    glUniform4f(uColorLocation, 1.0f, 1.0f, 0.0f, 1.0f);
    glDrawArrays(GL_LINES, 14, 2);
 
    // red points
    glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f);
    glDrawArrays(GL_POINTS, 16, 3);
}

We split the single line-drawing call into two and set the needed color before each one.

Notice the parameters of these two new glDrawArrays calls. The first line uses two vertices starting at index 12, and the second also uses 2 vertices, but starting at index 14. Before the split, a single call took 4 vertices starting at index 12.

run

This way of setting color is quite simple. There is a more interesting one: we can set a color for each vertex of a primitive, and while drawing, the system itself will interpolate the vertex colors across the entire surface of the primitive.

For example, we draw a line using two vertices, the first green and the second red. The drawn line will be a gradient: green on the side of the first vertex, gradually turning red as it approaches the second vertex. That is, the system itself calculates the colors of all the intermediate pixels between the vertices (this is called interpolation).
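
For instance, the pixel exactly halfway along such a line receives the color (0.5, 0.5, 0.0), i.e. an equal mix of the green (0, 1, 0) and red (1, 0, 0) endpoints.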

Let's rewrite the shaders, starting with the vertex shader vertex_shader.glsl:

attribute vec4 a_Position;
attribute vec4 a_Color;
 
varying vec4 v_Color;
 
void main()
{
    gl_Position = a_Position;
    gl_PointSize = 5.0;
 
    v_Color = a_Color;
}

We have added the a_Color attribute. We will pass a color value for each vertex, just as we pass the vertex coordinates to a_Position.

We have also added the v_Color variable. Note the word varying. We already know attribute variables, through which we pass per-vertex data into the vertex shader. There are also uniform variables, through which we pass a single value for all vertices in the vertex shader and all points in the fragment shader. Now we come to the third (and last) type of shader variable – varying. Such variables are used to exchange data between the vertex and fragment shaders: we fill the varying variable ourselves in the vertex shader, then the system interpolates these values and hands the result to us in the fragment shader.

In our example, in the vertex shader we put a_Color into v_Color. That is, in the green-red line example, the first vertex puts the green value into v_Color and the second puts red. The fragment shader is then executed for every point between these vertices, and in it we receive the interpolated value of v_Color: it is green for points near the first vertex and gradually turns red as the points get closer to the second vertex. This lets us draw a green-to-red line.

All calculations of the varying values are performed by the system. All we need to do is set the values in the vertex shader and read them in the fragment shader.

Let’s rewrite the Fragment Shader fragment_shader.glsl:

precision mediump float;
 
varying vec4 v_Color;
 
void main()
{
    gl_FragColor = v_Color;
}

We add the varying variable v_Color. Its value has already been computed by the system from the vertex shader data; all we have to do is write it into gl_FragColor.

Now we need to change the application code so that it passes not only the vertex coordinates but also the color to the shader.

change OpenGLRenderer.java:

delete the variable

private int uColorLocation;

and add it instead

private int aColorLocation;

In prepareData we set the vertices:

float[] vertices = {
        // line 1
        -0.4f, 0.6f, 1.0f, 0.0f, 0.0f,
        0.4f, 0.6f, 0.0f, 1.0f, 0.0f,
 
        // line 2
        0.6f, 0.4f, 0.0f, 0.0f, 1.0f,
        0.6f, -0.4f, 1.0f, 1.0f, 1.0f,
 
        // line 3
        0.4f, -0.6f, 1.0f, 1.0f, 0.0f,
        -0.4f, -0.6f, 1.0f, 0.0f, 1.0f,
};

We will draw three lines, ie we set 6 vertices. But now for each vertex there are not only two XY coordinates, but also three RGB color components. Total 2 + 3 = 5 values ​​for each vertex.

Let’s rewrite bindData:

private void bindData() {
    // coordinates
    aPositionLocation = glGetAttribLocation(programId, "a_Position");
    vertexData.position(0);
    glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 20, vertexData);
    glEnableVertexAttribArray(aPositionLocation);
 
    // color
    aColorLocation = glGetAttribLocation(programId, "a_Color");
    vertexData.position(2);
    glVertexAttribPointer(aColorLocation, 3, GL_FLOAT, false, 20, vertexData);
    glEnableVertexAttribArray(aColorLocation);
}

First we pass the coordinate data. Almost nothing has changed here, except that in the glVertexAttribPointer method we now pass 20 as the fifth parameter; previously we passed 0 here.

This fifth parameter is called the stride. It must contain the number of bytes our array holds for each vertex. We have 5 float values per vertex: 2 coordinates (XY) and three color components (RGB). 5 float values are 5 * 4 bytes = 20 bytes, and that is the value we pass as the stride.
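
To avoid the magic number, the stride can be computed from named constants; a small sketch (the constant names are illustrative, not taken from the lesson's source):

private static final int POSITION_COMPONENT_COUNT = 2;  // x, y
private static final int COLOR_COMPONENT_COUNT = 3;     // r, g, b
private static final int BYTES_PER_FLOAT = 4;
private static final int STRIDE =
        (POSITION_COMPONENT_COUNT + COLOR_COMPONENT_COUNT) * BYTES_PER_FLOAT;  // 20 bytes

// usage:
// glVertexAttribPointer(aPositionLocation, POSITION_COMPONENT_COUNT, GL_FLOAT, false, STRIDE, vertexData);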

That is, if we look at these two lines

vertexData.position(0);
glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 20, vertexData);

then the following scheme will be obtained:

1) the position in the vertexData array is set to 0, that is, to the first element
2) the system takes 2 float values ​​(ie vertex coordinates) from vertexData and passes them to aPositionLocation (corresponding to the a_Position attribute in vertex shaders)
3) the position is moved by 20 bytes, that is, to the coordinates of the next vertex.

Items 2 and 3 are executed as many times as the vertices need to be drawn. An offset of 20 bytes each time will set the position in the array to the coordinates of the next vertex.

Let’s look further. Everything is similar here. At aColorLocation, we get the location of the a_Color attribute and execute the code

vertexData.position(2);
glVertexAttribPointer(aColorLocation, 3, GL_FLOAT, false, 20, vertexData);

1) the position in the vertexData array is set to 2, that is, to the third element (where the first vertex color data begins)
2) the system takes 3 float values ​​(ie the RGB components of the vertex color) from vertexData and passes them to aColorLocation (which corresponds to the a_Color attribute in vertex shaders)
3) the position is moved by 20 bytes, that is, to the color of the next vertex

Items 2 and 3 are executed as many times as the vertices need to be drawn. A 20 byte offset each time will set the array position to the next vertex color data.

It remains to rewrite the onDrawFrame method:

@Override
public void onDrawFrame(GL10 arg0) {
    glLineWidth(5);
    glDrawArrays(GL_LINES, 0, 6);
}

As you can see, here we simply ask the system to draw lines using 6 vertices and say nothing about colors or coordinates. The system will run the vertex shader 6 times and, thanks to the glVertexAttribPointer calls (which we just discussed in detail), will figure out which data from the array to use as vertex coordinates (it passes them in a_Position) and which as color data (a_Color).

run the program

As a result, we see lines whose color changes from one vertex to the other. This is because we passed the color data to the vertex shader and used a varying variable.

Let’s rewrite onDrawFrame:

@Override
public void onDrawFrame(GL10 arg0) {
    glLineWidth(5);
    glDrawArrays(GL_LINE_LOOP, 0, 6);
}

We use a line drawing mode that connects all vertices with each other.

run

Finally, we draw a triangle with vertices of different colors and see how it interpolates these colors to its entire surface.

Rewrite the vertices array to prepareData:

float[] vertices = {
        -0.5f, -0.2f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.2f, 0.0f, 1.0f, 0.0f,
        0.5f, -0.2f, 0.0f, 0.0f, 1.0f,
};

We set three vertices, each of which is filled with two coordinates and three color components.

In the onDrawFrame method we ask the system to draw a triangle.

@Override
public void onDrawFrame(GL10 arg0) {
    glLineWidth(5);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

run

and get a gradient fill

I hope that after this lesson, a general picture of the shader mechanism began to emerge. It can be divided into points:

1) The glDrawArrays method, in which we specify what shapes to draw and how many vertices to use. However many vertices we specify here, that is how many times the vertex shader will run.

2) The vertex shader has attributes into which the vertices must be passed. The glVertexAttribPointer method is responsible for this; in it we explain to the system in detail which array to take the data from and by what rules (offset, data type, number of values per vertex).

3) The vertex shader runs; in it, for now, we simply pass the received data on for drawing via gl_Position. The system draws the primitives' vertices at these coordinates. Also, in the vertex shader we use a varying variable so that the color is interpolated and passed to the fragment shader.

4) The fragment shader is used to draw the contents of a primitive, i.e. it is called for every point of the primitive. In our case it receives the interpolated color and passes it on via gl_FragColor; this is the color we see on the screen.

I recommend playing around with an array of vertices and trying to put your coordinates and colors there, and use different shapes in the glDrawArrays method. This will help you better understand all these mechanisms.





Lesson 170. OpenGL. graphic primitives

Lesson 170. OpenGL. graphic primitives


In this lesson:

– we draw graphic primitives

The source code for the lessons is available on GitHub. Download the project; we will use the lesson170_primitives module from it.

In the last lesson we learned how to pass vertex data to the shaders and get a triangle. To make this mechanism clearer, let's develop the topic and work through several examples of passing vertices and building different graphic primitives (points, lines and triangles) from them.

triangle

Right now, our app draws one triangle. If we look at the OpenGLRenderer class, in the prepareData method we will see in it a list of vertices:

float[] vertices = { -0.5f, -0.2f, 0.0f, 0.2f, 0.5f, -0.2f, }; 

Each pair of values ​​is the coordinates (x, y) of one vertex. Three pairs = three vertices = triangle.

Next, in the onDrawFrame method, we use the glDrawArrays method to draw a triangle.

    @Override public void onDrawFrame(GL10 arg0) { 
        glClear(GL_COLOR_BUFFER_BIT); 
        glDrawArrays(GL_TRIANGLES, 0, 3); 
    }

GL_TRIANGLES is the type of primitive to draw, a triangle in our case

0 – vertices must be taken from the array starting from position 0, ie from the first

3 – means that you need to use three vertices for drawing

run the program

Now let’s try to draw 4 triangles. For this we need more vertices. 4 triangles, each with three vertices, so we need 3 * 4 = 12 vertices.

Rewrite the vertices array in the prepareData method

        float[] vertices = {
                // треугольник 1
                -0.9f, 0.8f, -0.9f, 0.2f, -0.5f, 0.8f,
        
                // треугольник 2
                -0.6f, 0.2f, -0.2f, 0.2f, -0.2f, 0.8f,
        
                // треугольник 3
                0.1f, 0.8f, 0.1f, 0.2f, 0.5f, 0.8f,
        
                // треугольник 4
                0.1f, 0.2f, 0.5f, 0.2f, 0.5f, 0.8f,
        };

We now have 12 vertices of which we can construct 4 triangles.

run

But we only see one triangle instead of 4. We forgot to tell the system that we had to draw triangles using 12 vertices. That is, in the glDrawArrays method, we are still passing the value 3, which means that the system will take a value from the array to draw only three vertices.

rewrite onDrawFrame

    @Override public void onDrawFrame(GL10 arg0) {
        glClear(GL_COLOR_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 12);
    }

In glDrawArrays, we specify 12 instead of 3. Now the system knows it needs to take data for 12 vertices from the array to draw triangles, and we get 4 triangles.

run

We see two triangles and a rectangle. This rectangle is actually composed of two triangles that are close to each other.

Look at the vertex array and note that triangles 3 and 4 have common vertices (0.1f, 0.2f) and (0.5f, 0.8f). They are joined along this side, forming a rectangle.

We ended up with 4 triangles, two of which look like one rectangle.

types of triangles

To draw a triangle, we pass type GL_TRIANGLES to the glDrawArrays method. There are two other types of triangles: GL_TRIANGLE_STRIP and GL_TRIANGLE_FAN.

What is the difference between them? see the picture

GL_TRIANGLES – every three transmitted vertices form a triangle. That is

v0, v1, v2 is the first triangle

v3, v4, v5 is the second triangle

GL_TRIANGLE_STRIP – each subsequent triangle uses the last two vertices of the previous one

v0, v1, v2 is the first triangle

v1, v2, v3 is the second triangle

v2, v3, v4 is the third triangle

v3, v4, v5 is the fourth triangle

GL_TRIANGLE_FAN – each subsequent triangle uses the last vertex of the previous one and the very first vertex (see the short sketch after this list)

v0, v1, v2 is the first triangle

v0, v2, v3 is the second triangle

v0, v3, v4 is the third triangle
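A quick way to compare the three modes is to count vertices. The arithmetic below is just an illustration (the variable names are mine, not from the lesson code):

    // Vertex counts needed to draw t triangles in each mode:
    // GL_TRIANGLES      - every triangle brings its own 3 vertices
    // GL_TRIANGLE_STRIP - after the first triangle, each new one adds 1 vertex
    // GL_TRIANGLE_FAN   - same as the strip: first triangle takes 3, each next adds 1
    int t = 4;                          // number of triangles we want
    int verticesWithTriangles = 3 * t;  // 12 vertices
    int verticesWithStrip = t + 2;      // 6 vertices
    int verticesWithFan = t + 2;        // 6 vertices

The savings of the strip and fan modes come from reusing vertices of the previous triangle.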

Consider these types in the examples

Set 6 vertices in prepareData:

float[] vertices = { 0.1f, 0.8f, 0.1f, 0.2f, 0.5f, 0.8f, 0.1f, 0.2f, 0.5f, 0.2f, 0.5f, 0.8f, };

type GL_TRIANGLES and 6 vertices in glDrawArrays:

glDrawArrays(GL_TRIANGLES, 0, 6);

run

We get a rectangle composed of two triangles.

Now we draw the same rectangle, but a little differently

We set 4 vertices

float[] vertices = { 0.1f, 0.8f, 0.1f, 0.2f, 0.5f, 0.8f, 0.5f, 0.2f, };

type GL_TRIANGLE_STRIP and 4 vertices

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

If your development environment complains about the GL_TRIANGLE_STRIP constant, add a static import at the beginning of the class:

import static android.opengl.GLES20.GL_TRIANGLE_STRIP;

And for all the following constants, do the same.

run

The result is the same, but this time we used 4 vertices instead of 6. The GL_TRIANGLE_STRIP type helped us save a little. In this example that is not critical, but in general, the fewer vertices we have to pass, the faster the program runs.

Consider the latter type. We set 8 vertices

        float[] vertices = { 
                0.0f, 0.0f, 
                -0.4f, 0.4f, 
                0.4f, 0.4f, 
                0.8f, 0.0f, 
                0.4f, -0.4f, 
                -0.4f, -0.4f, 
                -0.8f, 0.0f, 
                -0.4f, 0.4f, 
        };

type GL_TRIANGLE_FAN and 8 vertices

glDrawArrays(GL_TRIANGLE_FAN, 0, 8);

run

We get a hexagon. We specified a center vertex followed by the surrounding vertices, and in GL_TRIANGLE_FAN mode the system built a hexagon out of triangles that all share that center.
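If you want to experiment further, a fan like this can be generated in code instead of typing the coordinates by hand. The helper below is only a sketch (the method name and parameters are mine, not part of the lesson project):

    // Builds an (x, y) vertex array for GL_TRIANGLE_FAN:
    // the center first, then n + 1 points on a circle
    // (the first outer point is repeated at the end to close the fan).
    private float[] createFanVertices(int n, float radius) {
        float[] vertices = new float[(n + 2) * 2];
        vertices[0] = 0f; // center x
        vertices[1] = 0f; // center y
        for (int i = 0; i <= n; i++) {
            double angle = 2 * Math.PI * i / n;
            vertices[2 + i * 2] = (float) (radius * Math.cos(angle));
            vertices[3 + i * 2] = (float) (radius * Math.sin(angle));
        }
        return vertices;
    }

With n = 6 this gives 8 vertices for a regular hexagon, drawn with glDrawArrays(GL_TRIANGLE_FAN, 0, 8), similar to the example above.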

line

Let's move on to lines. To draw a line we need to specify two vertices.

We set them in an array:

float[] vertices = { -0.9f, -0.9f, 0.9f, 0.9f, };

And rewrite onDrawFrame:

    @Override 
    public void onDrawFrame(GL10 arg0) { 
        glClear(GL_COLOR_BUFFER_BIT); 
        glDrawArrays(GL_LINES, 0, 2); 
    }

We use the GL_LINES constant and specify that we need to use two vertices.

run

One line is clear, let’s try to draw three lines.

we indicate the vertices

        float[] vertices = { 
                // line 1 
                -0.9f, -0.9f, 0.9f, 0.9f, 
                
                // line 2 
                -0.5f, 0.0f, 0.5f, 0.0f, 
                
                // line 3 
                0.0f, 0.7f, 0.0f, -0.7f, 
        };

Remember to specify in glDrawArrays that 6 vertices must be used, and set the line width to 5 with glLineWidth.

    @Override 
    public void onDrawFrame(GL10 arg0) { 
        glClear(GL_COLOR_BUFFER_BIT); 
        glLineWidth(5); 
        glDrawArrays(GL_LINES, 0, 6); 
    }

run

Three lines are drawn

types of lines

There are three line drawing modes. Let's look at them with examples.

We have already used the GL_LINES type: it simply takes pairs of vertices and draws lines between them. That is, if we have vertices (v0, v1, v2, v3, v4, v5), we get three lines: (v0, v1), (v2, v3) and (v4, v5).

we set the vertices

        float[] vertices = {
                -0.4f, 0.6f, 
                0.4f, 0.6f, 
                0.6f, 0.4f,
                0.6f, -0.4f, 
                0.4f, -0.6f, 
                -0.4f, -0.6f,
        };

Specify the type GL_LINES, 6 vertices

glDrawArrays(GL_LINES, 0, 6);

run

Each pair of vertices formed a line.

The GL_LINE_STRIP type draws lines not in pairs but sequentially between all vertices. That is, if we have vertices (v0, v1, v2, v3, v4, v5), we get five lines: (v0, v1), (v1, v2), (v2, v3), (v3, v4) and (v4, v5).

We will use the same vertices and change the type to GL_LINE_STRIP

glDrawArrays(GL_LINE_STRIP, 0, 6); 

run

The lines are drawn in sequence between all the vertices

The GL_LINE_LOOP type is similar to GL_LINE_STRIP, except that it also draws a line between the first and last vertices.

Change type to GL_LINE_LOOP

glDrawArrays(GL_LINE_LOOP, 0, 6); 

run

The result is the same as with GL_LINE_STRIP, plus a line between the first and last vertices.

Point, point

It remains to look at points. There are no different types here, just GL_POINTS.

Specify it in glDrawArrays:

    @Override 
    public void onDrawFrame(GL10 arg0) { 
        glClear(GL_COLOR_BUFFER_BIT); 
        glDrawArrays(GL_POINTS, 0, 6); 
    }

The vertices will remain the same.

The point size can be set in the vertex shader using the gl_PointSize variable.

vertex_shader.glsl

    attribute vec4 a_Position; 
    
    void main() { 
        gl_Position = a_Position; 
        gl_PointSize = 5.0; 
    }

run

6 points are drawn

And finally, let’s draw several different primitives at once, for example: 4 triangles, 2 lines, 3 points

4 triangles are 4 * 3 = 12 vertices

2 lines are 2 * 2 = 4 vertices

3 points are 3 vertices

In total we need to set 12 + 4 + 3 = 19 vertices

        float[] vertices = { 
                // triangle 1 
                -0.9f, 0.8f, -0.9f, 0.2f, -0.5f, 0.8f, 
                
                // triangle 2 
                -0.6f, 0.2f, -0.2f, 0.2f, -0.2f, 0.8f, 
                
                // triangle 3 
                0.1f, 0.8f, 0.1f, 0.2f, 0.5f, 0.8f, 
                
                // triangle 4 
                0.1f, 0.2f, 0.5f, 0.2f, 0.5f, 0.8f, 
                
                // line 1 
                -0.7f, -0.1f, 0.7f, -0.1f, 
                
                // line 2 
                -0.6f, -0.2f, 0.6f, -0.2f, 
                
                // point 1 
                -0.5f, -0.3f, 
                
                // point 2 
                0.0f, -0.3f, 
                
                // point 3 
                0.5f, -0.3f, 
        }; 

rewrite onDrawFrame

    @Override 
    public void onDrawFrame(GL10 arg0) { 
        glClear(GL_COLOR_BUFFER_BIT); 
        glLineWidth(5); 
        glDrawArrays(GL_TRIANGLES, 0, 12); 
        glDrawArrays(GL_LINES, 12, 4); 
        glDrawArrays(GL_POINTS, 16, 3); 
    }

We call the glDrawArrays method three times.

The first call tells the system that you need to draw triangles and use 12 vertices, starting with the first one in the array (index 0).

The second call tells the system to draw lines and use 4 vertices, starting with the thirteenth in the array (index 12). We start from the thirteenth because we used the first 12 vertices in the array to set the triangles, and the vertices of the lines go from the thirteenth.

The third call tells the system that you need to draw points and use 3 vertices, starting with the seventeenth in the array (index 16). We start with the seventeenth because we used the first 12 vertices in the array to set the triangles, the next 4 vertices we used to set the lines, and the vertices of the dots go from the seventeenth.
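To make these offsets less error-prone when the array grows, the first parameter can be derived from the vertex counts instead of being hard-coded. A small sketch (the variable names are mine, not from the lesson code):

    // Offsets computed from the vertex counts of the preceding groups
    int triangleVertexCount = 12;
    int lineVertexCount = 4;
    int pointVertexCount = 3;

    int lineFirst = triangleVertexCount;                    // 12
    int pointFirst = triangleVertexCount + lineVertexCount; // 16

    glDrawArrays(GL_TRIANGLES, 0, triangleVertexCount);
    glDrawArrays(GL_LINES, lineFirst, lineVertexCount);
    glDrawArrays(GL_POINTS, pointFirst, pointVertexCount);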

run

The system drew 4 triangles (2 of them form a rectangle), 2 lines and three points.

Hopefully the relationship between the vertex array and the glDrawArrays method has become clearer: in glDrawArrays we specify which figures to draw and how many vertices to use, and the vertex data is taken from the vertex array. It may still be unclear, for example, how the system knows that exactly 2 values should be taken per vertex. We will look into this in detail in the next lesson, and the algorithm for passing data to shaders will become fully clear.





Lesson 169. OpenGL. shaders



In this lesson:

– we create shaders
– draw a triangle

The source code for the lessons is available on GitHub.

In the last lesson we created a simple project in which we just painted the surface green. We worked at the very top level and did not even touch the core OpenGL mechanism, i.e. shaders.

Shaders are programs written in GLSL. In 3D graphics, the entire image is built from graphic primitives: points, lines, triangles. To draw a primitive, the GPU must know the coordinates of its vertices and the fill color for each of its points. This is exactly what the shaders give it. Accordingly, there are two types of shaders:
– vertex shaders, which operate on the vertices of graphic primitives;
– fragment shaders, responsible for the color of each point of a graphic primitive.

That is, if we draw, for example, a triangle, then the final coordinates of its vertices will be defined in the vertex shaders. This shader will be called once for each vertex.

And the color of each point of the triangle will be determined by the fragment shader. This shader will be called for each point of the triangle.

We have to create these shaders and pass data to them from our application. In this lesson we will create a vertex and a fragment shader and draw a triangle with them.

To create an application, you can take a project from the past lesson or create a copy of it.

In the res folder, we create a raw folder and create a file in it: vertex_shader.glsl:

    attribute vec4 a_Position;

    void main() {
        gl_Position = a_Position;
    }

This is the vertex shader. The syntax is similar to C or Java. Let's go through what is in this shader.

The main method is the main shader method that the system will call.

The attribute a_Position of type vec4 is a vector of 4 float values. This attribute can store the three-dimensional coordinates of a vertex of a graphic primitive. But in addition to the three coordinates (x, y, z), we will need one more value later, so we use vec4 rather than vec3 to pass vertex data. We will pass data into this attribute from our application.

Since we are about to draw a triangle, we will pass data for three vertices. The shader will be executed for each of them, and a_Position will hold the data of the current vertex.

The variable gl_Position is a special variable into which we must put the vertex position. That is, this variable is the result of the vertex shader's work. The GPU will then use this data to determine the positions of the vertices.

In our shader, we simply pass the values from a_Position to gl_Position. That is, this shader is quite simple: it does not change the input data (a_Position), it just passes it on (gl_Position).

That covers the vertex shader. Now, in res/raw, we create the fragment shader file.

fragment_shader.glsl:

    precision mediump float;
    uniform vec4 u_Color;

    void main() {
        gl_FragColor = u_Color;
    }

The first line sets the precision for float values. There are three modes: lowp, mediump, highp. The names speak for themselves. Of course, the higher the precision, the lower the performance.

Medium precision is enough for working with color, so we use it in our shader. In the vertex shader we did not specify precision: highp is the default there, because vertex calculations require high precision.

The main method is the entry point, just as in the vertex shader.

The variable u_Color will hold the color. It is also of type vec4, which suits the 4 components of an RGBA color. We will put a value into this variable from our application. The word uniform before it means that this value will be the same for all fragments (points) processed by this fragment shader.

The variable gl_FragColor is a special shader variable into which we must put the color value for the current fragment. Recall that the system calls this fragment shader for each point (fragment) of the triangle, and the shader must return (in gl_FragColor) the color value that the system will use to draw that point.

Into gl_FragColor we simply put the value of u_Color. That is, the fragment shader, just like the vertex shader, is very simple and passes the data on without any changes.

The shaders are ready. Now our app has to do a number of things to make these shaders work:

– read shaders from files and compile them
– create a program from shaders
– find parameters in the program and transfer data there

We create classes.

FileUtils.java:

import android.content.Context;
import android.content.res.Resources;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class FileUtils {
    public static String readTextFromRaw(Context context, int resourceId) {
        StringBuilder stringBuilder = new StringBuilder();
        try {
            BufferedReader bufferedReader = null;
            try {
                InputStream inputStream = context.getResources().openRawResource(resourceId);
                bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
                String line;
                while ((line = bufferedReader.readLine()) != null) {
                    stringBuilder.append(line);
                    stringBuilder.append("rn");
                }
            } finally {
                if (bufferedReader != null) {
                    bufferedReader.close();
                }
            }
        } catch (IOException ioex) {
            ioex.printStackTrace();
        } catch (Resources.NotFoundException nfex) {
            nfex.printStackTrace();
        }
        return stringBuilder.toString();
    }
}

This class has a single method, readTextFromRaw, which reads a raw resource by its id and returns its contents as a string. That is, it reads the shader file and returns its content to us in text form.
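A typical call looks like this (the variable name is mine; the resource id refers to the vertex_shader.glsl file created above):

    String vertexShaderSource = FileUtils.readTextFromRaw(context, R.raw.vertex_shader);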

Next class – ShaderUtils.java:

import android.content.Context;

import static android.opengl.GLES20.GL_COMPILE_STATUS;
import static android.opengl.GLES20.GL_LINK_STATUS;
import static android.opengl.GLES20.glAttachShader;
import static android.opengl.GLES20.glCompileShader;
import static android.opengl.GLES20.glCreateProgram;
import static android.opengl.GLES20.glCreateShader;
import static android.opengl.GLES20.glDeleteProgram;
import static android.opengl.GLES20.glDeleteShader;
import static android.opengl.GLES20.glGetProgramiv;
import static android.opengl.GLES20.glGetShaderiv;
import static android.opengl.GLES20.glLinkProgram;
import static android.opengl.GLES20.glShaderSource;

public class ShaderUtils {
    public static int createProgram(int vertexShaderId, int fragmentShaderId) {
        final int programId = glCreateProgram();
        if (programId == 0) {
            return 0;
        }
        glAttachShader(programId, vertexShaderId);
        glAttachShader(programId, fragmentShaderId);
        glLinkProgram(programId);
        final int[] linkStatus = new int[1];
        glGetProgramiv(programId, GL_LINK_STATUS, linkStatus, 0);
        if (linkStatus[0] == 0) {
            glDeleteProgram(programId);
            return 0;
        }
        return programId;
    }

    static int createShader(Context context, int type, int shaderRawId) {
        String shaderText = FileUtils.readTextFromRaw(context, shaderRawId);
        return ShaderUtils.createShader(type, shaderText);
    }

    static int createShader(int type, String shaderText) {
        final int shaderId = glCreateShader(type);
        if (shaderId == 0) {
            return 0;
        }
        glShaderSource(shaderId, shaderText);
        glCompileShader(shaderId);
        final int[] compileStatus = new int[1];
        glGetShaderiv(shaderId, GL_COMPILE_STATUS, compileStatus, 0);
        if (compileStatus[0] == 0) {
            glDeleteShader(shaderId);
            return 0;
        }
        return shaderId;
    }
}

It contains all the methods for compiling shaders and creating a program from them. You do not have to dig deeply into how this class works: we create it once, and it will not change over the next few lessons. There are no vertices, coordinates, colors, or calculations here; we simply put into this class all the logic of preparing shaders for use in our application. So for now you can skim it.

Let's start with the createShader methods.

int createShader(Context context, int type, int shaderRawId)

It accepts a context, the shader type, and a raw resource id. It reads the shader's content (source) into a string and calls the second version of the method:

int createShader(int type, String shaderText)

This method accepts the shader type and its source as a string, and then calls a series of OpenGL methods to create and compile the shader:

glCreateShader – creates an empty shader object and returns its id into the shaderId variable. The shader type is GL_VERTEX_SHADER (vertex) or GL_FRAGMENT_SHADER (fragment). It returns 0 if the shader could not be created for any reason.

glShaderSource – takes the source of the shader from the string and associates it with the shaderId shader.

glCompileShader – compiles shaderId shader

glGetShaderiv – allows you to obtain the compiler status (GL_COMPILE_STATUS) of the shaderId shader. The method will put the status in the compileStatus array, in the element with index 0. If the compilation was successful, the status will be 1 (GL_TRUE), otherwise – 0 (GL_FALSE).

Next we check whether the compilation failed: if compileStatus[0] == 0, we delete the shader object with glDeleteShader and return 0.

If everything is ok then we return shaderId. That is, the shader is ready and we have its id.

The method

int createProgram(int vertexShaderId, int fragmentShaderId)

creates a program. A program is simply a pair of shaders, vertex + fragment. They have to work together, because one is responsible for the vertices and the other for the colors, and neither of them alone gives us the final picture. Therefore they are combined into a program.

The method takes the ids of the vertex and fragment shaders as input.

glCreateProgram – creates an empty program and returns its id into the programId variable. If we get 0 instead of an id, something went wrong, and we return 0 instead of the program id.

Next, with the glAttachShader method, we attach the shaders to the program. That is, we tell the system that the shaders vertexShaderId and fragmentShaderId will be part of programId.

glLinkProgram – links the program from the attached shaders.

glGetProgramiv – lets us check the link status of the program. Everything here is similar to glGetShaderiv for shaders. If something went wrong, we delete the program with glDeleteProgram.

If everything is ok then we return the programId. That is, the program is ready and we have its id.

Class OpenGLRenderer.java. We already created one in the last lesson, but it will change significantly, so here is the full code:

import android.content.Context;
import android.opengl.GLSurfaceView.Renderer;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import static android.opengl.GLES20.GL_COLOR_BUFFER_BIT;
import static android.opengl.GLES20.GL_FLOAT;
import static android.opengl.GLES20.GL_FRAGMENT_SHADER;
import static android.opengl.GLES20.GL_TRIANGLES;
import static android.opengl.GLES20.GL_VERTEX_SHADER;
import static android.opengl.GLES20.glClear;
import static android.opengl.GLES20.glClearColor;
import static android.opengl.GLES20.glDrawArrays;
import static android.opengl.GLES20.glEnableVertexAttribArray;
import static android.opengl.GLES20.glGetAttribLocation;
import static android.opengl.GLES20.glGetUniformLocation;
import static android.opengl.GLES20.glUniform4f;
import static android.opengl.GLES20.glUseProgram;
import static android.opengl.GLES20.glVertexAttribPointer;
import static android.opengl.GLES20.glViewport;

public class OpenGLRenderer implements Renderer {
    private Context context;
    private int programId;
    private FloatBuffer vertexData;
    private int uColorLocation;
    private int aPositionLocation;

    public OpenGLRenderer(Context context) {
        this.context = context;
        prepareData();
    }

    @Override
    public void onSurfaceCreated(GL10 arg0, EGLConfig arg1) {
        glClearColor(0f, 0f, 0f, 1f);
        int vertexShaderId = ShaderUtils.createShader(context, GL_VERTEX_SHADER, R.raw.vertex_shader);
        int fragmentShaderId = ShaderUtils.createShader(context, GL_FRAGMENT_SHADER, R.raw.fragment_shader);
        programId = ShaderUtils.createProgram(vertexShaderId, fragmentShaderId);
        glUseProgram(programId);
        bindData();
    }

    @Override
    public void onSurfaceChanged(GL10 arg0, int width, int height) {
        glViewport(0, 0, width, height);
    }

    private void prepareData() {
        float[] vertices = {-0.5f, -0.2f, 0.0f, 0.2f, 0.5f, -0.2f,};
        vertexData = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
        vertexData.put(vertices);
    }

    private void bindData() {
        uColorLocation = glGetUniformLocation(programId, "u_Color");
        glUniform4f(uColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);
        aPositionLocation = glGetAttribLocation(programId, "a_Position");
        vertexData.position(0);
        glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, vertexData);
        glEnableVertexAttribArray(aPositionLocation);
    }

    @Override
    public void onDrawFrame(GL10 arg0) {
        glClear(GL_COLOR_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }
}

In the constructor we call the prepareData method, which prepares the data to be passed to the shaders.

In onSurfaceCreated we set black as the default clear color. Then, using the ShaderUtils methods, we create the shaders and get their ids: vertexShaderId (vertex) and fragmentShaderId (fragment); we create a program (programId) from them, and with glUseProgram we tell the system that this program must be used to build the image. Then we pass data to the shaders in the bindData method.

The onSurfaceChanged method is unchanged: it sets the drawing area to the entire surface of the surface component.

Now the fun part.

The prepareData method. Here we prepare the data to be passed to the shaders. First we create an array of 6 elements. I split these 6 elements into three lines for clarity, because they are actually the coordinates of three points: (-0.5, -0.2), (0, 0.2) and (0.5, -0.2). These three points are the vertices of the triangle we are about to draw. Why such small values, especially compared to the canvas, where we used coordinates from 0 to 1000? Because OpenGL maps its drawing area (i.e. the screen) to the range [-1, 1] in both width and height.

And we take this into account when drawing the triangle.
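If it helps to relate this to familiar pixel coordinates, a rough conversion from a pixel position to this [-1, 1] range could look like the hypothetical helpers below (they are not part of the lesson code):

    // Converts pixel coordinates to OpenGL's normalized [-1, 1] range.
    // In pixels, (0, 0) is the top-left corner, so the y axis is flipped.
    float normalizedX(float pixelX, float surfaceWidth) {
        return pixelX / surfaceWidth * 2f - 1f;
    }

    float normalizedY(float pixelY, float surfaceHeight) {
        return 1f - pixelY / surfaceHeight * 2f;
    }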

Next we have to convert the float[] array into a FloatBuffer, because that is the form in which data must be passed to shaders.

The allocateDirect method allocates memory for the buffer. Since the array holds vertex values in float format, and a float takes 4 bytes, we need 4 * vertices.length bytes.

The order method sets the byte order. For now it does not matter to us, so we just pass ByteOrder.nativeOrder() – the system default order.

The asFloatBuffer method returns the created byte buffer as a FloatBuffer.

With the put method we write the vertices from the array into the buffer.
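Spelled out, the whole conversion is essentially what prepareData above already does, here with an explicit constant for the float size (the BYTES_PER_FLOAT name is mine):

    // Allocate a native-ordered byte buffer large enough for all floats
    // and copy the vertex values into it.
    final int BYTES_PER_FLOAT = 4;
    FloatBuffer vertexData = ByteBuffer
            .allocateDirect(vertices.length * BYTES_PER_FLOAT) // 6 * 4 = 24 bytes here
            .order(ByteOrder.nativeOrder())                    // native byte order of the device
            .asFloatBuffer();
    vertexData.put(vertices);                                  // copy the float values into the buffer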

The bindData method. Here we pass the data to the shaders.

With glGetUniformLocation we get, into the uColorLocation variable, the location in the shader of our uniform variable u_Color (see the fragment_shader.glsl code).

With glUniform4f we pass to uColorLocation 4 float values, which are the RGBA components of blue (0, 0, 1, 1). This data will end up in the shader's u_Color variable.

Similarly, with glGetAttribLocation we get, into the aPositionLocation variable, the location of the a_Position attribute (see the vertex_shader.glsl code).

With the position method we tell the system that data from vertexData should be read starting from the element with index 0, i.e. from the very beginning.

With glVertexAttribPointer we tell the system that the shader's a_Position attribute must read its data from the vertexData buffer. The parameters of this method specify the reading rules. Let's look at the parameters this method takes:

void glVertexAttribPointer (int indx, int size, int type, boolean normalized, int stride, Buffer ptr)

int indx – indicates the location of the attribute in the shader. Everything is clear here: we use the previously obtained aPositionLocation, which points to a_Position.

int size – indicates to the system how many elements of the vertexData buffer it takes to fill in the a_Position attribute.

int type – pass GL_FLOAT because we have float values

boolean normalized – this flag is not relevant to us yet, we set false

int stride – Used when passing more than one attribute in an array. We only pass data for one attribute so far, so we set it to 0. But in the following lessons, we will still use this parameter.

Buffer ptr – data buffer, ie vertexData.

Let's dwell on the size parameter in more detail. If you remember, a_Position has type vec4, i.e. it consists of 4 float values. Ideally, if we want to draw three points, we would send 3 * 4 = 12 values, so that the vertex shader runs three times and the attribute is filled with all 4 values each time.

That is, if we sent such an array, for example [v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12], and specified size = 4, the system would take every 4 values, write them into the attribute, and run the vertex shader. Since we are going to draw three vertices (we will specify this a little later), the shader would run three times, and a_Position would get the following values on those runs:

First run: v1, v2, v3, v4
Second run: v5, v6, v7, v8
Third run: v9, v10, v11, v12

But we send only 6 values (say v1, v2, v3, v4, v5, v6) and tell the system to take only 2 values to fill the attribute. That is, the system takes every two values, writes them into the attribute, and runs the vertex shader. As a result, the shader gets the following a_Position values:

First run: v1, v2, 0, 1
Second run: v3, v4, 0, 1
Third run: v5, v6, 0, 1

The system takes the first two components of the attribute from the array, and since we gave it nothing for the third and fourth, it uses the defaults. The default values for vec4 are (0, 0, 0, 1).
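For comparison, if we wanted to supply all four components per vertex ourselves, the data and the call could look roughly like this (a sketch for illustration, not what the lesson project does):

    // Each vertex as x, y, z, w - 4 values per vertex, 3 vertices = 12 values
    float[] vertices = {
            -0.5f, -0.2f, 0.0f, 1.0f,
             0.0f,  0.2f, 0.0f, 1.0f,
             0.5f, -0.2f, 0.0f, 1.0f,
    };
    // size = 4: the attribute takes all four components from the buffer
    glVertexAttribPointer(aPositionLocation, 4, GL_FLOAT, false, 0, vertexData);

The triangle would look exactly the same, since z = 0 and w = 1 are just the default values made explicit.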

That's it, we have passed the data to the shader. Finally, we need to enable the aPositionLocation attribute with the glEnableVertexAttribArray method.

In the onDrawFrame method we clear the screen with the default color, and with glDrawArrays we ask the system to draw a triangle for us. This method takes the following parameters:

int mode – here we specify what type of graphic primitives we want to draw. In our case, this is the triangle GL_TRIANGLES.

int first – we specify that vertices should be taken from the vertex array starting from the element with index 0, i.e. from the first element of the array

int count – the number of vertices to be used for drawing. We specify 3 because three vertices are required for a triangle. And in the array we passed data for three vertices.

The MainActivity.java code can be taken from the last lesson; only one line needs a small change:

 glSurfaceView.setRenderer(new OpenGLRenderer(this)); 

i.e. we now pass the Context to the renderer constructor.

The app is ready. All this is just to draw a blue triangle on a black background …)

run the program

Blue triangle is OK

A lot of information came out for one lesson, but it could not really be split into smaller pieces. If something is unclear, do not worry, that is normal. In the following lessons we will draw different graphic primitives, add color, use textures, and this whole system of passing data to shaders will become much clearer.





Lesson 168. OpenGL. Introduction.



In this lesson:

– we create the simplest example from OpenGL

We continue the graphics topic and move on to the next level, called OpenGL ES. ES stands for Embedded Systems, i.e. OpenGL for embedded systems (Android devices in our case).

A couple of years ago I read a book on the subject, worked through its examples, and generally understood everything written there without much difficulty. But that book covered OpenGL ES version 1.0. This version is now outdated; versions 2.0, 3.0 and 3.1 are used instead. These API versions differ significantly from 1.0 and are not compatible with it. So I will have to study the subject almost from scratch myself.

The first lesson will be similar to Lesson 141. We’ll do a minimal set of actions to fill the screen with any color, but this time we’ll do it with OpenGL. By the way, I just want to warn you that OpenGL is not necessarily 3D. First we draw a little 2D, and then add the volume.

Well, as always, little may make sense at first, but as you dive into the subject the overall picture will become clearer.

Let’s start creating our first minimal example. Let’s discuss its key elements.

1) The image needs somewhere to be displayed. For this we will use the GLSurfaceView component (hereinafter, the surface).

2) Someone has to create the image, that is, receive instructions from us on what and how to draw. This is the job of the Renderer (hereinafter, the renderer).

3) Well, you will need to check that the device supports OpenGL 2.0, otherwise nothing will work.

Let's start by creating the renderer class. We will then pass an object of this class to the surface, which will call the renderer's methods as the app runs.

The renderer has three methods:

onSurfaceCreated – called when the surface is created or re-created, i.e. when the program starts or, for example, when the device wakes from sleep while the app is already running. Here we set OpenGL parameters and initialize graphic objects.

onSurfaceChanged – called when the surface is resized. The most common example is a change of screen orientation.

onDrawFrame – called when the surface is ready to display the next frame. In this method we will build the image.

We create the OpenGLRenderer class, implementing the Renderer interface:

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import static android.opengl.GLES20.GL_COLOR_BUFFER_BIT;
import static android.opengl.GLES20.glClear;
import static android.opengl.GLES20.glClearColor;
import static android.opengl.GLES20.glViewport;

import android.opengl.GLSurfaceView.Renderer;

public class OpenGLRenderer implements Renderer {

    @Override
    public void onDrawFrame(GL10 arg0) {
        glClear(GL_COLOR_BUFFER_BIT);
    }

    @Override
    public void onSurfaceChanged(GL10 arg0, int width, int height) {
        glViewport(0, 0, width, height);
    }

    @Override
    public void onSurfaceCreated(GL10 arg0, EGLConfig arg1) {
        glClearColor(0f, 1f, 0f, 1f);
    }

}

In onSurfaceCreated we call the glClearColor method and pass it RGBA components in the range from 0 to 1. This sets the default color that will appear after the surface is fully cleared.

And in the onDrawFrame method we perform that clearing. The glClear method with the GL_COLOR_BUFFER_BIT parameter clears the screen and fills it with the color set by glClearColor.

In the onSurfaceChanged method we use glViewport to specify the area of the surface that will be used to display the image. We pass the lower-left point (0, 0) and the size of the area (width, height), i.e. the image will be displayed across the whole surface.

The renderer is ready. Now we need to place the surface in the Activity and configure it.


import android.app.Activity;
import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.widget.Toast;

public class MainActivity extends Activity {

    private GLSurfaceView glSurfaceView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (!supportES2()) {
            Toast.makeText(this, "OpenGl ES 2.0 is not supported", Toast.LENGTH_LONG).show();
            finish();
            return;
        }
        glSurfaceView = new GLSurfaceView(this);
        glSurfaceView.setEGLContextClientVersion(2);
        glSurfaceView.setRenderer(new OpenGLRenderer());
        setContentView(glSurfaceView);
    }

    @Override
    protected void onPause() {
        super.onPause();
        glSurfaceView.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        glSurfaceView.onResume();
    }

    private boolean supportES2() {
        ActivityManager activityManager =
                (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
        ConfigurationInfo configurationInfo = activityManager.getDeviceConfigurationInfo();
        return (configurationInfo.reqGlEsVersion >= 0x20000);
    }

}

In onCreate, we first use our supportES2 method to check that the device supports OpenGL ES 2.0 or higher. If not, we close the Activity.

If everything is ok, then we:
– create a GLSurfaceView,
– tell it, using the setEGLContextClientVersion method, that we will use OpenGL ES version 2,
– pass it an instance of our OpenGLRenderer class using the setRenderer method; this renderer is now responsible for drawing on the surface,
– set the surface as the main View of the Activity with the setContentView method.

In addition, you need to tie the surface to the Activity lifecycle methods onPause and onResume by calling the surface's methods of the same name from them.

That's it, run the app.

The screen is green. The first, simplest OpenGL application is ready. Not Need For Speed, of course, but you have to start somewhere)

There are three points I would like to dwell on.

1) For some reason, the alpha component in glClearColor has no effect. Whether you pass 0 or 1 as the last parameter, no transparency is added. I do not have an answer to this yet.

2) The viewport coordinates we set with glViewport do not affect the result: even if you set the viewport to only half of the surface, the entire surface is still painted green. From what I have read, this is expected: glClear works on the whole surface regardless of the viewport size.

3) About running the app. It is usually said that OpenGL ES does not run on emulators. I did not check the standard emulator, but on Genymotion it starts without problems. As a last resort, there is always a real device to test on.



