Lesson 175. OpenGL. Textures.

In this lesson:

– we use textures

In past lessons we drew colored triangles. In this lesson we will replace plain color with a texture: we take a picture and tell the system to “overlay” that image onto a triangle instead of just filling the triangle with color.

Before we get to practice, we need to discuss two main points of working with textures:
– how to prepare an OpenGL texture from a regular image
– how to apply a texture to a triangle

Creating a texture from a picture

Let’s start with how to hand a picture over to OpenGL. To do this, we need to learn three concepts: texture unit, texture target and texture object.

A texture object is an object that stores a texture and some of its parameters. A peculiarity of OpenGL is that you can’t just take this object and edit it, or use it for drawing, directly. You first need to put it into a specific slot. Only then can you modify the texture object or use it in your image.

The slots look something like this

Each large rectangle labeled GL_TEXTUREN (where N = 0, 1, 2, …) is a texture unit. GL_TEXTUREN is the name of a constant that can be used to access the unit. I drew only three units here, but there are more.

Each small rectangle inside a large one is a texture target. It can also be called a texture type. As far as I understand, there are only two types in OpenGL ES:
GL_TEXTURE_2D – a regular two-dimensional texture
GL_TEXTURE_CUBE_MAP – a cube-map texture, i.e. a texture consisting of 6 square faces (an unfolded cube)

We will use GL_TEXTURE_2D in this lesson.

To work with a texture object, it must be placed into a target of some unit. All further work then goes through this target, and the target passes the changes on to the texture object.

So, to use any 2D image as a texture on the screen, we need to follow these steps:

1) Read the image into a Bitmap

2) Create a texture object

3) Make some unit active. The system will perform all further texture operations in this unit. By default, the active unit is GL_TEXTURE0.

4) Place the created texture object (from step 2) into some texture target. In our examples this will usually be GL_TEXTURE_2D, and I will use this target in the text below. We place the object into a target to be able to work with it: all operations we want to perform on the object we will now address to the target.

5) The texture object is created but not yet configured. We need to do two things: pass it the Bitmap (from step 1) and set up filtering. Filtering determines which algorithms will be used if the texture has to be compressed or stretched for display.

I remind you that we do not work with the texture object directly. The object sits in the target, we work with the target, and the target passes all the information on to the texture object.

That is, for GL_TEXTURE_2D we need to specify the required filtering modes and pass it the Bitmap. After that, our texture object is ready and we can move on to the shader.

6) The fragment shader will work with the texture. It is the shader that fills the figures with pixels, only now, instead of a plain color, it will determine which point of the texture should be displayed at each point of the triangle.

For the shader to know which texture it should use, we need to give it this information. It would seem logical to simply pass it the texture object. But, unfortunately, things are a little more complicated than we would like: we pass the shader not the texture object, but the number of the unit in which the texture currently resides.

But remember that the texture sits not just in a unit, but in a target of that unit. How then will the shader understand in which target of the specified unit it should look for the texture? That depends on the type of the variable we use in the shader to represent the texture. We will use the sampler2D type in our example, and thanks to this type the shader will know that it should take the texture from the GL_TEXTURE_2D target.

We will see all this in the example below. For now, the most important idea to understand is that we do not work with the texture object directly. We put it into a specific target of a specific unit; after that we can change it there via the target, and the fragment shader can take it from there for display.

Using a texture

Now the second important theoretical part: how the texture is “stretched” over an object. Consider the simplest example: a square drawn with two triangles.

For simplicity, I only use the X and Y coordinates here; Z is not important for now.

So, we used 4 vertices to draw the square. To map a texture onto this square, we need to match the square’s vertices with texture coordinates.

The texture can be represented as follows

That is, each side of the texture is considered equal to 1 (even if the sides are of different lengths in pixels). Using the S and T coordinates we can point to any point of the texture.
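To make the S and T coordinates concrete, here is a small Java sketch that converts normalized texture coordinates into pixel positions for a hypothetical 512x256 bitmap (the size is an assumption for illustration; texel-center subtleties are ignored):

```java
public class TexCoords {
    // Convert normalized (s, t) texture coordinates into pixel
    // coordinates for a bitmap of the given size. (0,0) is the
    // top-left corner of the texture, (1,1) the bottom-right one.
    static float[] toPixels(float s, float t, int width, int height) {
        return new float[]{ s * width, t * height };
    }

    public static void main(String[] args) {
        // For a hypothetical 512x256 bitmap, the texture center
        // (0.5, 0.5) maps to pixel (256, 128), even though the
        // sides have different lengths.
        float[] p = toPixels(0.5f, 0.5f, 512, 256);
        System.out.println(p[0] + "," + p[1]); // prints 256.0,128.0
    }
}
```

This is exactly why the same (s, t) pair works for any image size: the coordinates are relative, not absolute.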

If we want to stretch the texture over our square, we just need to match the corners of the square with the corners of the texture. That is, for each vertex of the square we must specify the texture point that corresponds to that vertex.

In our example, we match the vertex coordinates of the square with the texture coordinates as follows:

top left vertex (-1, 1) -> top left texture point (0, 0)

bottom left vertex (-1, -1) -> bottom left texture point (0, 1)

top right vertex (1, 1) -> top right texture point (1, 0)

bottom right vertex (1, -1) -> bottom right texture point (1, 1)

So we have matched the vertices of the square with the corners of the texture and, as a result, the texture will lie evenly on the square and fill it completely.

Keep in mind that the texture is overlaid not on the square as a whole but on two triangles, because we build images from triangles. One part of the texture will be overlaid on one triangle and the other part on the other. As a result, the two pieces of texture on the two triangles will look like one whole square texture.

This is how one of the triangles looks:

It is logical to assume that if the shaders handle the matching of triangle vertices and texture coordinates, we will need to pass this data to the shaders. We already pass the vertex coordinates, and in this lesson we will add texture coordinates to them. When the shader receives this data, it will know which vertex corresponds to which point of the texture. For all other points of the triangle (those between the vertices), the corresponding texture points will be computed by interpolation.

This mechanism is similar to what we discussed in Lesson 171 when drawing a gradient. There we specified a color for each vertex, the values were interpolated between the vertices, and we got a gradient. In the case of a texture, the fragment shader will receive interpolated texture coordinates instead of a color.
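The interpolation idea can be illustrated in plain Java. This is a simplified 1D sketch (the names are illustrative); the GPU performs the equivalent in 2D across the whole triangle for every fragment automatically:

```java
public class Interpolation {
    // Linear interpolation of a texture coordinate between two
    // vertices: f = 0 gives the first vertex's coordinate,
    // f = 1 the second's, values in between are blended.
    static float lerp(float a, float b, float f) {
        return a + (b - a) * f;
    }

    public static void main(String[] args) {
        // A fragment exactly halfway between texture points s=0.0
        // and s=1.0 gets s=0.5 -- the middle of the texture.
        System.out.println(lerp(0.0f, 1.0f, 0.5f));  // prints 0.5
        // A quarter of the way along gives s=0.25.
        System.out.println(lerp(0.0f, 1.0f, 0.25f)); // prints 0.25
    }
}
```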

Let’s look at the code that implements everything we’ve discussed. Download the source code and open the lesson175_texture module.

First, let’s look at the TextureUtils class. It has a loadTexture method, which takes an image resource id as input and returns the id of the created texture object containing the image. Let’s consider this method in detail.

The glGenTextures method creates empty texture objects. Its parameters are:
– how many objects to create; we need one texture, so we pass 1
– an int array into which the method will place the ids of the created objects
– the offset into the array (the index of the array element from which the method will begin to fill the array); here, as always, we pass 0

We check the id: if it is 0, something went wrong and no texture object was created, so we return 0.

Following are the methods for getting Bitmap from a resource. You can read more about this in Lessons 157-159.

If getting the Bitmap fails, we delete the texture object using the glDeleteTextures method. Its parameters are:
– how many objects to delete; we need to delete 1 object
– an array of object ids
– the offset into the array (the index of the array element from which the method will begin reading the array); again 0

Next, the work with units and targets begins. With the glActiveTexture method we make the GL_TEXTURE0 unit, i.e. the unit with number 0, active. Now all further operations will be addressed to this unit, but the target will need to be specified in each operation.

With the glBindTexture method we place our texture object into the GL_TEXTURE_2D target by passing its id there. Note that we specified only the target, not the unit: we set the unit one line earlier, and the system, having received only the target, works with this target in the active unit.

With the glTexParameteri method we can set the texture object’s parameters. This method takes three arguments:
– the target
– which parameter we will change
– the value we want to assign to this parameter

In our example, we use the glTexParameteri method to specify filtering parameters. Let me remind you that filtering is used when the size of the triangle does not match the size of the texture, and the texture has to be compressed or stretched so that it lies evenly on the triangle.

There are two filtering parameters we need to specify:
GL_TEXTURE_MIN_FILTER – the filtering mode applied when the image is compressed
GL_TEXTURE_MAG_FILTER – the filtering mode applied when the image is stretched

For both of these parameters we set the GL_LINEAR mode. What this mode means, and what other modes exist, I will briefly describe at the end of this lesson so as not to get distracted now.

With the texImage2D method we pass the bitmap to the texture object. Here we specify the target and the previously created bitmap. The other two parameters are left at 0; they are not important to us yet.

With the recycle method we tell the system that we no longer need the bitmap.

Finally, we call glBindTexture again, this time passing 0 for the GL_TEXTURE_2D target. This unbinds our texture object from that target.

That is, we first placed the texture object into the target, performed all the operations on it, and then freed the target. As a result, our texture object is now configured, ready to use, and not bound to any target.

Now let’s look at the OpenGLRenderer class. Compared to past lessons, there are a few changes not related to textures: I extracted the code that creates the shaders and the program into a separate createAndUseProgram method, and moved the calls that return the positions of shader variables into the getLocations method.

Now let’s look at the texture-related additions, i.e. what we do to use a texture. Let me briefly remind you what is required:

1) Create a texture object from the picture
2) Match the triangle vertices with texture coordinates, and pass this data to the shader so that it knows how to overlay the texture on the triangle
3) Place the texture object into a target of some unit
4) Pass to the shader the number of the unit that currently contains the texture object

Look at the prepareData method. In the vertices array we specify 4 vertices to draw a square. For each vertex we set 5 numbers: the first three are the coordinates of the vertex, and the last two are the corresponding texture coordinates.

In the texture variable we store the id of the texture object created from the box picture.

In the getLocations method, note two new shader variables:
a_Texture – an attribute in the vertex shader; we will pass texture coordinates to it.
u_TextureUnit – a uniform variable; we will pass to it the number of the unit into which we place the texture.

In the bindData method we first pass the vertex coordinates to aPositionLocation, then the texture coordinates to aTextureLocation. That is, we pass data to two attributes from one array. We already did this in Lesson 171; if you have forgotten, you can look there, everything is described in detail.
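The byte arithmetic behind reading two attributes from one interleaved array can be sketched in plain Java. The constants are illustrative (the actual glVertexAttribPointer calls are in the module’s source): 5 floats per vertex at 4 bytes each gives a stride of 20 bytes, with the texture coordinates starting 12 bytes into each record.

```java
public class InterleavedLayout {
    static final int BYTES_PER_FLOAT = 4;
    static final int POSITION_COMPONENTS = 3; // x, y, z
    static final int TEXTURE_COMPONENTS = 2;  // s, t

    // Byte distance from the start of one vertex's data to the next.
    static int stride() {
        return (POSITION_COMPONENTS + TEXTURE_COMPONENTS) * BYTES_PER_FLOAT;
    }

    // Byte offset of the texture coordinates inside one vertex record:
    // they start right after the three position floats.
    static int textureOffset() {
        return POSITION_COMPONENTS * BYTES_PER_FLOAT;
    }

    public static void main(String[] args) {
        System.out.println(stride());        // prints 20
        System.out.println(textureOffset()); // prints 12
    }
}
```

These are the numbers you would pass as the stride argument (and the buffer position before binding the texture attribute) when setting up the two attribute pointers.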

With the glActiveTexture method we make unit 0 active. It is active by default, but in case we changed it somewhere in the code and made some other unit active, we perform this operation just to be safe.

With the glBindTexture method we place the texture object into the GL_TEXTURE_2D target.

With the glUniform1i method we tell the shader that it can find the texture in unit 0.

In the onDrawFrame method we ask the system to draw triangles from 4 vertices. As a result, a square will be drawn and the texture will be overlaid on it.

Now we look at the shaders.

First, the vertex shader vertex_shader.glsl. Here we still compute the final coordinates (gl_Position) for each vertex using a matrix. The a_Texture attribute comes with texture coordinates, and we immediately write them to the varying variable v_Texture. This will let the fragment shader obtain interpolated texture coordinates.

Next, the fragment shader fragment_shader.glsl. In it we have the uniform variable u_TextureUnit, which receives the number of the unit where the texture we need is located. Note the type of the variable: we passed 0 to it as an integer, yet here it has the peculiar type sampler2D. This confused me a little at first, and I had to dig into it. In the end I concluded that when the system passes 0 to a sampler2D shader variable, it looks into unit 0 and places the texture content from the GL_TEXTURE_2D target into the sampler2D.

That is, the number passed to the shader (in our case 0) indicates which unit to look in, and the type of the variable that receives this number (in our case sampler2D) indicates which target the texture should be taken from (the 2D target). Of course, this only works if you have placed a texture there beforehand with the glActiveTexture and glBindTexture methods.

The varying variable v_Texture receives the interpolated texture coordinates from the vertex shader, so the shader knows which point of the texture should be displayed at the current point of the triangle.

It remains to use the texture coordinates and the texture itself to get the final fragment. The texture2D method does this, and in gl_FragColor we get the color of the desired point of the texture.
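Putting the pieces together, the two shaders look roughly like this. This is a reconstruction from the description above, not the exact code from the module: the u_Matrix and a_Position names are assumptions, so check the downloaded source for the real files.

```glsl
// vertex_shader.glsl (sketch)
attribute vec4 a_Position;
attribute vec2 a_Texture;
uniform mat4 u_Matrix;
varying vec2 v_Texture;

void main() {
    gl_Position = u_Matrix * a_Position;
    // pass the texture coordinates on; the pipeline interpolates
    // them between the vertices for every fragment
    v_Texture = a_Texture;
}

// fragment_shader.glsl (sketch)
precision mediump float;
uniform sampler2D u_TextureUnit;
varying vec2 v_Texture;

void main() {
    // take the texel at the interpolated coordinates
    gl_FragColor = texture2D(u_TextureUnit, v_Texture);
}
```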

Run the program.

The texture evenly covers the square. More precisely, parts of the texture lie on the two triangles, and as a result we see a square.

Partial use of texture

In the example we used the whole texture, from (0,0) to (1,1). But this is not required; we can use only part of it.

Let’s look at this picture

It contains two pictures, and for our square we need only one, say the left one, up to s = 0.5. To overlay it on the square, we just need to change the mapping between vertices and texture points: now the right vertices of the square will be matched not with the right edge of the picture, but with its middle.

Let’s display another square with this left half of the texture.

Extend the vertices array:

float[] vertices = {
        -1,  1, 1,   0, 0,
        -1, -1, 1,   0, 1,
         1,  1, 1,   1, 0,
         1, -1, 1,   1, 1,
 
        -1,  4, 1,   0, 0,
        -1,  2, 1,   0, 1,
         1,  4, 1,   0.5f, 0,
         1,  2, 1,   0.5f, 1,
};

We added 4 more vertices to the original 4. They form a second square that will be drawn above the first one. Its texture coordinates correspond to the left half of the texture.

Since we are going to use another texture, we need to create another variable in the OpenGLRenderer class:

private int texture2; 

In the prepareData method, add the code that creates the second texture object:

texture2 = TextureUtils.loadTexture(context, R.drawable.boxes); 

And rewrite onDrawFrame:

public void onDrawFrame(GL10 arg0) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 
    glBindTexture(GL_TEXTURE_2D, texture);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
 
    glBindTexture(GL_TEXTURE_2D, texture2);
    glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
}

Here we ask the system to draw triangles first from the first four vertices, then from the second four. Each four gives us a square. Before drawing each square we place the corresponding texture into the target, so that the shader uses the first texture for the first square and the second texture for the second square.

Run it.

We see a second square. The shader used not the whole texture, but its left half, because that is what we specified with the coordinates in the vertices array.

Finally, a little more theory

Several units

Why might several units be needed? There are cases when a fragment shader has to use several textures at once to compute the final fragment. Then it cannot do without several units, in whose targets the different textures are placed.

You can get the number of units available to you using the glGetIntegerv method

glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, cnt, 0);

Here cnt is an int[] array with one element. As a result, cnt[0] will store the number of units.

Filtering modes

Let’s talk a little more about filtering: what it is, when it is applied, and what filtering modes are available.

So, we need to stretch a texture over a triangle. A point of the triangle is called a fragment (hence the name of the fragment shader, which must draw each fragment of the triangle), and a point of the texture is called a texel. When a texture is overlaid on a triangle, their dimensions may not match, and the system has to adjust the size of the texture to the size of the triangle: either squeeze several texels into one fragment (minification) if the texture is larger than the triangle, or stretch one texel across several fragments (magnification) if the texture is smaller. In both cases, filtering is applied to obtain the final fragment.

There are two basic filtering modes:
NEAREST – simply take the nearest texel for each fragment. Fast, but lower quality.
LINEAR – take the 4 nearest texels for each fragment and compute their average. Slower, but better quality.
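The difference between the two modes can be shown with plain numbers. This is a simplified sketch using grayscale texel values: real GL_LINEAR weights the four texels by their distance from the sampled point (bilinear filtering), not a plain average, but the idea is the same.

```java
public class Filtering {
    // NEAREST: just pick the single closest texel.
    static float nearest(float closestTexel) {
        return closestTexel;
    }

    // LINEAR (simplified): combine the four nearest texels.
    // Real GL_LINEAR uses distance-based weights; here we use
    // a plain average to illustrate the blending.
    static float linear(float t0, float t1, float t2, float t3) {
        return (t0 + t1 + t2 + t3) / 4.0f;
    }

    public static void main(String[] args) {
        // Grayscale texel values in the 0..1 range around the point.
        System.out.println(nearest(0.75f));                    // prints 0.75
        System.out.println(linear(0.75f, 0.25f, 0.5f, 0.5f));  // prints 0.5
    }
}
```

NEAREST keeps hard texel edges (blocky up close); LINEAR smooths them out, which is why it usually looks better at the cost of extra reads.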

In addition to filtering, mipmapping can be applied. It creates several copies of the texture at different sizes, from the original down to the very smallest. When filtering is performed, the copy of the texture closest in size to the triangle is taken. This gives better quality and speeds up the process, but increases memory consumption, because several smaller copies of the texture have to be kept in memory.
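The chain of copies mipmapping creates can be computed with a small sketch: each level is half the previous one, down to 1x1 (the 8x8 size below is just an illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class MipmapChain {
    // Sizes of the mipmap copies of a texture: each level halves
    // the previous one (never dropping below 1), down to 1x1.
    static List<String> levels(int width, int height) {
        List<String> result = new ArrayList<>();
        while (true) {
            result.add(width + "x" + height);
            if (width == 1 && height == 1) break;
            width = Math.max(1, width / 2);
            height = Math.max(1, height / 2);
        }
        return result;
    }

    public static void main(String[] args) {
        // A hypothetical 8x8 texture gets the copies 8x8, 4x4, 2x2, 1x1.
        System.out.println(levels(8, 8)); // prints [8x8, 4x4, 2x2, 1x1]
    }
}
```

The extra memory cost is the sum of all the smaller copies, which is why mipmapping trades memory for quality and speed.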

The following code is required to enable mipmapping:

glGenerateMipmap(GL_TEXTURE_2D); 

Call it right after you put the bitmap into the texture. For a guaranteed result, your texture should have POT (power of two) sizes, i.e. the width and height of the texture must each be a power of two: 1, 2, 4, 8, 16, 32, etc. The maximum texture size depends on the device, but 2048 is usually safe. The texture does not have to be square, i.e. the width may differ from the height; the main thing is that both values are POT.

There are two ways mipmapping can be used in filtering:
MIPMAP_NEAREST – the texture copy closest in size is chosen, and filtering is applied to it to obtain the final fragment from texels
MIPMAP_LINEAR – the two texture copies closest in size are chosen, and filtering is applied to both. Filtering each copy gives us a fragment, and the average of the two is taken as the final fragment.

The second way gives better quality, but the first is faster.

These two ways of choosing copies, combined with the two filtering modes discussed earlier, give us 4 more filtering modes:

GL_NEAREST_MIPMAP_NEAREST – NEAREST filtering, MIPMAP_NEAREST copy selection. That is, the closest texture copy is chosen, and NEAREST filtering is applied to it.

GL_NEAREST_MIPMAP_LINEAR – NEAREST filtering, MIPMAP_LINEAR copy selection. That is, the two closest texture copies are chosen, and NEAREST filtering is applied to each. The final result is the average of the two resulting fragments.

GL_LINEAR_MIPMAP_NEAREST – LINEAR filtering, MIPMAP_NEAREST copy selection. That is, the closest texture copy is chosen, and LINEAR filtering is applied to it.

GL_LINEAR_MIPMAP_LINEAR – LINEAR filtering, MIPMAP_LINEAR copy selection. That is, the two closest texture copies are chosen, and LINEAR filtering is applied to each. The final result is the average of the two resulting fragments.

In total, we get 6 possible filtering modes:
GL_NEAREST
GL_LINEAR
GL_NEAREST_MIPMAP_NEAREST
GL_LINEAR_MIPMAP_NEAREST
GL_NEAREST_MIPMAP_LINEAR
GL_LINEAR_MIPMAP_LINEAR

The first two are applicable to both minification and magnification. The remaining four are for minification only.

If we look again at our code in the loadTexture method of the TextureUtils class:

GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

Here we set two parameters:

GL_TEXTURE_MIN_FILTER – the parameter that sets the filtering mode for minification
GL_TEXTURE_MAG_FILTER – the parameter that sets the filtering mode for magnification

In both cases we set LINEAR filtering.

How to make a cube

I posted a picture of a spinning box in the site’s Telegram channel and wrote that after this lesson you would be able to make one yourself.

To do this, you will need to rework the current code a little.

First, revert all the changes we made during this lesson.

My example has only one face of the cube, consisting of two triangles. You will need to draw the remaining 5 faces, i.e. add 10 more triangles to the vertex array and correctly match them with texture coordinates. And, of course, add their drawing to onDrawFrame.

You can use the same texture for all the faces. But if you like, you can find more textures on the internet and make a cube with a different texture for each face. In that case, you will need to create a texture object for each texture and place it into the target before calling the draw method for the triangles of the corresponding cube face.

And if you want to make it rotate, add a model matrix and set up rotation around the Y axis. We talked about how to do this in the previous lesson.



