Lesson 132. Camera. Displaying the image on the screen. Preview size. Handling device rotation

In this lesson:

– use the Camera object to get an image from the camera
– fit the preview image to the screen size
– take device rotation into account

Let’s figure out which objects we need to bring the image from the camera to the screen. There are three of them: Camera, SurfaceView and SurfaceHolder.

The Camera object is used to get images from the camera. And to display this image in the application, we will use SurfaceView.

From here on I will simply call the SurfaceView the surface: the component that displays the image from the camera.

We will not work with the surface directly, but through an intermediary: SurfaceHolder (hereinafter, the holder). The Camera object knows how to work with it. The holder will also let us know when the surface is created, changed, or no longer available.

To sum up: the Camera takes the holder and uses it to display the image on the surface.

Let’s write a program that displays the image from the camera on the screen.

Let’s create a project:

Project name: P1321_CameraScreen
Build Target: Android 2.3.3
Application name: CameraScreen
Package name: ru.startandroid.develop.p1321camerascreen
Create Activity: MainActivity

main.xml:

SurfaceView in the center of the screen.
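The layout is just a RelativeLayout with a centered SurfaceView. A minimal version might look like this (the surfaceView id is the one MainActivity looks up):

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <SurfaceView
        android:id="@+id/surfaceView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true" />

</RelativeLayout>
```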

In the manifest, add the camera permission:
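The required line in AndroidManifest.xml:

```xml
<uses-permission android:name="android.permission.CAMERA" />
```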

MainActivity.java:

package ru.startandroid.develop.p1321camerascreen;

import java.io.IOException;

import android.app.Activity;
import android.graphics.Matrix;
import android.graphics.RectF;
import android.hardware.Camera;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.Size;
import android.os.Bundle;
import android.view.Display;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.Window;
import android.view.WindowManager;

public class MainActivity extends Activity {

  SurfaceView sv;
  SurfaceHolder holder;
  HolderCallback holderCallback;
  Camera camera;

  final int CAMERA_ID = 0;
  final boolean FULL_SCREEN = true;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
        WindowManager.LayoutParams.FLAG_FULLSCREEN);
    setContentView(R.layout.main);

    sv = (SurfaceView) findViewById(R.id.surfaceView);
    holder = sv.getHolder();
    holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

    holderCallback = new HolderCallback();
    holder.addCallback(holderCallback);
  }

  @Override
  protected void onResume() {
    super.onResume();
    camera = Camera.open(CAMERA_ID);
    setPreviewSize(FULL_SCREEN);
  }

  @Override
  protected void onPause() {
    super.onPause();
    if (camera != null)
      camera.release();
    camera = null;
  }

  class HolderCallback implements SurfaceHolder.Callback {

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
      try {
        camera.setPreviewDisplay(holder);
        camera.startPreview();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width,
        int height) {
      camera.stopPreview();
      setCameraDisplayOrientation(CAMERA_ID);
      try {
        camera.setPreviewDisplay(holder);
        camera.startPreview();
      } catch (Exception e) {
        e.printStackTrace();
      }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {

    }

  }

  void setPreviewSize(boolean fullScreen) {

    // get the screen dimensions
    Display display = getWindowManager().getDefaultDisplay();
    boolean widthIsMax = display.getWidth() > display.getHeight();

    // get the camera preview size
    Size size = camera.getParameters().getPreviewSize();
        
    RectF rectDisplay = new RectF();
    RectF rectPreview = new RectF();
    
    // RectF matching the screen dimensions
    rectDisplay.set(0, 0, display.getWidth(), display.getHeight());
    
    // RectF of the preview
    if (widthIsMax) {
      // preview in landscape orientation
      rectPreview.set(0, 0, size.width, size.height);
    } else {
      // preview in portrait orientation
      rectPreview.set(0, 0, size.height, size.width);
    }

    Matrix matrix = new Matrix();
    // prepare the transformation matrix
    if (!fullScreen) {
      // the preview will be fitted into the screen (option 2 from the lesson)
      matrix.setRectToRect(rectPreview, rectDisplay,
          Matrix.ScaleToFit.START);
    } else {
      // the screen will be fitted into the preview (option 3 from the lesson)
      matrix.setRectToRect(rectDisplay, rectPreview,
          Matrix.ScaleToFit.START);
      matrix.invert(matrix);
    }
    // apply the transformation
    matrix.mapRect(rectPreview);

    // set the surface size from the resulting rectangle
    sv.getLayoutParams().height = (int) (rectPreview.bottom);
    sv.getLayoutParams().width = (int) (rectPreview.right);
  }

  void setCameraDisplayOrientation(int cameraId) {
    // determine how far the screen is rotated from its natural position
    int rotation = getWindowManager().getDefaultDisplay().getRotation();
    int degrees = 0;
    switch (rotation) {
    case Surface.ROTATION_0:
      degrees = 0;
      break;
    case Surface.ROTATION_90:
      degrees = 90;
      break;
    case Surface.ROTATION_180:
      degrees = 180;
      break;
    case Surface.ROTATION_270:
      degrees = 270;
      break;
    }
    
    int result = 0;
    
    // get info about the camera with this cameraId
    CameraInfo info = new CameraInfo();
    Camera.getCameraInfo(cameraId, info);

    // back camera
    if (info.facing == CameraInfo.CAMERA_FACING_BACK) {
      result = ((360 - degrees) + info.orientation);
    } else
    // front camera
    if (info.facing == CameraInfo.CAMERA_FACING_FRONT) {
      result = ((360 - degrees) - info.orientation);
      result += 360;
    }
    result = result % 360;
    camera.setDisplayOrientation(result);
  }
}

Let’s look at the code.

In onCreate we configure the Activity to be full screen with no title. Then we find the surface, get its holder and set its type to SURFACE_TYPE_PUSH_BUFFERS (setting the type is only required on Android versions below 3.0).

Next, we create a HolderCallback callback object for the holder (more on it a little further on), through which the holder will inform us about the state of the surface.

In onResume we get access to the camera with the open method, passing it the camera id in case there are several cameras (back and front). This method has been available since API level 9. At the end of this lesson you will find information on how to get a camera id.

There is also an open method that takes no id. It gives access to the back camera and is available in earlier API versions as well.

Then we call the setPreviewSize method, which adjusts the size of the surface. We will discuss it in detail below.

In onPause we release the camera with the release method so that other applications can use it.

The HolderCallback class implements the SurfaceHolder.Callback interface. Recall that through it the holder informs us about the state of the surface.

It has three methods:

surfaceCreated – the surface has been created. We can hand the camera the holder object using the setPreviewDisplay method and start streaming the image with the startPreview method.

surfaceChanged – the format or size of the surface has changed. Here we stop the preview (stopPreview), configure the camera based on the device rotation (setCameraDisplayOrientation, details below), and restart the preview.

surfaceDestroyed – the surface is no longer available. We do not use this method here.

These methods have one oddity, by the way. The documentation for surfaceChanged says it will be called not only when the surface changes, but also when it is created, that is, immediately after surfaceCreated. Yet the Camera documentation calls the preview-start methods (setPreviewDisplay, startPreview) in both surfaceCreated and surfaceChanged. So when the surface is created, we start the preview twice. I do not understand why this duplication is needed.

If you empty the surfaceCreated method, everything still works. But I would not risk doing that in the lesson. Maybe I am missing something and the duplication makes some sense. If anyone knows, write on the forum.

Preview size

The setPreviewSize method is a little non-trivial, especially if you have never worked with the Matrix and RectF objects.

In it, we compute the size of the surface from the screen size and the camera image, so that the image is displayed full screen with the correct aspect ratio.

You can skip the following calculations if you do not want to strain your brain with the mechanics. Still, I tried to make them clear and interesting, and even drew pictures. If you work through it all, great!) Someday this knowledge will come in handy.

So, we have a picture that comes from the camera; let’s call it the preview. And we have a screen on which we need to display this preview.

Let’s look at a concrete example for clarity: a Galaxy Tab, the back camera, normal landscape position.

There is a screen. Size: 1280×752. Aspect ratio: 1280/752 = 1.70.

There is the preview. Size: 640×480. Aspect ratio: 640/480 = 1.33.

Let’s say we pointed the camera at a circle.

We want to get a full screen image. What are the options? There are three of them.

1) Stretch the preview over the screen. A bad option, because this requires the aspect ratios to be the same, and ours differ. But let’s try it and see the result.

To do this, we need to multiply the width of the preview by 1280/640 = 2, and the height by 752/480 = 1.57. As a result we get:

You can see that the picture is deformed: it has been stretched horizontally. That does not suit us.

2) Fit the preview inside the screen. To do this, we resize the preview (keeping its aspect ratio) until it hits the screen’s bounds in height or width. In our case it hits the height first.

To do this, we need to multiply the width and height of the preview by the smaller of the two numbers 1280/640 = 2 and 752/480 = 1.57, i.e. by 1.57.

Here is what we get:

Much better. The preview is no longer distorted. The only slightly annoying thing is the empty areas at the sides of the screen. But nothing stops us from painting them black and letting everyone think it was intended; we still see the complete, undistorted picture. Video players, for example, usually do exactly this.

3) Fit the screen inside the preview, i.e. the second option in reverse. Resize the screen (keeping its aspect ratio) until it hits the preview’s bounds in height or width.

To do this, we would need to divide the width and height of the screen by the larger of the two numbers 1280/640 = 2 and 752/480 = 1.57, i.e. by 2.

But since we cannot physically resize the screen, we will resize the preview instead to achieve the same result.

To do this, we need to multiply the width and height of the preview by the larger of the two numbers 1280/640 = 2 and 752/480 = 1.57, i.e. by 2.

The result:

The image is not distorted and takes up the full screen. But there is a catch: we do not see the whole picture; it extends beyond the top and bottom of the screen.

Keep in mind, this is just one example; other cases may differ. For instance, in the second option the empty areas may end up at the top and bottom instead of the sides, and with other screen sizes the preview may turn out larger than the screen. But the general idea and the algorithm stay the same.

We looked at three options and saw that the first one is quite bad, and the second and third are quite suitable for implementation.
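The arithmetic behind options 2 and 3 can also be written as a small helper without any Matrix at all. This is just an illustration of the calculations above, not the lesson’s actual code (the class and method names are made up):

```java
public class PreviewScale {

    // Scale a prevW x prevH preview to a dispW x dispH screen,
    // keeping the preview's aspect ratio.
    // fullScreen == false: fit the preview inside the screen (option 2, smaller factor)
    // fullScreen == true:  cover the whole screen (option 3, larger factor)
    static int[] scalePreview(int dispW, int dispH, int prevW, int prevH,
            boolean fullScreen) {
        float scaleW = (float) dispW / prevW;
        float scaleH = (float) dispH / prevH;
        float scale = fullScreen ? Math.max(scaleW, scaleH) : Math.min(scaleW, scaleH);
        return new int[] { Math.round(prevW * scale), Math.round(prevH * scale) };
    }

    public static void main(String[] args) {
        // the Galaxy Tab example: 1280x752 screen, 640x480 preview
        int[] fit = scalePreview(1280, 752, 640, 480, false);
        int[] full = scalePreview(1280, 752, 640, 480, true);
        System.out.println(fit[0] + "x" + fit[1]);   // option 2: fits inside the screen
        System.out.println(full[0] + "x" + full[1]); // option 3: covers the screen
    }
}
```

For our numbers the smaller factor is 1.57 and the larger is 2, exactly as in the pictures above.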

From the pictures, back to the code. The setPreviewSize(boolean fullScreen) method implements the second option (if fullScreen == false) and the third (if fullScreen == true).

The beauty of the method is that Matrix does all the transformations for us; we do not have to multiply or divide anything ourselves.

First we get the sizes of the screen and the preview. For the screen we immediately determine which is larger: width or height. If the width is larger, the device is in landscape orientation; if the height is larger, portrait.

For the transformations, the matrix will require RectF objects from us. If you have never worked with them: a RectF is simply an object that holds the coordinates of a rectangle: left, top, right, bottom.

For left and top we always use 0, and for right and bottom we put the width and height of the screen or preview. This gives us rectangles exactly the size of the screen and the preview.

rectDisplay is the screen, rectPreview the preview. In a preview the width is usually greater than the height. If the device is in landscape orientation, we create rectPreview with the preview’s own dimensions. If the device is in portrait, the camera image will also be rotated vertically, so width and height swap places.

Now the most interesting part: preparing the transformation. We use the setRectToRect method. It takes two RectF objects and calculates what transformation is needed to fit the first into the second. I will not discuss the method’s third parameter here; we always use START. (If you are curious, ask on the forum and we will discuss it there.)

Note that this method does not change any objects yet; it only configures the matrix. The matrix now knows what calculations to perform on the coordinates of an object we will provide later.

Look at the code. If (!fullScreen), this is the second option: the preview is fitted into the screen. We simply tell the matrix that we will need to fit a preview-sized object into a screen-sized one. Referring back to the second option, the matrix works out that it will need to multiply the object’s sides by 1.57. And when we later give it the object with the preview’s dimensions, it will do exactly that, and we will get the size we need.

If fullScreen (the third option), the algorithm is a little trickier. We tell the matrix that we need to fit a screen-sized object into a preview-sized one. Recall the third option: at first we figured out that the screen would have to be divided by two, but then realized that we cannot resize the screen and must do the opposite, that is, not divide the screen by two but multiply the preview by two. We explain this to the matrix by calling the invert method. The matrix takes the algorithm of the matrix passed to it (itself, in this case) and reverses it: instead of dividing by two, it will multiply.

I very much hope this came out clear. If not, reread it a few times and check it against the descriptions of the options and the pictures in the example above. If it is still unclear, come back to this topic in a week. By then your brain will have absorbed the information and somehow settled it, and rereading may go much easier. At least that is how it usually works for me: I read something and understand nothing, but a week, a month or half a year later I look at it again and wonder what was so incomprehensible about it.

So, we have prepared the matrix for the transformation; all that remains is to hand it the object to transform. We use the mapRect method and pass it the preview rectangle, which, as in the example above, undergoes all the transformations.

After the transformation, we take the resulting coordinates and resize the surface that displays the preview.

Rotating the preview

If your brain is still intact, we will fix that now! Let’s take apart the setCameraDisplayOrientation method, which rotates the preview.

Again, an example: the tablet is in landscape orientation, the camera is the back one. Suppose we are looking at the following object through the camera:

We see it on the screen, everything is ok.

An important note. You will not be able to reproduce the following example with the standard camera application, because it handles device rotation itself. I just want to demonstrate what things would look like if the rotation were not handled.

I rotate the tablet 90 degrees clockwise (to the right). The camera, of course, rotates with it. Now I see the following picture on the screen:

By the way, you will see the same picture if you tilt your head 90 degrees to the right)

That is, although the system reacted to the rotation and rotated the main image, the camera still hands us this rotated view, and that is what we see.

What do we do to fix this? Rotate the image 90 degrees clockwise, i.e. make the same turn the camera made.

This gives us the Camera Rotation Axiom: however far and in whichever direction the camera is rotated, we must rotate the preview by the same amount in the same direction to get the correct picture.

For this we use the setDisplayOrientation method. It takes as input the angle by which the camera will rotate the preview clockwise before giving it to us. In other words, it expects from us the clockwise rotation angle of the preview. We can find it by working out how far the camera is rotated clockwise (see the Camera Rotation Axiom).

For this we use the construct getWindowManager().getDefaultDisplay().getRotation(). It returns the number of degrees by which the system rotates the image clockwise so that the image is displayed normally for the current device rotation.

That is, if you rotate the device 90 degrees counterclockwise, the system must rotate the image 90 degrees clockwise to compensate for the rotation. (This is not about the camera yet; just about the image the device shows, such as the Home screen.)

Device Rotation Axiom: however far and in whichever direction the device is rotated, the system rotates the image by the same amount in the opposite direction to bring it into the correct orientation.

It follows that getWindowManager().getDefaultDisplay().getRotation() tells us how far the device is rotated counterclockwise.

Note that getRotation returns constants, not degrees, and in the switch statement we convert them into degrees.

Therefore, the degrees variable contains the number of degrees the phone is rotated counterclockwise.

Is your brain still intact? Then here is one more thing: the camera in your device can itself be mounted rotated relative to the device.

This is usually the case on smartphones: the camera there is mounted rotated 90 degrees, and its natural orientation matches the landscape orientation of the device, so that in the preview, as on the screen, width comes out greater than height.

We need to take this mounting rotation into account as well when rotating the preview. Camera data can be obtained with the getCameraInfo method; it takes a camera id and a CameraInfo object that will receive the camera info.

We are interested in the CameraInfo.orientation field, which tells how many degrees the preview must be rotated clockwise to get a normal image. By the Camera Rotation Axiom, that means the camera itself is rotated clockwise by the same amount.

And one more fact to load the brain with: the camera can be back-facing or front-facing, and their rotations have to be accounted for differently)

The CameraInfo.facing field tells us whether the camera is back-facing or front-facing.

Let’s try the calculation. Remember that the setDisplayOrientation method expects from us the clockwise rotation angle of the preview. So we can simply calculate the camera’s clockwise rotation (Camera Rotation Axiom) and get the value we need.

To find the camera’s total clockwise rotation in space, we add up the device’s clockwise rotation and CameraInfo.orientation. That is for the back camera. For the front camera, CameraInfo.orientation must be subtracted, because it looks in our direction: what is clockwise for us is counterclockwise for it.

Now we compute. We have degrees: the number of degrees the phone is rotated counterclockwise. To convert it to a clockwise rotation, we simply subtract it from 360.

That is, (360 - degrees) is the clockwise rotation of the device. I deliberately put this expression in parentheses for clarity. To this value we then add (back camera) or subtract (front camera) the camera’s mounting rotation. In the case of the front camera, we also add 360, just in case, to avoid a negative number. Finally, we bring the total into the range from 0 to 360 by taking the remainder of division by 360.
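The whole calculation fits into one small function. Here it is pulled out of setCameraDisplayOrientation into plain Java so you can play with the numbers (the class and method names are made up):

```java
public class CameraOrientation {

    // degrees: how far the device is rotated counterclockwise (0, 90, 180 or 270,
    //          obtained via Display.getRotation() in the Activity)
    // cameraOrientation: CameraInfo.orientation, the camera's clockwise mounting rotation
    // facingFront: true for the front camera, false for the back one
    static int compute(int degrees, int cameraOrientation, boolean facingFront) {
        int result;
        if (facingFront) {
            // the front camera faces us, so its rotation counts the other way
            result = (360 - degrees) - cameraOrientation;
            result += 360; // avoid a negative value before taking the remainder
        } else {
            result = (360 - degrees) + cameraOrientation;
        }
        return result % 360;
    }

    public static void main(String[] args) {
        // typical smartphone back camera (mounted at 90 degrees), device upright
        System.out.println(compute(0, 90, false));  // prints 90
        // same camera, device rotated to landscape (ROTATION_90)
        System.out.println(compute(90, 90, false)); // prints 0
    }
}
```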

And we solemnly pass this value to the camera.

Working with the camera is quite the brain workout, isn’t it? Anyway, when you run all of this, you should see a properly oriented camera image.

There are two constants at the beginning of the code: CAMERA_ID and FULL_SCREEN.

If your device has two cameras, you can set CAMERA_ID to 1 instead of 0 and get the picture from the front camera.

And by changing FULL_SCREEN you switch the preview mode.

The rest

How do you determine whether the device has a camera at all? The construct context.getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA) will tell you.

You can get camera ids via the getNumberOfCameras method (available since API level 9). It returns the number N of cameras available on the device; their ids are accordingly 0, 1, …, N-1. With an id you can then get a CameraInfo and determine which camera it is.

The open method can throw an exception if the camera cannot be accessed for some reason. It makes sense to catch it and show a message to the user instead of crashing with an error.

Rotation may work incorrectly on some devices. For example, I tested on an HTC Desire (4.2.2) and a Samsung Galaxy Tab (4.2.2) and everything was fine, while on a Samsung Galaxy Ace (2.3.6) the camera seems to simply ignore the rotation angle I give it.

In the next lesson:

– taking a picture
– recording video



