Lesson 134. Camera settings

In this lesson:

– change the camera settings

There is one camera topic left: camera settings. That is, resolutions, effects, focus, quality, flash, and so on. You can see the complete list of settings in the documentation (note the minimum API version for each).

The Camera.Parameters object is used to work with the settings. It has a large number of methods, which can be divided into several groups.

The is<…>Supported methods tell you whether an option/setting is supported by the camera.

The getSupported<…> and getMax<…> methods give you the set of supported values or the maximum value of an option/setting.

The get<…> methods return the current value of a setting.

The set<…> methods set the current value of a setting.

In this lesson, let's look at how to work with a couple of settings: color effects and flash modes. I will also describe the settings that don't follow this general pattern.

As a reminder: in Lesson 132 we learned how to match the preview size to the screen and take device rotation into account, and in Lesson 133 how to save a photo or video. In this lesson I will not use any of that, so as not to repeat or complicate the material.

Let’s create a project:

Project name: P1341_CameraFeatures
Build Target: Android 2.3.3
Application name: CameraFeatures
Package name: ru.startandroid.develop.p1341camerafeatures
Create Activity: MainActivity

In strings.xml add the strings:

Color Effect
Flash Mode



On the screen we have a SurfaceView for the image and two spinners for the settings. The first lets you choose a color effect, the second a flash mode. I covered spinners in Lesson 56.

In the manifest, add the permission to work with the camera.
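As a reminder, the entry looks like this (android.permission.CAMERA is the standard camera permission):

```xml
<uses-permission android:name="android.permission.CAMERA" />
```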


package ru.startandroid.develop.p1341camerafeatures;

import java.util.List;

import android.app.Activity;
import android.hardware.Camera;
import android.hardware.Camera.Parameters;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.widget.AdapterView;
import android.widget.AdapterView.OnItemSelectedListener;
import android.widget.ArrayAdapter;
import android.widget.Spinner;

public class MainActivity extends Activity {

  SurfaceView surfaceView;
  Camera camera;

  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    surfaceView = (SurfaceView) findViewById(R.id.surfaceView);

    SurfaceHolder holder = surfaceView.getHolder();
    holder.addCallback(new SurfaceHolder.Callback() {

      public void surfaceCreated(SurfaceHolder holder) {
        try {
          camera.setPreviewDisplay(holder);
          camera.startPreview();
        } catch (Exception e) {
          e.printStackTrace();
        }
      }

      public void surfaceChanged(SurfaceHolder holder, int format,
          int width, int height) {
      }

      public void surfaceDestroyed(SurfaceHolder holder) {
      }
    });
  }

  protected void onResume() {
    super.onResume();
    camera = Camera.open();
    initSpinners();
  }

  protected void onPause() {
    super.onPause();
    if (camera != null)
      camera.release();
    camera = null;
  }

  void initSpinners() {
    // Color effects
    // get the list of supported color effects
    final List<String> colorEffects = camera.getParameters()
        .getSupportedColorEffects();
    // set up the spinner
    Spinner spEffect = initSpinner(R.id.spEffect, colorEffects, camera
        .getParameters().getColorEffect());
    // selection listener
    spEffect.setOnItemSelectedListener(new OnItemSelectedListener() {
      public void onItemSelected(AdapterView<?> arg0, View arg1,
          int arg2, long arg3) {
        Parameters params = camera.getParameters();
        params.setColorEffect(colorEffects.get(arg2));
        camera.setParameters(params);
      }

      public void onNothingSelected(AdapterView<?> arg0) {
      }
    });

    // Flash modes
    // get the list of supported flash modes
    final List<String> flashModes = camera.getParameters()
        .getSupportedFlashModes();
    // set up the spinner
    Spinner spFlash = initSpinner(R.id.spFlash, flashModes, camera
        .getParameters().getFlashMode());
    // selection listener
    spFlash.setOnItemSelectedListener(new OnItemSelectedListener() {
      public void onItemSelected(AdapterView<?> arg0, View arg1,
          int arg2, long arg3) {
        Parameters params = camera.getParameters();
        params.setFlashMode(flashModes.get(arg2));
        camera.setParameters(params);
      }

      public void onNothingSelected(AdapterView<?> arg0) {
      }
    });
  }

  Spinner initSpinner(int spinnerId, List<String> data, String currentValue) {
    // set up the spinner and an adapter for it
    Spinner spinner = (Spinner) findViewById(spinnerId);
    ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
        android.R.layout.simple_spinner_item, data);
    adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
    spinner.setAdapter(adapter);

    // determine which value in the list is the current setting
    for (int i = 0; i < data.size(); i++) {
      String item = data.get(i);
      if (item.equals(currentValue)) {
        spinner.setSelection(i);
      }
    }
    return spinner;
  }
}

In onCreate, as usual, we set up the SurfaceView, its Holder, and the Callback for the Holder.

In onResume we connect to the camera and call the initSpinners method, which fills the spinners with settings; it is described a bit further below.

In onPause we release the camera.

In initSpinners we configure the spinners one at a time. First, the one for choosing a color effect.

We use the getParameters method to get the current camera parameters, and their getSupportedColorEffects method to get the color effects supported by this device. The result comes back as a list of strings.

Next we call our initSpinner method (discussed below), which fills the spinner. We pass it:
- the spinner id
- the set of values it will display
- the current value of the color-effect setting, which we get from the parameters with the getColorEffect method

With setOnItemSelectedListener we attach a selection listener to the spinner. On selection we get the parameters, set the chosen color effect with setColorEffect, and hand the parameters back to the camera with setParameters. The camera picks up the new settings and we see the result in the preview.

Similarly, we configure the spinner for flash modes: we get the available modes, read the current one, fill the spinner, and pass the selected mode to the camera.

The initSpinner method finds the spinner, creates a data adapter for it, and sets the spinner's current selection to the value currently set on the camera.

A reader on the forum rightly notes that it makes sense to add a null check for the lists returned by getSupportedColorEffects and getSupportedFlashModes, because the camera may not support these settings at all.
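A minimal sketch of such a check (the helper name safeList is mine, not from the lesson): wrap each getSupported<…> call so that a null result becomes an empty list, and skip the spinner setup when there is nothing to show.

```java
import java.util.Collections;
import java.util.List;

public class SafeList {
    // Returns the given list, or an empty list when the camera
    // reports null (i.e. the setting is not supported at all).
    static <T> List<T> safeList(List<T> list) {
        return (list == null) ? Collections.<T>emptyList() : list;
    }
}
```

In initSpinners you would then write something like `final List<String> colorEffects = safeList(camera.getParameters().getSupportedColorEffects());` and only attach the listener when the list is not empty.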

Save everything and run the application.

We see a screen like this

Let's try applying a color effect. Tap the Color Effect spinner and the options appear.

Yours will most likely be different; it depends on the device's camera.

I'll choose negative and get this picture.

Then I switch the color effect back to none.

Now let's check the flash modes. Tap Flash Mode.

auto - the camera decides whether to use the flash
on - the flash fires when the picture is taken
off - the flash is never used
torch - flashlight mode, the flash stays on

I'll choose torch and the flash lights up; this is visible in the preview.

Almost all the other settings are changed following the same pattern, so I won't cover them all. If you have questions about them, let's discuss them on the forum.

There are a couple of special settings that follow a different pattern; let's talk about them. I can't build working examples for them here, so I'll use the code and pictures from the documentation.

Light metering and focus

For a good shot the camera needs to:
- know what to focus on, so the picture is not blurry
- determine the light level, so the picture is not overexposed or underexposed

We can specify the areas of the frame that will be used for these purposes. The documentation gives an example of setting light-metering areas.

    // Create an instance of Camera
    mCamera = getCameraInstance();

    // set Camera parameters
    Camera.Parameters params = mCamera.getParameters();

    if (params.getMaxNumMeteringAreas() > 0){ // check that metering areas are supported
        List<Camera.Area> meteringAreas = new ArrayList<Camera.Area>();

        Rect areaRect1 = new Rect(-100, -100, 100, 100);    // specify an area in center of image
        meteringAreas.add(new Camera.Area(areaRect1, 600)); // set weight to 60%
        Rect areaRect2 = new Rect(800, -1000, 1000, -800);  // specify an area in upper right of image
        meteringAreas.add(new Camera.Area(areaRect2, 400)); // set weight to 40%
        params.setMeteringAreas(meteringAreas);
    }

    mCamera.setParameters(params);


Here we get the camera and its parameters. The getMaxNumMeteringAreas method tells us how many metering areas the camera can take into account; if it returns 0, this feature is not supported.

Then we create a list and put a couple of areas into it. An area is a Rect object with coordinates in the range (-1000, -1000) to (1000, 1000). That is, the camera preview is treated as a coordinate system whose center (0, 0) is the center of the frame, and each axis extends 1000 units in each direction. We specify the Rect in this system.

The picture from the documentation shows where a Rect with the values (333, 333, 666, 666) ends up.
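If you want to choose such an area from a touch on the preview, you have to map screen pixels into this driver coordinate system. A minimal sketch of that conversion (the class and method names are mine, not part of the Camera API; it ignores display rotation, which a real app must compensate for, see Lesson 132):

```java
public class CameraCoords {
    // Maps a point (x, y) on a preview of size width x height into the
    // camera driver coordinate system, where the visible frame spans
    // (-1000, -1000) at the top-left to (1000, 1000) at the bottom-right.
    static int[] toDriverCoords(int x, int y, int width, int height) {
        int dx = x * 2000 / width - 1000;
        int dy = y * 2000 / height - 1000;
        return new int[] { dx, dy };
    }
}
```

For example, the center of a 480x800 preview maps to (0, 0), and its bottom-right corner to (1000, 1000).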

The created Rect is wrapped in a Camera.Area object, which also specifies a weight from 1 to 1000. The greater an area's weight, the more the light measured in it influences the result.

We pass the resulting list with the two areas to the camera parameters using the setMeteringAreas method, and finally hand the parameters to the camera.

Focus areas work similarly. The getMaxNumFocusAreas method gives us the number of supported focus areas; we create a list of areas with coordinates and weights and pass it to the setFocusAreas method.

Face detection

To use this option, an application needs to:

- determine whether the option is supported
- create a listener that will receive data about detected faces
- pass this listener to the camera
- enable face detection each time the preview starts

Code examples from the documentation:

  class MyFaceDetectionListener implements Camera.FaceDetectionListener {

    public void onFaceDetection(Face[] faces, Camera camera) {
      if (faces.length > 0) {
        Log.d("FaceDetection", "face detected: " + faces.length
            + " Face 1 Location X: " + faces[0].rect.centerX()
            + "Y: " + faces[0].rect.centerY());
      }
    }
  }
Here we create a listener implementing the Camera.FaceDetectionListener interface. Its onFaceDetection method receives information about detected faces (Face objects) and logs the coordinates of the first face (its rect field). Judging by the documentation, these coordinates are not screen coordinates but the (-1000, -1000) to (1000, 1000) system we already know; the documentation also kindly provides code for setting up a transformation matrix that accounts for the camera's position and rotation.

Face also has these fields:

id - the ID of the face
leftEye - coordinates of the center of the left eye, may be null
rightEye - coordinates of the center of the right eye, may be null
mouth - coordinates of the center of the mouth, may be null
score - the system's confidence that this really is a face, from 1 to 100. It is recommended to ignore faces with a score below 50.
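That score filter can be sketched in plain Java (DetectedFace is a stand-in class of mine; in real code you would filter Camera.Face objects by their score field the same way):

```java
import java.util.ArrayList;
import java.util.List;

public class FaceFilter {
    // Minimal stand-in for android.hardware.Camera.Face: only the score field.
    static class DetectedFace {
        final int score; // detection confidence, 1..100
        DetectedFace(int score) { this.score = score; }
    }

    // Keeps only faces the camera is reasonably sure about
    // (the docs recommend ignoring scores below 50).
    static List<DetectedFace> filterByScore(List<DetectedFace> faces, int minScore) {
        List<DetectedFace> result = new ArrayList<DetectedFace>();
        for (DetectedFace f : faces) {
            if (f.score >= minScore) {
                result.add(f);
            }
        }
        return result;
    }
}
```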

mCamera.setFaceDetectionListener(new MyFaceDetectionListener());

Here we hand the created listener to the camera with the setFaceDetectionListener method.

  public void startFaceDetection(){
      // Try starting Face Detection
      Camera.Parameters params = mCamera.getParameters();

      // start face detection only *after* preview has started
      if (params.getMaxNumDetectedFaces() > 0){
          // camera supports face detection, so can start it:
          mCamera.startFaceDetection();
      }
  }

This method enables detection mode, first checking that the option is supported.

Then startFaceDetection must be called after every call to startPreview. That usually happens in surfaceCreated and surfaceChanged (see Lesson 132).

In the next lesson:

- We study Loader and AsyncTaskLoader
