
Build Android app for custom object detection (TF 2.x)

Using the TensorFlow Object Detection API (for the older Java version of the TF reference app)


Training a Deep Learning model for custom object detection using TensorFlow 2.x in Google Colab, converting it to a TFLite model, and deploying it on mobile devices like Android, iOS, Raspberry Pi, and IoT devices using the sample TFLite object detection app from TensorFlow’s GitHub.

IMPORTANT:

This tutorial is for TensorFlow 2.5 using the TensorFlow Object Detection API.

This tutorial uses the older Java version of the reference app. You can switch to previous versions on GitHub; use the following version of the Java reference app for this tutorial:

https://github.com/tensorflow/examples/tree/demo/lite/examples/object_detection/android


Roadmap


Objective: Build android app for custom object detection

In this article, I will train an object detection model for a custom object and convert it to a TFLite model so it can be deployed on Android, iOS, and IoT devices. Follow the 21 steps mentioned below. (The first 16 steps are the same as my previous article on training an ML model using TF 2. Since TFLite currently supports only SSD models, we will be using an SSD model here.)

( But first ✅ Subscribe to my YouTube channel: https://bit.ly/3Ap3sdi )

  1. Import Libraries
  2. Create customTF2, training, and data folders in your google drive
  3. Create and upload your image files and XML files
  4. Upload the generate_tfrecord.py file to the customTF2 folder in your drive
  5. Mount drive and link your folder
  6. Clone the TensorFlow models git repository & Install TensorFlow Object Detection API
  7. Test the model builder
  8.  Navigate to /mydrive/customTF2/data/ and Unzip the images.zip and annotations.zip files into the data folder
  9. Create test_labels & train_labels
  10. Create CSV and “label_map.pbtxt” files
  11. Create ‘train.record’ & ‘test.record’ files
  12. Download pre-trained model checkpoint
  13. Get the model pipeline config file, make changes to it, and put it inside the data folder
  14. Load Tensorboard
  15. Train the model
  16. Test your trained model
  17. Install tensorflow-nightly
  18. Export SSD TFlite graph
  19. Convert saved model to TFlite model
  20. Create TFLite metadata
  21. Download the TFLite model with metadata and deploy it on a mobile device

HOW TO BEGIN?

–Open my Colab notebook on your browser.

–Click on File in the menu bar and click on Save a copy in drive. This will open a copy of my Colab notebook on your browser which you can now use.

–Next, once you have opened the copy of my notebook and are connected to the Google Colab VM, click on Runtime in the menu bar and click on Change runtime type. Select GPU and click on save.


LET’S BEGIN !!

1) Import Libraries

import os
import glob
import xml.etree.ElementTree as ET
import pandas as pd
import tensorflow as tf

2) Create customTF2, training, and data folders in your google drive

Create a folder named customTF2 in your google drive.

Create another folder named training inside the customTF2 folder ( training folder is where the checkpoints will be saved during training ).

Create another folder named data inside the customTF2 folder.


3) Create and upload your image files and their corresponding labeled XML files.

Create a folder named images for your custom dataset images and create another folder named annotations for its corresponding PASCAL_VOC format labeled XML files.

Next, create their zip files and upload them to the customTF2 folder in your drive.

Make sure all the image files have the “.jpg” extension only.

Other formats like “.png”, “.jpeg”, or even “.JPG” will give errors since the generate_tfrecord and xml_to_csv scripts used here handle only “.jpg”. If you have images in other formats, either change the scripts accordingly or convert the images first (a small conversion sketch is shown below).
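A minimal conversion sketch is shown below. The IMAGES_DIR path is a placeholder for your own dataset folder, and remember that the <filename> field inside the matching XML annotations must also point to the new “.jpg” names.

# Minimal sketch: normalize every image in a folder to the ".jpg" extension
# (IMAGES_DIR is a placeholder; run this before zipping your images folder).
import glob
import os
from PIL import Image

IMAGES_DIR = 'images'

for path in glob.glob(os.path.join(IMAGES_DIR, '*')):
    base, ext = os.path.splitext(path)
    if ext in ('.png', '.PNG', '.jpeg', '.JPEG'):
        # Re-encode as JPEG and drop the original file
        Image.open(path).convert('RGB').save(base + '.jpg', 'JPEG')
        os.remove(path)
    elif ext == '.JPG':
        # Already JPEG data; just normalize the extension
        os.rename(path, base + '.jpg')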

For Datasets, you can check out my Dataset Sources at the bottom of this article in the credits section. 

Collecting an image dataset and labeling it to get the PASCAL_VOC XML annotations.

Labeling your Dataset

Input image example (Image1.jpg)

Original Photo by Ali Pazani from Pexels


You can use any software for labeling like the labelImg tool.

labelImg GUI for Image1.jpg


I use an open-source labeling tool called OpenLabeling with a very simple UI.

OpenLabeling Tool GUI


Click on the link below to know more about the labeling process and other software for it:

NOTE: Garbage In = Garbage Out. Choosing and labeling images is the most important part. Try to find good-quality images. The quality of the data goes a long way towards determining the quality of the result.

The output PASCAL_VOC labeled XML file looks as shown below:

Image1.xml

4) Upload the generate_tfrecord.py file to the customTF2 folder in your drive.

You can find the generate_tfrecord.py file here


5) Mount drive and link your folder

#mount drive

from google.colab import drive
drive.mount('/content/gdrive')

# this creates a symbolic link so that now the path /content/gdrive/My Drive/ is equal to /mydrive

!ln -s /content/gdrive/My Drive/ /mydrive
!ls /mydrive

6) Clone the TensorFlow models git repository & Install TensorFlow Object Detection API

# clone the tensorflow models on the colab cloud vm

!git clone --q https://github.com/tensorflow/models.git

# navigate to /models/research folder to compile protos

%cd models/research

# Compile protos.

!protoc object_detection/protos/*.proto --python_out=.

# Install TensorFlow Object Detection API.

!cp object_detection/packages/tf2/setup.py .
!python -m pip install .

7) Test the model builder

!python object_detection/builders/model_builder_tf2_test.py

8) Navigate to /mydrive/customTF2/data/ and Unzip the images.zip and annotations.zip files into the data folder

%cd /mydrive/customTF2/data/

# unzip the datasets and their contents so that they are now in /mydrive/customTF2/data/ folder

!unzip /mydrive/customTF2/images.zip -d .
!unzip /mydrive/customTF2/annotations.zip -d .

9) Create test_labels & train_labels

Current working directory is /mydrive/customTF2/data/

Divide annotations into test_labels(20%) and train_labels(80%).
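The exact commands are in the Colab notebook; a minimal sketch of this 80/20 split (assuming the unzipped annotations/ folder from step 8 and the current working directory /mydrive/customTF2/data/) could look like this:

# Minimal sketch: move ~20% of the XML annotations into test_labels/
# and the remaining ~80% into train_labels/.
import glob
import os
import random
import shutil

random.seed(1)

os.makedirs('test_labels', exist_ok=True)
os.makedirs('train_labels', exist_ok=True)

xml_files = sorted(glob.glob('annotations/*.xml'))
random.shuffle(xml_files)

split = int(0.2 * len(xml_files))
for f in xml_files[:split]:
    shutil.move(f, 'test_labels/')
for f in xml_files[split:]:
    shutil.move(f, 'train_labels/')

print(len(glob.glob('train_labels/*.xml')), 'train /',
      len(glob.glob('test_labels/*.xml')), 'test')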


10) Create the CSV files and the “label_map.pbtxt” file

Current working directory is /mydrive/customTF2/data/

Run the xml_to_csv script below to create test_labels.csv and train_labels.csv.

This script also creates the label_map.pbtxt file using the classes mentioned in the XML files.
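The full script is in the Colab notebook. A minimal sketch of what it does is shown below; it assumes the train_labels/ and test_labels/ folders from step 9, and the column names follow the usual layout expected by generate_tfrecord-style scripts.

# Minimal sketch of the xml_to_csv step: read PASCAL_VOC XMLs, write CSVs,
# and build label_map.pbtxt from the class names found in the annotations.
import glob
import xml.etree.ElementTree as ET
import pandas as pd

def xml_to_csv(folder):
    rows = []
    for xml_file in glob.glob(folder + '/*.xml'):
        root = ET.parse(xml_file).getroot()
        filename = root.find('filename').text
        width = int(root.find('size/width').text)
        height = int(root.find('size/height').text)
        for obj in root.findall('object'):
            bbox = obj.find('bndbox')
            rows.append((filename, width, height, obj.find('name').text,
                         int(bbox.find('xmin').text), int(bbox.find('ymin').text),
                         int(bbox.find('xmax').text), int(bbox.find('ymax').text)))
    columns = ['filename', 'width', 'height', 'class',
               'xmin', 'ymin', 'xmax', 'ymax']
    return pd.DataFrame(rows, columns=columns)

train_df = xml_to_csv('train_labels')
test_df = xml_to_csv('test_labels')
train_df.to_csv('train_labels.csv', index=False)
test_df.to_csv('test_labels.csv', index=False)

# id 0 is reserved for the background label, so class ids start at 1
classes = sorted(set(train_df['class']) | set(test_df['class']))
with open('label_map.pbtxt', 'w') as f:
    for i, name in enumerate(classes, start=1):
        f.write('item {\n    name: "%s"\n    id: %d\n}\n' % (name, i))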

The 3 files that are created, i.e. train_labels.csv, test_labels.csv, and label_map.pbtxt, look as shown below:

train_labels.csv
test_labels.csv
label_map.pbtxt

The train_labels.csv contains the name of all the train images, the classes in those images, and their annotations.

The test_labels.csv contains the name of all the test images, the classes in those images, and their annotations.

The label_map.pbtxt file contains the names of the classes from your labeled XML files. 

NOTE: I have 2 classes i.e. with_mask and without_mask.

Label map id 0 is reserved for the background label.
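For these two classes, the label_map.pbtxt file looks like this:

item {
    name: "with_mask"
    id: 1
}
item {
    name: "without_mask"
    id: 2
}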


11) Create train.record & test.record files

Current working directory is /mydrive/customTF2/data/

Run the generate_tfrecord.py script to create train.record and test.record files

#Usage:
#!python generate_tfrecord.py output.csv output_pb.txt /path/to/images output.tfrecords

#For train.record
!python /mydrive/customTF2/generate_tfrecord.py train_labels.csv  label_map.pbtxt images/ train.record

#For test.record
!python /mydrive/customTF2/generate_tfrecord.py test_labels.csv  label_map.pbtxt images/ test.record

The total number of image files is 1370. Since we divided the labels into two categories viz. train_labels(80%) and test_labels(20%), the number of files for “train.record” is 1096, and the number of files for “test.record” is 274.


12) Download pre-trained model checkpoint

Current working directory is /mydrive/customTF2/data/

You can choose any model for training depending upon your data and requirements. Read this blog for more info on which model to choose. The official list of detection model checkpoints for TensorFlow 2.x can be found here.

However, since TFLite doesn’t support all models right now, the options for this are limited at the moment. TensorFlow is working towards adding more models with TFLite support. Read more about TFLite compatible models for all ML modules like object detection, image classification, image segmentation, etc here.

Currently, TFLite supports only SSD models (excluding EfficientDet)

In this tutorial, I will use the ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8 model.

# Download the pre-trained model ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz into the data folder & unzip it

!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz

!tar -xzvf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz

13) Get the model pipeline config file, make changes to it and put it inside the data folder

Current working directory is /mydrive/customTF2/data/

Download ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config from /content/models/research/object_detection/configs/tf2. Make the required changes to it and upload it to the /mydrive/customTF2/data folder.

OR

Edit the config file from /content/models/research/object_detection/configs/tf2 in colab vm and copy the edited config file to the /mydrive/customTF2/data folder.

You can also find the pipeline config file inside the model checkpoint folder we just downloaded in the previous step.

You need to make the following changes to the config file (a sketch of the typical edits is shown after the batch-size note below):

Max batch size = available GPU memory (bytes) / 4 / (size of tensors + trainable parameters)
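As a reference, a sketch of the typical edits for this tutorial's setup is shown below. The field names come from the standard pipeline config for this SSD model; adjust num_classes, batch_size, and the paths to match your own drive layout. The "..." lines stand for the parts of the config that stay unchanged.

model {
  ssd {
    num_classes: 2    # number of classes in your label_map.pbtxt
    ...
  }
}
train_config {
  batch_size: 16    # lower this if you run out of GPU memory (see the formula above)
  fine_tune_checkpoint: "/mydrive/customTF2/data/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
  ...
}
train_input_reader {
  label_map_path: "/mydrive/customTF2/data/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/mydrive/customTF2/data/train.record"
  }
}
eval_input_reader {
  label_map_path: "/mydrive/customTF2/data/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/mydrive/customTF2/data/test.record"
  }
}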

Next, copy the edited config file.

# copy the edited config file from the configs/tf2 directory to the data/ folder in your drive

!cp /content/models/research/object_detection/configs/tf2/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config /mydrive/customTF2/data

The workspace at this point:

There are many data augmentation options that you can add. Check the full list here. For beginners, the above changes are sufficient.

Data Augmentation Suggestions (optional)

First, you should train the model using the sample config file with the above basic changes and see how well it does. If you are overfitting, then you might want to do some more image augmentations.

In the sample config file: random_horizontal_flip & ssd_random_crop are added by default. You could try adding the following as well:

(Note: Each image augmentation will increase the training time drastically)

  1. In train_config {}, add:
data_augmentation_options {
    random_adjust_contrast {
    }
  }
  data_augmentation_options {
    random_rgb_to_gray {
    }
  }
  data_augmentation_options {
    random_vertical_flip {
    }
  }
  data_augmentation_options {
    random_rotation90 {
    }
  }
  data_augmentation_options {
    random_patch_gaussian {
    }
  }

2. In model {} > ssd {} > box_predictor {}: set use_dropout to true. This will help you counter overfitting.

3. In eval_config: {} set num_examples to the number of testing images you have and remove max_evals to evaluate indefinitely

eval_config: {
  num_examples: 274 # set this to the number of test images we divided earlier
  num_visualizations: 20 # the number of visualization to see in tensorboard
}

14) Load Tensorboard

%load_ext tensorboard
%tensorboard --logdir '/content/gdrive/MyDrive/customTF2/training'

15) Train the model

Navigate to the object_detection folder in Colab VM

%cd /content/models/research/object_detection

15 (a) Training using model_main_tf2.py

Here {PIPELINE_CONFIG_PATH} points to the pipeline config and {MODEL_DIR} points to the directory in which training checkpoints and events will be written.

# Run the command below from the content/models/research/object_detection directory

"""
PIPELINE_CONFIG_PATH=path/to/pipeline.config
MODEL_DIR=path to training checkpoints directory
NUM_TRAIN_STEPS=50000
SAMPLE_1_OF_N_EVAL_EXAMPLES=1

python model_main_tf2.py -- \
--model_dir=$MODEL_DIR --num_train_steps=$NUM_TRAIN_STEPS \
--sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
--pipeline_config_path=$PIPELINE_CONFIG_PATH \
--alsologtostderr
"""

!python model_main_tf2.py --pipeline_config_path=/mydrive/customTF2/data/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --model_dir=/mydrive/customTF2/training --alsologtostderr

NOTE :

For best results, stop the training once the loss drops below 0.1 if possible; otherwise train until the loss stops changing significantly for a while. Ideally the loss should be below 0.05, but get it as low as you can without overfitting the model, and don't keep adding training steps once the model has already converged and the loss barely decreases.

You can set the number of steps to 50000 and check whether the loss goes below 0.1; if not, retrain the model with a higher number of steps.

The output will normally look like it has “frozen”, but DO NOT rush to cancel the process. The training outputs logs only every 100 steps by default, therefore if you wait for a while, you should see a log for the loss at step 100. The time you should wait can vary greatly, depending on whether you are using a GPU and the chosen value for batch_size in the config file, so be patient.


15 (b) Evaluation using model_main_tf2.py (Optional)

You can run this in parallel by opening another colab notebook and running this command simultaneously along with the training command above (don’t forget to mount the drive, clone the TF git repo and install the TF2 object detection API there as well). This will give you validation loss, mAP, etc so you have a better idea of how your model is performing.

Here {CHECKPOINT_DIR} points to the directory with checkpoints produced by the training job. Evaluation events are written to {MODEL_DIR/eval}.

# Run the command below from the content/models/research/object_detection directory

"""
PIPELINE_CONFIG_PATH=path/to/pipeline.config
MODEL_DIR=path to training checkpoints directory
CHECKPOINT_DIR=${MODEL_DIR}
NUM_TRAIN_STEPS=50000
SAMPLE_1_OF_N_EVAL_EXAMPLES=1

python model_main_tf2.py -- \
--model_dir=$MODEL_DIR --num_train_steps=$NUM_TRAIN_STEPS \
--checkpoint_dir=${CHECKPOINT_DIR} \
--sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
--pipeline_config_path=$PIPELINE_CONFIG_PATH \
--alsologtostderr
"""

!python model_main_tf2.py --pipeline_config_path=/mydrive/customTF2/data/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --model_dir=/mydrive/customTF2/training/ --checkpoint_dir=/mydrive/customTF2/training/ --alsologtostderr

RETRAINING THE MODEL ( in case you get disconnected )

If you get disconnected or lose your session on Colab VM, you can start your training where you left off as the checkpoint is saved on your drive inside the training folder. To restart the training simply run steps 1, 5, 6, 7, 14, and 15.

Note that since we have all the files required for training like the record files, our edited pipeline config file, the label_map file, and the model checkpoint folder, we do not need to create these again.

The model_main_tf2.py script saves the checkpoint every 1000 steps. The training automatically restarts from the last saved checkpoint itself.

However, if you see that it doesn’t restart training from the last checkpoint you can make 1 change in the pipeline config file. Change fine_tune_checkpoint to where your latest trained checkpoints have been written and have it point to the latest checkpoint as shown below:

fine_tune_checkpoint: "/mydrive/customTF2/training/ckpt-X" (where ckpt-X is the latest checkpoint)

Read this TensorFlow Object Detection API tutorial to know more about the training process for TF2.


16) Test your trained model

Export inference graph

Current working directory is /content/models/research/object_detection

!python exporter_main_v2.py --trained_checkpoint_dir=/mydrive/customTF2/training --pipeline_config_path=/content/gdrive/MyDrive/customTF2/data/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --output_directory /mydrive/customTF2/data/inference_graph

Note: The trained_checkpoint_dir parameter in the above command needs the path to the training directory. There is a file called “checkpoint” which has all the model paths and the latest model checkpoint path saved in it. So it automatically uses the latest checkpoint. In my case, the checkpoint file had ckpt-36 written in it for the latest model_checkpoint_path.

For pipeline_config_path give the path to the edited config file we used to train the model above.


Test your trained Object Detection model on images

Current working directory is /content/models/research/object_detection

This step is optional.

# Different font-type and font-size for labels text

!wget https://freefontsdownload.net/download/160187/arial.zip
!unzip arial.zip -d .

%cd utils/
!sed -i "s/font = ImageFont.truetype('arial.ttf', 24)/font = ImageFont.truetype('arial.ttf', 50)/" visualization_utils.py
%cd ..

Test your trained model
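The full test script is in the Colab notebook. A minimal sketch based on the standard TF2 Object Detection API inference example is shown below; the test image path is a placeholder, and the other paths follow this tutorial's folder layout.

import numpy as np
import tensorflow as tf
from PIL import Image
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils

# Paths follow this tutorial's folder layout; IMAGE_PATH is a placeholder.
PATH_TO_SAVED_MODEL = '/mydrive/customTF2/data/inference_graph/saved_model'
PATH_TO_LABELS = '/mydrive/customTF2/data/label_map.pbtxt'
IMAGE_PATH = '/mydrive/customTF2/data/images/Image1.jpg'

# Load the exported detection model and the label map
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)
category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=True)

# Run inference on a single image
image_np = np.array(Image.open(IMAGE_PATH).convert('RGB'))
input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]
detections = detect_fn(input_tensor)

num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
              for key, value in detections.items()}
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)

# Draw the boxes, classes, and scores on the image and save the result
viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np,
    detections['detection_boxes'],
    detections['detection_classes'],
    detections['detection_scores'],
    category_index,
    use_normalized_coordinates=True,
    min_score_thresh=0.5)
Image.fromarray(image_np).save('detection_output.jpg')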

For testing on webcam capture or videos, use this colab notebook.


CONVERTING THE TRAINED SSD MODEL TO TFLITE MODEL

17) Install tf-nightly

(TFLite converter works better with tf-nightly. Using tf-nightly is recommended. You can also try using the latest TensorFlow stable version.)

!pip install tf-nightly

NOTE: If running the above command asks you to restart runtime and you lose all the local variables in your colab VM, you have to run steps 5 & 6 again to mount the drive, clone the TF models repo and install object detection API. After running those steps again run step 18 below.


18) Export SSD TFlite graph

Current working directory is /content/models/research/object_detection

%cd /content/models/research/object_detection

!python export_tflite_graph_tf2.py --pipeline_config_path /content/gdrive/MyDrive/customTF2/data/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --trained_checkpoint_dir /mydrive/customTF2/training --output_directory /mydrive/customTF2/data/tflite 

19) Convert the TensorFlow saved model to TFlite model

Check input and output tensor names

!saved_model_cli show --dir /mydrive/customTF2/data/tflite/saved_model --tag_set serve --all

Convert to TFLite – Use either Method (a) or Method (b) to convert to TFLite.

Method (a):- Using the command-line tool

Convert the saved model to TFLite using the tflite_convert command. This is the simpler method, used for basic model conversion, and I recommend beginners start with it. The second method, the Python API, is generally recommended since it has more support and features, such as optimizations and post-training quantization, so you can start with the command-line tool for testing and move to the Python API later on.

Run the following code block to create the TFLite model using tflite_convert command-line tool.

# The default inference type is Floating-point.

%cd /mydrive/customTF2/data/

!tflite_convert --saved_model_dir=tflite/saved_model --output_file=tflite/detect.tflite
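To sanity-check the converted model, you can load it with the TFLite Interpreter and inspect its input and output tensors (a minimal sketch):

import tensorflow as tf

# Load the converted model and print its input/output tensor details
interpreter = tf.lite.Interpreter(model_path='/mydrive/customTF2/data/tflite/detect.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())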


Method (b):- Using Python API 

Convert the saved model to TFLite using the Python API. This is the better option according to the TensorFlow documentation, since we can apply other features and optimizations, such as post-training quantization, that can reduce the model size and also improve CPU and hardware accelerator latency. You can change the code below depending on your requirements. Read the links just below these code blocks to get a broader understanding of why this approach is better, and try it once you understand the basics of the whole process of creating a TFLite model.

# Navigate to the data folder

%cd /mydrive/customTF2/data/

Run the following code block to create the TFLite model using Python API. You can use whatever inference type or apply any optimizations you want.

'''*******************************
   FOR FLOATING-POINT INFERENCE
**********************************'''

import tensorflow as tf

saved_model_dir = '/mydrive/customTF2/data/tflite/saved_model'

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("/mydrive/customTF2/data/tflite/detect.tflite", "wb").write(tflite_model)


'''**************************************************
#  FOR FLOATING-POINT INFERENCE WITH OPTIMIZATIONS
#**************************************************'''

# import tensorflow as tf
# converter = tf.lite.TFLiteConverter.from_saved_model('/mydrive/customTF2/data/tflite/saved_model',signature_keys=['serving_default'])
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.experimental_new_converter = True
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,  
#                                        tf.lite.OpsSet.SELECT_TF_OPS]
# tflite_model = converter.convert()

# with tf.io.gfile.GFile('/mydrive/customTF2/data/tflite/detect.tflite', 'wb') as f:
#   f.write(tflite_model)


'''**********************************
    FOR DYNAMIC RANGE QUANTIZATION 
*************************************
 The model is now a bit smaller with quantized weights, but other variable data is still in float format.'''


# import tensorflow as tf

# converter = tf.lite.TFLiteConverter.from_saved_model('/mydrive/customTF2/data/tflite/saved_model',signature_keys=['serving_default'])
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# tflite_quant_model = converter.convert()

# with tf.io.gfile.GFile('/mydrive/customTF2/data/tflite/detect.tflite', 'wb') as f:
#   f.write(tflite_quant_model)


# '''***********************************************************************
#    FOR INTEGER WITH FLOAT FALLBACK QUANTIZATION WITH DEFAULT OPTMIZATIONS 
# **************************************************************************
# Now all weights and variable data are quantized, and the model is significantly smaller compared to the original TensorFlow Lite model.
# However, to maintain compatibility with applications that traditionally use float model input and output tensors, 
# the TensorFlow Lite Converter leaves the model input and output tensors in float'''

# import tensorflow as tf
# import numpy as np

# saved_model_dir = '/mydrive/customTF2/data/tflite/saved_model'

# def representative_dataset():
#     for _ in range(100):
#       data = np.random.rand(1, 320, 320, 3)
#       yield [data.astype(np.float32)]

# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.representative_dataset = representative_dataset
# tflite_quant_model = converter.convert()

# with open('/mydrive/customTF2/data/tflite/detect.tflite', 'wb') as f:
#   f.write(tflite_quant_model)


# '''*********************************
#   FOR FULL INTEGER QUANTIZATION
# ************************************
# The internal quantization remains the same as previous float fallback quantization method, 
# but you can see the input and output tensors here are also now integer format'''

# import tensorflow as tf
# import numpy as np

# saved_model_dir = '/mydrive/customTF2/data/tflite/saved_model'

# def representative_dataset():
#     for _ in range(100):
#       data = np.random.rand(1, 320, 320, 3)
#       yield [data.astype(np.float32)]

# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.representative_dataset = representative_dataset
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# converter.inference_input_type = tf.uint8  
# converter.inference_output_type = tf.uint8 
# tflite_quant_model_full_int = converter.convert()

# with open('/mydrive/customTF2/data/tflite/detect.tflite', 'wb') as f:
#   f.write(tflite_quant_model_full_int)


To learn more about the above 2 methods, optimizations, post-training quantization and why we might need them, check out the links below:

TensorFlow Lite converter

Post-training quantization | TensorFlow Lite

Model optimization | TensorFlow Lite

Post-training quantization – Colab Notebook by TensorFlow

NOTE: All the above conversions create a model named “detect.tflite”. I have used one conversion and commented out the rest; if you run more than one, the later one will overwrite the earlier one. My suggestion is to run one of these conversions first and create the TFLite model with metadata in the next step. Once you have the “detect.tflite” with metadata for one model, download it; then you can come back, re-run this step for another conversion with optimizations and post-training quantization, and create its TFLite model with metadata as well. I have used the same file name for all the conversions because it is also used in the commands below; if you use a different name, make the corresponding changes in the next steps.


20) Create TFLite metadata

The new TensorFlow Lite sample for object detection requires that the final TFLite model should have metadata attached to it in order to run. You can read more about this on the official TensorFlow site here. Run the following steps to get the TFLite model with metadata.

Install tflite_support_nightly package

!pip install tflite_support_nightly

Create a separate folder named “tflite_with_metadata” inside the “tflite” folder to save the final TFLite model with metadata added to it.

%cd /mydrive/customTF2/data/
%cd tflite/
!mkdir tflite_with_metadata
%cd ..

Create and Upload “labelmap.txt” file

Create and upload a “labelmap.txt” file, which we will also use inside Android Studio later. This file is different from the “label_map.pbtxt” file we used in Steps 10 & 11: this “labelmap.txt” file only has the names of the classes, one per line, and nothing more. Upload this file to the /mydrive/customTF2/data folder. The labelmap.txt file looks as shown below:
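For the two classes used in this tutorial, labelmap.txt would simply contain:

with_mask
without_mask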

Run the code in the code block below to create the TFLite model with metadata.

(NOTE: Change the paths in lines 14, 15, 16 & 87 in the code block below to your own paths, but only if you are using different paths for your files. If you are following this tutorial exactly, you can leave them as they are.)

from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils
from tflite_support import metadata
import flatbuffers
import os
from tensorflow_lite_support.metadata import metadata_schema_py_generated as _metadata_fb
from tensorflow_lite_support.metadata.python import metadata as _metadata
from tensorflow_lite_support.metadata.python.metadata_writers import metadata_info
from tensorflow_lite_support.metadata.python.metadata_writers import metadata_writer
from tensorflow_lite_support.metadata.python.metadata_writers import writer_utils

ObjectDetectorWriter = object_detector.MetadataWriter

_MODEL_PATH = "/mydrive/customTF2/data/tflite/detect.tflite"
_LABEL_FILE = "/mydrive/customTF2/data/labelmap.txt"
_SAVE_TO_PATH = "/mydrive/customTF2/data/tflite/tflite_with_metadata/detect.tflite"

writer = ObjectDetectorWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [127.5], [127.5], [_LABEL_FILE])
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)

# Verify the populated metadata and associated files.
displayer = metadata.MetadataDisplayer.with_model_file(_SAVE_TO_PATH)
print("Metadata populated:")
print(displayer.get_metadata_json())
print("Associated file(s) populated:")
print(displayer.get_packed_associated_file_list())

model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "SSD_Detector"
model_meta.description = (
    "Identify which of a known set of objects might be present and provide "
    "information about their positions within the given image or a video "
    "stream.")

# Creates input info.
input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "image"
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
input_meta.content.contentProperties.colorSpace = (
    _metadata_fb.ColorSpaceType.RGB)
input_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.ImageProperties)
input_normalization = _metadata_fb.ProcessUnitT()
input_normalization.optionsType = (
    _metadata_fb.ProcessUnitOptions.NormalizationOptions)
input_normalization.options = _metadata_fb.NormalizationOptionsT()
input_normalization.options.mean = [127.5]
input_normalization.options.std = [127.5]
input_meta.processUnits = [input_normalization]
input_stats = _metadata_fb.StatsT()
input_stats.max = [255]
input_stats.min = [0]
input_meta.stats = input_stats

# Creates outputs info.
output_location_meta = _metadata_fb.TensorMetadataT()
output_location_meta.name = "location"
output_location_meta.description = "The locations of the detected boxes."
output_location_meta.content = _metadata_fb.ContentT()
output_location_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.BoundingBoxProperties)
output_location_meta.content.contentProperties = (
    _metadata_fb.BoundingBoxPropertiesT())
output_location_meta.content.contentProperties.index = [1, 0, 3, 2]
output_location_meta.content.contentProperties.type = (
    _metadata_fb.BoundingBoxType.BOUNDARIES)
output_location_meta.content.contentProperties.coordinateType = (
    _metadata_fb.CoordinateType.RATIO)
output_location_meta.content.range = _metadata_fb.ValueRangeT()
output_location_meta.content.range.min = 2
output_location_meta.content.range.max = 2

output_class_meta = _metadata_fb.TensorMetadataT()
output_class_meta.name = "category"
output_class_meta.description = "The categories of the detected boxes."
output_class_meta.content = _metadata_fb.ContentT()
output_class_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_class_meta.content.contentProperties = (
    _metadata_fb.FeaturePropertiesT())
output_class_meta.content.range = _metadata_fb.ValueRangeT()
output_class_meta.content.range.min = 2
output_class_meta.content.range.max = 2
label_file = _metadata_fb.AssociatedFileT()
label_file.name = os.path.basename("labelmap.txt")
label_file.description = "Label of objects that this model can recognize."
label_file.type = _metadata_fb.AssociatedFileType.TENSOR_VALUE_LABELS
output_class_meta.associatedFiles = [label_file]

output_score_meta = _metadata_fb.TensorMetadataT()
output_score_meta.name = "score"
output_score_meta.description = "The scores of the detected boxes."
output_score_meta.content = _metadata_fb.ContentT()
output_score_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_score_meta.content.contentProperties = (
    _metadata_fb.FeaturePropertiesT())
output_score_meta.content.range = _metadata_fb.ValueRangeT()
output_score_meta.content.range.min = 2
output_score_meta.content.range.max = 2

output_number_meta = _metadata_fb.TensorMetadataT()
output_number_meta.name = "number of detections"
output_number_meta.description = "The number of the detected boxes."
output_number_meta.content = _metadata_fb.ContentT()
output_number_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_number_meta.content.contentProperties = (
    _metadata_fb.FeaturePropertiesT())

# Creates subgraph info.
group = _metadata_fb.TensorGroupT()
group.name = "detection result"
group.tensorNames = [
    output_location_meta.name, output_class_meta.name,
    output_score_meta.name
]
subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
subgraph.outputTensorMetadata = [
    output_location_meta, output_class_meta, output_score_meta,
    output_number_meta
]
subgraph.outputTensorGroups = [group]
model_meta.subgraphMetadata = [subgraph]

b = flatbuffers.Builder(0)
b.Finish(
    model_meta.Pack(b),
    _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()

NOTE: You can add your own name and description for the metadata information in the block above. I have used the general SSD_Detector name and description.


21) Download the TFLite model with metadata and adjust the TFLite Object Detection sample app

IMPORTANT: Download the TensorFlow Lite examples archive from here and unzip it. (You can also switch to a previous version on GitHub using the Branches/Tags dropdown to download a previous version. The latest version might only have the files for Kotlin. Download any older version for the Java files that we need in this tutorial)

OR

Use this link to download the older version I have used in this tutorial from my GitHub repository.

Next, extract the files. You will find an object detection folder inside.

C:\Users\zizou\Downloads\examples-master\examples-master\lite\examples\object_detection

Next, copy the detect.tflite model with metadata and the labelmap.txt file inside the assets folder in the object detection Android app.

…examples-master\examples-master\lite\examples\object_detection\android\app\src\main\assets

Next, make changes in the code as mentioned on the TensorFlow 2 GitHub link given below.

Running TF2 Detection API Models on mobile

The changes to the code are as follows:

In the app’s build.gradle, comment out the line that downloads the default model so that your custom model in the assets folder is used instead:

// apply from:'download_model.gradle'

For a quantized model

 private static final boolean TF_OD_API_IS_QUANTIZED = true;
 private static final String TF_OD_API_MODEL_FILE = "detect.tflite";
 private static final String TF_OD_API_LABELS_FILE = "labelmap.txt"; 

For a floating-point model

 private static final boolean TF_OD_API_IS_QUANTIZED = false;
 private static final String TF_OD_API_MODEL_FILE = "detect.tflite";
 private static final String TF_OD_API_LABELS_FILE = "labelmap.txt";

IMPORTANT: Also set the detector input size to match the 320x320 model we trained:

private static final int TF_OD_API_INPUT_SIZE = 320;

Implementation solutions

This object detection Android reference app demonstrates two implementation solutions:

(1) lib_task_api that leverages the out-of-box API from the TensorFlow Lite Task Library;

(2) lib_interpreter that creates the custom inference pipeline using the TensorFlow Lite Interpreter Java API.

For beginners, you can just leave it as it is. The default implementation is lib_task_api. 

IMPORTANT:

You can use the default implementation i.e. lib_task_api with the model trained using the above tutorial and that will work fine even with the latest TensorFlow version. However for the lib_interpreter implementation, for TensorFlow 2.6 and above you might have to tweak the code in the lib_interpreter files if you get an output tensor dimension error.

Change the order of the indices in the “TFLiteObjectDetectionAPIModel.java” file inside lib_interpreter.

Change the order from:

outputMap.put(0, outputLocations);
outputMap.put(1, outputClasses);
outputMap.put(2, outputScores);
outputMap.put(3, numDetections);

To:

outputMap.put(0, outputScores);
outputMap.put(1, outputLocations);
outputMap.put(2, numDetections);
outputMap.put(3, outputClasses);

To learn more about these implementations read the following TensorFlow docs.

https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview

https://www.tensorflow.org/lite/inference_with_metadata/lite_support

The build.gradle inside the app folder shows how to change flavorDimensions "tfliteInference" to switch between the two solutions.

Inside Android Studio, you can change the build variant to whichever one you want to build and run — just go to Build > Select Build Variant and select one from the drop-down menu. See configure product flavors in Android Studio for more details.


NOTE:

The dataset I have collected for mask detection contains mostly close-up images. For more long-shot images you can search online. There are many sites where you can download labeled and unlabeled datasets. I have given a few links at the bottom under Dataset Sources. I have also given a few links for mask datasets. Some of them have more than 10,000 images.

Though there are certain tweaks and changes we can make to our training config file or add more images to the dataset for every type of object class through augmentation, we have to be careful so that it does not cause overfitting which affects the accuracy of the model.

For beginners, you can start simply by using the config file I have uploaded on my GitHub. I have also uploaded my mask images dataset along with the PASCAL_VOC format labeled XML files, which, although it might not be the best, will give you a good start on how to train your own custom object detector using an SSD model. You can find a labeled dataset of better quality, or find an unlabeled dataset and label it yourself, later.

I have trained this app on a particular scenario for a person wearing or not wearing a face mask and my dataset mostly had close-up images as mentioned above. If we use this app for other scenarios it might give some false positives. You can train the model for your scenario by training on the right dataset. Moreover, if you want to exclude certain objects in your scenario, you can train those objects and then write code in your app to exclude those objects and include only the ones you want.

There are many ways you can customize these ML apps and also handle the false positives in these apps. You can find scripts for that online. This tutorial shows you how to get started with mobile ML. Have fun!

Original Video by Max Fischer from Pexels

My GitHub

Files for training

Train Object Detection model using TF 2


My custom mask detection app:


My mask dataset

https://www.kaggle.com/techzizou/labeled-mask-dataset-pascal-voc-format

My Colab notebook for this

Google Colaboratory Notebook


Check out my Youtube Videos on this!

Part — 1

Part — 2



CREDITS

Documentation / References

Dataset Sources

You can download datasets for many objects from the sites mentioned below. These sites also contain images of many classes of objects along with their annotations/labels in multiple formats such as the YOLO_DARKNET txt files and the PASCAL_VOC xml files.

Mask Dataset Sources

More Mask Datasets


TROUBLESHOOTING:


ERROR 1 ) ANDROID ERROR: Output tensor at index 0 is expected to have 3 dimensions, found 2.

The lib_task_api implementation works fine with the latest version of TensorFlow 2. 

However, for lib_interpreter implementation, you have to make some changes in the code for it to work with the TFLite model created using Tensorflow 2.6 and above. Make the following change and it will work:

Change the order of the indices in the “TFLiteObjectDetectionAPIModel.java” file inside lib_interpreter.

Change the order from:

outputMap.put(0, outputLocations);
outputMap.put(1, outputClasses);
outputMap.put(2, outputScores);
outputMap.put(3, numDetections);

To:

outputMap.put(0, outputScores);
outputMap.put(1, outputLocations);
outputMap.put(2, numDetections);
outputMap.put(3, outputClasses);

ERROR 2) OPENCV ERROR

If you get an error for _registerMatType cv2 above, this might be because of OpenCV version mismatches in Colab. Run !pip list|grep opencv to see the versions of OpenCV packages installed i.e. opencv-python, opencv-contrib-python & opencv-python-headless. The versions will be different which is causing this error. This error will go away when colab updates its supported versions. For now, you can fix this by simply uninstalling and installing OpenCV packages.

Check versions:

!pip list|grep opencv

Use the following 2 commands if only opencv-python-headless has a different version:

!pip uninstall opencv-python-headless --y

!pip install opencv-python-headless==4.1.2.30

Or use the following commands if other opencv packages are of different versions. Uninstall and install all with the same version.

!pip uninstall opencv-python --y
!pip uninstall opencv-contrib-python --y
!pip uninstall opencv-python-headless --y

!pip install opencv-python==4.5.4.60
!pip install opencv-contrib-python==4.5.4.60
!pip install opencv-python-headless==4.5.4.60

ERROR 3) DNN Error

DNN library is not found

This error is due to version mismatches in the Google Colab environment, which can happen for 2 reasons. First, the default TensorFlow version in Google Colab is currently 2.8.0, but the Object Detection API we install in step 6 pulls in TensorFlow 2.9.0, which causes a mismatch.

Second, the default cuDNN version in Google Colab is currently 8.0.5, but TF 2.8 and above need 8.1.0. This also causes a version mismatch.

This error will go away when Colab updates its packages. But for temporary fixes, after searching on many forums online and looking at responses from members of the Google Colab team, the following are the 2 possible solutions I can recommend:

SOLUTION 1)

This is the easiest fix, however as per the comment of a Google Colab team member on a forum, this is not the best practice and is not safe. This can also cause mismatches with other packages or libraries. But as a temporary workaround here, this will work.

Run the following command before the training step. This will update the cuDNN version and you will have no errors after that.

!apt install --allow-change-held-packages libcudnn8=8.1.0.77-1+cuda11.2

SOLUTION 2)

In this method, you can edit the package versions to be installed in the TensorFlow Object Detection API so that it is the same as the default version for Colab.

We divide step 6 into 2 sections.

Section 1:

# clone the tensorflow models on the colab cloud vm
!git clone --q https://github.com/tensorflow/models.git

#navigate to /models/research folder to compile protos
%cd models/research

# Compile protos.
!protoc object_detection/protos/*.proto --python_out=.

The above section 1 will clone the TF models git repository.

After that, you can edit the file at object_detection/packages/tf2/setup.py.
Change the REQUIRED_PACKAGES list to include the following 4 lines after the pandas package line:

    'tensorflow==2.8.0',
    'tf-models-official==2.8.0',
    'tensorflow_io==0.23.1',
    'keras==2.8.0'

Note: I have written TensorFlow 2.8.0 above as it is the default Google Colab version as of now.

Next, after that, you can run section 2 of step 6 shown below to install the TF2 OD API with the updated setup.py file.

Section 2:

# Install TensorFlow Object Detection API.
!cp object_detection/packages/tf2/setup.py .
!python -m pip install .

This will install the TensorFlow Object Detection API with TensorFlow 2.8.0 and the other required packages with the updated versions we specified in the setup.py file.

Now you will be able to run the training step without any errors.


ERROR 4) TypeError: EndVector() missing 1 required positional argument: ‘vectorNumElems’

This error is caused due to flatbuffers version mismatches. Downgrade the flatbuffers version from 2.0 to 1.12 and it will fix this error.

!pip install flatbuffers==1.12
