Seeing the human face correctly…

ML (Machine Learning) Kit was one of the key highlights of Google I/O 2018. The kit comprises:

  1. Image Labeling
  2. Text Recognition
  3. Face Detection
  4. Barcode Scanning
  5. Landmark Detection

From the Google site…

For text recognition using ML Kit, please refer to my article here.

For image labeling using ML Kit, please refer to my article here.

Let’s Begin….

To start with, we need to import the mlkit package:

mlkit 0.5.0 — A Flutter plugin to use the Firebase ML Kit.

We simply need to add this package to our pubspec.yaml file as follows.

ML Kit package
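As a sketch, the dependency entry in pubspec.yaml would look like this (version as named above; verify the latest version on pub.dev):

```yaml
dependencies:
  flutter:
    sdk: flutter
  mlkit: ^0.5.0
```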

along with this line in your Dart file: import 'package:mlkit/mlkit.dart';

Now, create a project in Firebase by referring to the article.

Please note that your app's package name should be the same in Firebase, in your project's AndroidManifest.xml (if building for Android), and in the app-level build.gradle (if building for Android).

Face Detection using MLKit in Flutter….

Include google-services.json in the android/app directory (one of the steps in the Firebase project setup).

VisionFace, a class of the mlkit library, stores the parameters detected from the selected image.

VisionFaceDetectorOptions is used for overriding any of the face detector's default settings. The available settings are:

  1. modeType : VisionFaceDetectorMode.Accurate / Fast
  2. landmarkType : VisionFaceDetectorLandmark.All / None
  3. classificationType : VisionFaceDetectorClassification.All / None
  4. minFaceSize : a double value (with default as 0.1)
  5. isTrackingEnabled : a boolean value (with default as false)
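The settings above can be sketched as follows; enum and parameter names are as exposed by the mlkit 0.5.0 plugin (verify against your version):

```dart
import 'package:mlkit/mlkit.dart';

// A sketch of configuring the face detector with non-default settings.
final options = new VisionFaceDetectorOptions(
  modeType: VisionFaceDetectorMode.Accurate, // favor accuracy over speed
  landmarkType: VisionFaceDetectorLandmark.All, // detect facial landmarks
  classificationType: VisionFaceDetectorClassification.All, // smiling / eyes-open
  minFaceSize: 0.1, // smallest face size relative to the image (the default)
  isTrackingEnabled: true, // assign tracking IDs to detected faces
);
```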

Next, create an instance of FirebaseVisionFaceDetector.

You can use the above settings with the instance of FirebaseVisionFaceDetector as

detector.detectFromBinary(_file?.readAsBytesSync(), options)

where detector is the instance of FirebaseVisionFaceDetector and options is the VisionFaceDetectorOptions instance described above.
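Putting these pieces together, a minimal sketch of the detection call might look like this (assuming _file is the image File picked by the user; names follow the mlkit 0.5.0 plugin):

```dart
import 'dart:io';
import 'package:mlkit/mlkit.dart';

// Sketch: run face detection on the bytes of the selected image file.
Future<List<VisionFace>> detectFaces(
    File _file, VisionFaceDetectorOptions options) async {
  final FirebaseVisionFaceDetector detector =
      FirebaseVisionFaceDetector.instance;
  final List<VisionFace> faces =
      await detector.detectFromBinary(_file?.readAsBytesSync(), options);
  return faces;
}
```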

Next, select the image containing a face using the floating action button. We are using images from the device in this program, but we can switch to the camera at any time by altering the source in the image-picking code.


For further details, please refer to my article here.
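A sketch of the image selection, assuming the commonly used image_picker plugin (a separate dependency, not part of mlkit); swapping ImageSource.gallery for ImageSource.camera switches the source to the camera:

```dart
import 'dart:io';
import 'package:image_picker/image_picker.dart';

// Sketch: pick an image from the device gallery.
// Change the source to ImageSource.camera to take a new photo instead.
Future<File> pickImage() async {
  return await ImagePicker.pickImage(source: ImageSource.gallery);
}
```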

After selecting the image, details from the image are extracted and shown on top of the image using a Stack widget. The parameters we receive from the library are:

  1. HeadEulerY : the head is rotated to the right by headEulerY degrees
  2. HeadEulerZ : the head is tilted sideways by headEulerZ degrees
  3. LeftEyeOpenProbability : probability that the left eye is open
  4. RightEyeOpenProbability : probability that the right eye is open
  5. SmilingProbability : probability that the face is smiling
  6. Tracking ID : an ID assigned to the face, if face tracking is enabled
  7. Rect : the bounding rectangle of the face within the entire image.
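Reading these values off a detection result can be sketched as below; property names follow the mlkit plugin's VisionFace class (verify against your plugin version):

```dart
import 'package:mlkit/mlkit.dart';

// Sketch: summarize the parameters of one detected face as display text.
String faceSummary(VisionFace face) {
  return 'Head rotated right by: ${face.headEulerAngleY} degrees\n'
      'Head tilted sideways by: ${face.headEulerAngleZ} degrees\n'
      'Left eye open probability: ${face.leftEyeOpenProbability}\n'
      'Right eye open probability: ${face.rightEyeOpenProbability}\n'
      'Smiling probability: ${face.smilingProbability}\n'
      'Tracking ID: ${face.trackingID}\n'
      'Bounds: ${face.rect}';
}
```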

Finally, we outline the detected face within the whole image using Flutter's Rect class.
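One way to draw that outline is a CustomPainter; this is a minimal sketch, assuming face.rect is in the coordinates of the image as displayed (if the image is scaled on screen, the rectangle must be scaled accordingly):

```dart
import 'package:flutter/material.dart';
import 'package:mlkit/mlkit.dart';

// Sketch: paint a rectangle around each detected face.
class FaceOutlinePainter extends CustomPainter {
  final List<VisionFace> faces;

  FaceOutlinePainter(this.faces);

  @override
  void paint(Canvas canvas, Size size) {
    final paint = Paint()
      ..style = PaintingStyle.stroke // outline only, no fill
      ..strokeWidth = 2.0
      ..color = Colors.red;
    for (var face in faces) {
      canvas.drawRect(face.rect, paint);
    }
  }

  @override
  bool shouldRepaint(FaceOutlinePainter oldDelegate) =>
      oldDelegate.faces != faces;
}
```

The painter would be layered over the image inside the Stack widget mentioned above, e.g. via a CustomPaint widget.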

Video demonstration:

For complete source code visit,
