We will cover briefly:

  1. What’s Dart Frog
  2. Create APIs (routes and middleware)
  3. Integrate Rekognition library 
  4. Architect a Dart backend
  5. Unit tests for the APIs using Mocktail
  6. Deploy the APIs to Cloud Run

What’s Dart Frog

Dart Frog logo

It’s a fast, minimalistic backend framework for Dart developed by Very Good Ventures. It’s built on top of shelf and mason.

This framework’s objective is to assist programmers in efficiently creating backends in Dart. Dart Frog’s current efforts are concentrated on streamlining backends that aggregate, compose, and normalize data from multiple sources. It also aims to increase the efficiency of Flutter/Dart developers by providing a unified tech stack that makes it possible to share models, tooling, and more!

Starting with Dart Frog

Dart Frog requires Dart ">=2.17.0 <3.0.0". The framework comes with a CLI tool called dart_frog. To install it:

# 📦 Install the dart_frog cli from pub.dev
dart pub global activate dart_frog_cli

At this point, dart_frog should be available as a command. You can verify by running dart_frog in your terminal.

Dart Frog Terminal

Let’s create a sample project using the command

# 🚀 Create a new project called "experiment_with_dartfrog"
dart_frog create experiment_with_dartfrog

This is the project structure

Structure of Dart Frog Project

It comes with the following dependencies

dart_frog (dependency)
mocktail (dev dependency)
test (dev dependency)
very_good_analysis (dev dependency)

and a simple API that returns the response Welcome to Dart Frog. To host the project locally and start the server:

# 🏁 Start the dev server
dart_frog dev

# 📦 Create a production build
dart_frog build

The dev command starts the server on localhost:8080 with hot-reload enabled.
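
To sanity-check the scaffolded project, we can query the root endpoint; the expected output is the default response mentioned above:

# 🔎 Query the default route
curl http://localhost:8080
# Welcome to Dart Frog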

Create APIs (routes and middleware)

We will be creating two APIs: rekognizeIfCelebrity and rekognizeTest.

A route in Dart Frog is made up of the onRequest function exported from a Dart file located in the routes directory. Each endpoint’s route is determined by its filename. Files named index.dart map to the / endpoint.

Routes in Dart Frog

For instance, a routes/api/v1/rekognizeIfCelebrity/index.dart file that exports an onRequest method will be accessible via the api/v1/rekognizeIfCelebrity endpoint.

All route handlers have access to RequestContext which can be used to access the incoming request as well as dependencies.

import 'dart:async';
import 'dart:io';

import 'package:dart_frog/dart_frog.dart';

FutureOr<Response> onRequest(RequestContext context) async {
  final method = context.request.method;

  // Only POST is allowed; _post is defined later in this file.
  if (method == HttpMethod.post) {
    return _post(context);
  }

  return Response(statusCode: HttpStatus.methodNotAllowed);
}

In this snippet, we extract the method and only allow POST requests. We customize the status code of the response via the statusCode parameter on the Response object. We added the async keyword so we can await futures within our handler before returning a Response.

Middleware

Dart Frog’s middleware enables you to run code both before and after a request is handled. Both the inbound request and the outbound response can be modified, dependencies can be provided, and more! This may be helpful for logging, authorization validation, etc.

final _dataSource = ImageRekognitionImpl();

Handler middleware(Handler handler) {
  return handler
      .use(requestLogger())
      .use(provider<ImageRekognitionRepo>((_) => _dataSource));
}

A middleware function is exported from a _middleware.dart file within a subdirectory of the routes folder. There can only be one middleware per route directory; routes/_middleware.dart is the middleware that handles all incoming requests.

Dependency Injection

Middleware is also useful for providing dependencies to a RequestContext via a provider. A provider is a special kind of middleware that can create and provide an instance of type T to the request context.

We then access the provided dependency from within a route handler using context.read<T>().
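
For example, reading the data source provided by the middleware above:

// Inside a route handler
final dataSource = context.read<ImageRekognitionRepo>();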

In the above example, we chain two middlewares: requestLogger (provided by the framework) and a provider that supplies ImageRekognitionImpl, our data source, using dependency injection.

Integrate Rekognition library

We use AWS Rekognition for detecting whether the uploaded image is of a celebrity or not.

What is AWS Rekognition?

Amazon Rekognition is an image and video analysis solution that uses deep learning to identify objects in an image. It is a highly scalable solution capable of highly accurate facial analysis, face comparison, and face search.

Amazon Rekognition includes a simple, easy-to-use API that can quickly

  • analyze any image uploaded via the API, or
  • analyze a file stored in Amazon S3

There are various use cases for using Rekognition and one of them is Celebrity recognition. According to the documentation,

Amazon Rekognition can recognize celebrities within supplied images and in videos. Amazon Rekognition can recognize thousands of celebrities across a number of categories, such as politics, sports, business, entertainment, and media.

Use the RecognizeCelebrities non-storage API operation to identify celebrities inside photos and to obtain more details about those who have been identified. You can use it to identify as many as 64 celebrities in an image and return links to celebrity web pages.

The input image can be provided as an image byte array (base64-encoded image bytes) or as an Amazon S3 object.

{
    "Image": {
        "Bytes": "/AoSiyvFpm....."
    }
}

The response is provided as an array of recognized celebrities and an array of unrecognized faces. A list of URLs, such as the celebrity’s IMDb or Wikidata link, is included in the celebrity object along with the celebrity’s name.

{
    "CelebrityFaces": [{
        "Face": {
            "Confidence": 99.99589538574219,
            "Emotions": [{
                "Confidence": 96.3981749057023,
                "Type": "Happy"
            }]
        },
        "Id": "3Ir0du6",
        "KnownGender": {
            "Type": "Male"
        },
        "MatchConfidence": 98.0,
        "Name": "Jeff Bezos",
        "Urls": ["www.imdb.com/name/nm1757263"]
    }],
    "OrientationCorrection": ""
}

Enter Dart!

We install the package aws_rekognition_api into our Dart app:

dart pub add aws_rekognition_api

Note: This requires the keys of your AWS account (access key and secret key), and the region.

final service = Rekognition(
  region: <regionKey>,
  credentials: AwsClientCredentials(
    accessKey: <accessKey>,
    secretKey: <secretkey>,
  ),
);

Architect a Dart backend

We create two packages:

  • rekognition: This includes the AWS package, the implementation of the APIs (interface implementation), response model classes, and the storage of private keys. All of this is consolidated as a library for other packages to install (as a dependency).
  • rekognition_data_source: This includes the abstract class that the rekognition package (the above one) implements. This package is consumed by the clients.
Packages

For creating a package, we use

dart create -t package <PACKAGE_NAME>

This ends up with a pubspec.yaml and a lib directory. The implementation code is placed under lib/src and is considered private.

Dart package

Other packages should not need to import anything from src/ directly. To make the lib/src APIs public, export the lib/src files from a file under lib.

rekognition_data_source

This package contains a file rekognition_data_source_base.dart with an abstract class that has two method declarations:

import 'dart:typed_data';

abstract class ImageRekognitionRepo {
  /// Checks an image that is already stored in an S3 bucket.
  Future<Map<String, List<CelebrityModel>>> recognizeCelebrity();

  /// Checks an image supplied as raw bytes.
  Future<Map<String, List<CelebrityModel>>> recognizeIfCelebrity(
    Uint8List? imageBytes,
  );
}
  • recognizeCelebrity: This endpoint checks whether the image (already present inside an AWS bucket) is of a celebrity.
  • recognizeIfCelebrity: This endpoint is more flexible and takes its input in the form of imageBytes.

This class is exposed as a library so that other packages can import it.

library rekognition_data_source;
export 'src/rekognition_data_source_base.dart';

rekognition

This package adds the dependency rekognition_data_source and implements the abstract class functions. We also add the aws_rekognition_api dependency inside this package.

We create a file rekognition_base.dart and implement the functions:

class ImageRekognitionImpl implements ImageRekognitionRepo

Before we jump into the implementation, we need a way to store sensitive keys inside our codebase. Introducing envied.

Envied — Storing sensitive keys

We create a .env file that contains the following. Note: The keys can be obtained from your AWS account credentials or using the AWS CLI

accesskey=<YOUR_ACCESS_KEY>
secretkey=<YOUR_SECRET_KEY>
regionkey=<YOUR_REGION>

Next, we create a Dart class env.dart, inside which we declare the EnviedField members. The varName of each field should match the one present inside the .env file.

import 'package:envied/envied.dart';

part 'env.g.dart';

@Envied(path: '.env', obfuscate: true)
abstract class Env {
  @EnviedField(varName: 'accesskey')
  static final accessKey = _Env.accessKey;

  @EnviedField(varName: 'secretkey')
  static final secretkey = _Env.secretkey;

  @EnviedField(varName: 'regionkey')
  static final regionKey = _Env.regionKey;
}
  • path: The file path of the .env file, relative to the project root, used to generate the environment variables.
  • obfuscate: The values are encrypted using a randomly generated key that is then XOR’d with the encrypted value when it is accessed for the first time.
  • varName: The environment variable name specified in the .env file.

Next, we add envied as a dependency, along with two dev dependencies, envied_generator and build_runner:

$ dart pub add envied
$ dart pub add --dev envied_generator
$ dart pub add --dev build_runner

Envied uses build_runner to generate the part file env.g.dart, which contains the values from your .env file:
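
# ⚙️ Generate env.g.dart (standard build_runner invocation)
dart run build_runner build --delete-conflicting-outputs

We can then access the generated values using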

print(Env.accessKey); // "VALUE"

Implementation of APIs

Using Envied, we create a Rekognition object, passing the accessKey and secretKey to AwsClientCredentials:

final service = Rekognition(
  region: Env.regionKey,
  credentials: AwsClientCredentials(
    accessKey: Env.accessKey,
    secretKey: Env.secretkey,
  ),
);

We call the method service.recognizeCelebrities, providing the image input as a Uint8List:

final celebrities = await service.recognizeCelebrities(
  image: Image(bytes: imageBytes),
);
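
Putting it together, here is a minimal sketch of the recognizeIfCelebrity implementation. The exact mapping is an assumption based on the CelebrityModel fields used in the tests later (id, name, matchConfidence, urls) and the {'data': ...} response shape:

@override
Future<Map<String, List<CelebrityModel>>> recognizeIfCelebrity(
  Uint8List? imageBytes,
) async {
  // Call AWS Rekognition with the raw image bytes.
  final response = await service.recognizeCelebrities(
    image: Image(bytes: imageBytes),
  );

  // Map each recognized Celebrity onto our own model class (assumed fields).
  final celebrities = (response.celebrityFaces ?? [])
      .map(
        (celebrity) => CelebrityModel(
          id: celebrity.id,
          name: celebrity.name,
          matchConfidence: celebrity.matchConfidence,
          urls: celebrity.urls ?? const [],
        ),
      )
      .toList();

  return {'data': celebrities};
}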

Coming back to routes/api/v1/rekognizeIfCelebrity, we expose this endpoint by creating a file called index.dart.

Exposing the route

Inside this file (index.dart), we create a function onRequest that exposes the endpoint. We get access to the incoming method type using context.request.method. Using this, we only allow the HttpMethod type POST and answer all the other methods with HttpStatus.methodNotAllowed.

FutureOr<Response> onRequest(RequestContext context) async {
  final method = context.request.method;

  if (method == HttpMethod.post) {
    return _post(context);
  }

  return Response(statusCode: HttpStatus.methodNotAllowed);
}

Inside our _post method, we use context.read to access the rekognition data source (provided inside _middleware.dart). Since we accept the image bytes in this API, the Content-Type is application/x-www-form-urlencoded, so we use context.request.formData() to read the contents of the request body.

Future<Response> _post(RequestContext context) async {
  // Data source provided by the middleware.
  final dataSource = context.read<ImageRekognitionRepo>();

  // Read the form-encoded body and extract the base64 image string.
  final requestData = await context.request.formData();
  final bytes = requestData['image']!;

  // Decode the base64 string into raw image bytes.
  final imageBytes = base64Decode(bytes);

  final celebrities = await dataSource.recognizeIfCelebrity(imageBytes);
  return Response.json(body: celebrities);
}

The requestData variable provides access to the form data, from which we read the image parameter (see the client section for how the input is passed to the API). Next, we decode the base64 string into a Uint8List using base64Decode. This function comes from the Dart SDK’s dart:convert library.

After getting the image bytes, we call our internal API recognizeIfCelebrity, passing in the bytes, and the response is sent over as JSON via the Response.json constructor.

Finally, we expose the files as a library rekognition which can be imported by other packages.

library rekognition;

export 'src/rekognition_base.dart';
export 'package:aws_rekognition_api/rekognition-2016-06-27.dart';
export 'src/models/models.dart';

At this point, this is what our package rekognition looks like:

Package Rekognition

recognizeCelebrity

We call the method service.recognizeCelebrities, which takes a required parameter image. This image input can be provided either as

  • bytes (as in the previous endpoint), or
  • an S3 object: pass an image stored in an S3 bucket using the S3Object property, as shown below. Images stored in an S3 bucket do not need to be base64-encoded.
final celebrities = await service.recognizeCelebrities(
  image: Image(
    s3Object: S3Object(
      bucket: 'YOUR BUCKET NAME',
      name: 'YOUR IMAGE NAME',
    ),
  ),
);

Note: The bucket should exist in your AWS account, and the image should be available inside it. For this demo, we created a public bucket and made the image public as well.

We place the image of a celebrity inside the bucket.

AWS Bucket

The response from the API comes in the form of a RecognizeCelebritiesResponse with a celebrityFaces parameter (details of each celebrity found in the image). This class is converted into our CelebrityModel and sent back to the client.

Inside routes/api/v1/rekognizeTest, we expose this endpoint by creating a file called index.dart.

Exposing the route

We create a function onRequest that exposes the endpoint. We get access to the incoming method type using context.request.method. Using this, we only allow the HttpMethod type GET and answer all the other methods with HttpStatus.methodNotAllowed.
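
Mirroring the POST handler shown earlier, a minimal sketch:

FutureOr<Response> onRequest(RequestContext context) async {
  final method = context.request.method;

  // Only GET is allowed for this endpoint.
  if (method == HttpMethod.get) {
    return _get(context);
  }

  return Response(statusCode: HttpStatus.methodNotAllowed);
}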

Inside our _get method, we use context.read to access the rekognition data source.

Future<Response> _get(RequestContext context) async {
  final dataSource = context.read<ImageRekognitionRepo>();
  final celebrities = await dataSource.recognizeCelebrity();

  return Response.json(body: celebrities);
}

Finally, we call our internal API recognizeCelebrity. The response is sent over as JSON via the Response.json constructor.

Creating clients using Flutter

flutter create client

The client app should support picking image files from the local file explorer and sending them to the AWS APIs. Choosing files from the system is done through file_picker. We include the package rekognition (created in the above section) and import http for making network calls.

Flutter Client

We create an ApiClient, which is simply a wrapper over the HTTP library. It includes a method called recognizeIfCelebrity that takes a Uint8List as its input parameter:

Future<List<CelebrityModel>?> recognizeIfCelebrity(
   Uint8List imageBytes,
)

Inside this method, we take the Uint8List and convert it into a string using base64Encode. This function comes from the Dart SDK’s dart:convert library and encodes bytes using base64.

final uri = Uri.parse('$_baseUrl/api/v1/rekognizeIfCelebrity');
final imageData = base64Encode(imageBytes);

final response = await _client.post(
   uri,
   body: {'image': imageData},
);

We send the base64-encoded string to our endpoint. The response we get back is a Map<String, dynamic>. This is then converted to our CelebrityModel (which comes from the rekognition package) and handed to the front end as a List<CelebrityModel>.
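
A minimal sketch of that decoding step, assuming the {'data': ...} response shape built in the route handler and a hypothetical CelebrityModel.fromJson factory:

// Decode the JSON body and map the 'data' list onto our models.
if (response.statusCode == 200) {
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  final data = body['data'] as List<dynamic>;
  return data
      .map((json) => CelebrityModel.fromJson(json as Map<String, dynamic>))
      .toList();
}
return null;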

Picking Files

Once the user clicks on the Pick File button, we call FilePicker.platform.pickFiles to retrieve the file from the underlying platform.

Pick File

We set the withData property to true. This makes the picked files’ byte data immediately available in memory as a Uint8List, which is useful for server uploads.

// pickFiles returns null if the user cancels, hence the null-aware access
final _paths = (await FilePicker.platform.pickFiles(
   withData: true,
))?.files;

// Get the bytes and call the service
final imageBytes = _paths?.first.bytes;
celebrities = await apiClient.recognizeIfCelebrity(imageBytes!);

Next, we get the bytes from the file and call our endpoint method, passing in the imageBytes. On the UI, we show the

  • image uploaded
  • celebrity name
  • match confidence score from the Rekognition API

Write unit tests for the APIs using Mocktail

We make use of package:test and package:mocktail for unit testing of the two route handlers and middleware.

Mocktail is a mock library for Dart inspired by mockito. It focuses on providing a familiar, simple API for creating mocks in Dart without the need for manual mocks or code generation.

Unit Tests

Note: The test files should follow the <name>_test.dart naming pattern.

We create mocks for RequestContext, Request, and others using package:mocktail. For mocking our data source, the only thing we need to add is a stub for context.read.
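
The mocks themselves are plain mocktail mocks, matching the names used in setUp below:

class _MockRequestContext extends Mock implements RequestContext {}

class _MockRequest extends Mock implements Request {}

class _MockUri extends Mock implements Uri {}

class _MockCelebrityDataSource extends Mock implements ImageRekognitionRepo {}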

We also provide default return values using registerFallbackValue. It is recommended to place all registerFallbackValue calls within setUpAll.
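
For instance, since recognizeIfCelebrity is stubbed with any() for its Uint8List argument, a minimal sketch:

setUpAll(() {
  // Register a fallback so any() can match Uint8List arguments.
  registerFallbackValue(Uint8List(0));
});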

We instantiate the mocks inside setUp, a function that runs before each test in the group.

setUp(() {
  context = _MockRequestContext();
  request = _MockRequest();
  uri = _MockUri();
  dataSource = _MockCelebrityDataSource();

  // Mocking data source
  when(() => context.read<ImageRekognitionRepo>()).thenReturn(dataSource);

  when(() => context.request).thenReturn(request);
  when(() => request.uri).thenReturn(uri);

  when(() => uri.queryParameters).thenReturn({});
});

We create two groups of tests. Inside the first, we check that HTTP methods other than POST are disallowed.

test('when method is DELETE', () async {
  // Arrange
  when(() => request.method).thenReturn(HttpMethod.delete);
  
  // Act  
  final response = await routefirst.onRequest(context);
  
  // Assert
  expect(response.statusCode, equals(HttpStatus.methodNotAllowed));
});

We stub the request method to return HttpMethod.delete. Next, we invoke onRequest with the mock request context to get a Response. Then, we assert that the response is what we expect; in this case, we check that the statusCode is HttpStatus.methodNotAllowed.

In the second group of tests, we test the POST endpoint and check that the response is 200.

test('responds with a 200', () async {
  // Arrange  
  when(() => request.method).thenReturn(HttpMethod.post);
  when(() => request.formData()).thenAnswer((_) async {
    return {'image': ''};
  });
  when(() => dataSource.recognizeIfCelebrity(any())).thenAnswer((_) async {
    return celebrityMap;
  });

  // Act
  final response = await routefirst.onRequest(context);

  // Assert
  expect(response.statusCode, equals(HttpStatus.ok));
  expect(
    response.json(),
    completion({
      'data': [celebrityResp.first.toJson()]
    }),
  );
});

We stub the request method to return HttpMethod.post. Next, we stub the formData and make it return the desired result.

Note: formData() returns a future containing the form data as a Map.

The fake response returned by the stubbed data source is defined as

final celebrityResp = [
  CelebrityModel(
    id: 'Z3He8D',
    matchConfidence: 99,
    urls: const ['www.wikidata.org/wiki/Q47213'],
    name: 'Warren Buffett',
  )
];

final celebrityMap = {'data': celebrityResp};

Next, we invoke onRequest with the mock request context. In the assertions, the status code is compared to HttpStatus.ok, and we also verify that the JSON response matches the expected output.

completion: Matches a [Future] that completes successfully with a value that matches [matcher].

We run the tests using

dart test test/routes/rekognizeIfCelebrity_test.dart

and we get the following results.

Unit tests passing

Deploy the APIs to Cloud Run

Once the endpoints are ready, we proceed with deployment. As of now, we can use Cloud Run, AWS App Runner, and DigitalOcean.

Before deploying to Cloud Run, make sure you already have a Google Cloud account and a GCP project with billing enabled. You’ll also need the gcloud CLI installed on your machine.
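
Assuming the gcloud CLI is installed, authenticate and point it at your project:

# 🔐 Authenticate and select the project
gcloud auth login
gcloud config set project [PROJECT_ID]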

Next, create the production build of our API:

dart_frog build

This creates a /build directory with all the files needed to deploy your API.

After this, we run the command for deploying APIs to Cloud Run

gcloud run deploy [SERVICE_NAME] \
  --source build \
  --project=[PROJECT_ID] \
  --region=[REGION] \
  --allow-unauthenticated
  • [SERVICE_NAME]: Name of your Cloud Run service you want to create/update
  • [PROJECT_ID]: ID of your GCP project
  • [REGION]: Region you wish to deploy to (ex: us-central1)

Running this command will do the following:

  • Build the container image and upload it to the Container Registry

Container Registry

  • Deploy the image to the specified Cloud Run service

Cloud Run

You can now access your API at the Service URL that is printed in the last line of output.

Cloud Run URL

If we curl the generated URL, we get the same response as we got from localhost.
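
For example, hitting the GET endpoint (the service URL below is a placeholder for the one gcloud prints):

# 🌐 Replace <SERVICE_URL> with your Cloud Run service URL
curl https://<SERVICE_URL>/api/v1/rekognizeTest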

Curl the Cloud Run URL

Source code
