keras (version 2.8.0)

application_mobilenet_v3: Instantiates the MobileNetV3Large and MobileNetV3Small architectures

Description

Instantiates the MobileNetV3Large and MobileNetV3Small architectures.

Usage

application_mobilenet_v3_large(
  input_shape = NULL,
  alpha = 1,
  minimalistic = FALSE,
  include_top = TRUE,
  weights = "imagenet",
  input_tensor = NULL,
  classes = 1000L,
  pooling = NULL,
  dropout_rate = 0.2,
  classifier_activation = "softmax",
  include_preprocessing = TRUE
)

application_mobilenet_v3_small(
  input_shape = NULL,
  alpha = 1,
  minimalistic = FALSE,
  include_top = TRUE,
  weights = "imagenet",
  input_tensor = NULL,
  classes = 1000L,
  pooling = NULL,
  dropout_rate = 0.2,
  classifier_activation = "softmax",
  include_preprocessing = TRUE
)

Arguments

input_shape

Optional shape vector, to be specified if you would like to use a model with an input image resolution that is not c(224, 224, 3). It should have exactly 3 input channels. You can also omit this option if you would like to infer input_shape from an input_tensor. If you include both input_tensor and input_shape, input_shape is used as long as the two match; if the shapes do not match, an error is thrown. E.g. c(160, 160, 3) would be one valid value.
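
For example, a minimal sketch (assuming the keras package with a working TensorFlow backend; the 160x160 resolution is illustrative):

library(keras)

# Build a Large model for 160x160 RGB inputs instead of the default
# 224x224. include_top = FALSE keeps the model fully convolutional, so
# the pretrained weights apply at this resolution.
base <- application_mobilenet_v3_large(
  input_shape = c(160, 160, 3),
  include_top = FALSE,
  weights = "imagenet"
)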

alpha

Controls the width of the network (a short sketch follows the list below). This is known as the depth multiplier in the MobileNetV3 paper, but the name is kept for consistency with MobileNetV1 in Keras.

  • If alpha < 1.0, proportionally decreases the number of filters in each layer.

  • If alpha > 1.0, proportionally increases the number of filters in each layer.

  • If alpha = 1, the default number of filters from the paper is used at each layer.
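
A short sketch (assuming keras is available; the 0.75 value matches one of the published checkpoints):

library(keras)

# Width multiplier 0.75: roughly three quarters of the paper's filter
# count in each layer (the mobilenet_v3_large_0.75_224 checkpoint).
model_slim <- application_mobilenet_v3_large(
  alpha = 0.75,
  weights = "imagenet"
)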

minimalistic

In addition to large and small models, this module also contains so-called minimalistic models. These models have the same per-layer dimensions as MobileNetV3, but they don't utilize any of the advanced blocks (squeeze-and-excite units, hard-swish, and 5x5 convolutions). While these models are less efficient on CPU, they are much more performant on GPU/DSP.
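
A sketch of requesting the minimalistic variant (assuming keras is available):

library(keras)

# Same per-layer dimensions as the regular Large model, but without
# squeeze-and-excite units, hard-swish, and 5x5 convolutions.
model_min <- application_mobilenet_v3_large(
  minimalistic = TRUE,
  weights = "imagenet"
)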

include_top

Boolean, whether to include the fully-connected layer at the top of the network. Defaults to TRUE.

weights

String, one of NULL (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded.

input_tensor

Optional Keras tensor (i.e. output of layer_input()) to use as image input for the model.

classes

Integer, optional number of classes to classify images into, only to be specified if include_top is TRUE, and if no weights argument is specified.
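
For instance, a randomly initialized 10-class model (the class count is illustrative):

library(keras)

# A non-default classes value requires random initialization
# (weights = NULL).
model_10 <- application_mobilenet_v3_small(
  weights = NULL,
  classes = 10L
)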

pooling

String, optional pooling mode for feature extraction when include_top is FALSE (a sketch follows the list below).

  • NULL means that the output of the model will be the 4D tensor output of the last convolutional block.

  • avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor.

  • max means that global max pooling will be applied.
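
A feature-extraction sketch (assuming keras is available; the exact feature width depends on the variant and alpha):

library(keras)

# Headless model with global average pooling: each image maps to a
# single feature vector.
extractor <- application_mobilenet_v3_small(
  include_top = FALSE,
  pooling = "avg",
  weights = "imagenet"
)

# One random 224x224 RGB image in [0, 255]; the built-in Rescaling
# layer handles preprocessing.
x <- array(runif(224 * 224 * 3, 0, 255), dim = c(1, 224, 224, 3))
features <- predict(extractor, x)  # 2D tensor, e.g. shape (1, 576)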

dropout_rate

Fraction of the input units to drop on the last layer.

classifier_activation

A string or callable. The activation function to use on the "top" layer. Ignored unless include_top = TRUE. Set classifier_activation = NULL to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be NULL or "softmax".
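
For example, to obtain raw logits from the pretrained head (a sketch assuming keras is available):

library(keras)

# classifier_activation = NULL returns logits; with pretrained weights
# only NULL or "softmax" is accepted.
logits_model <- application_mobilenet_v3_large(
  weights = "imagenet",
  classifier_activation = NULL
)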

include_preprocessing

Boolean, whether to include the preprocessing layer (Rescaling) at the bottom of the network. Defaults to TRUE.
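
A sketch of opting out of the built-in preprocessing (assuming keras is available; the caller must then rescale inputs):

library(keras)

# Without the Rescaling layer, inputs must already be scaled to [-1, 1]
# (the range the bundled layer would otherwise produce from [0, 255]
# pixel values).
model_raw <- application_mobilenet_v3_small(
  weights = "imagenet",
  include_preprocessing = FALSE
)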

Value

A keras Model instance.

Details

Reference:

  • Searching for MobileNetV3 (https://arxiv.org/abs/1905.02244) (ICCV 2019)

The following table describes the performance of MobileNets v3:

MACs stands for Multiply-Adds; (M) denotes millions.

Classification Checkpoint                  MACs (M)  Parameters (M)  Top-1 Accuracy (%)  Pixel 1 CPU (ms)
mobilenet_v3_large_1.0_224                 217       5.4             75.6                51.2
mobilenet_v3_large_0.75_224                155       4.0             73.3                39.8
mobilenet_v3_large_minimalistic_1.0_224    209       3.9             72.3                44.1
mobilenet_v3_small_1.0_224                 66        2.9             68.1                15.8
mobilenet_v3_small_0.75_224                44        2.4             65.4                12.8
mobilenet_v3_small_minimalistic_1.0_224    65        2.0             61.9                12.2

For image classification use cases, see the Keras Applications page (https://keras.io/api/applications/) for detailed examples.

For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning (https://keras.io/guides/transfer_learning/).
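
As a minimal sketch (the 10-class head, dropout rate, and optimizer settings are illustrative):

library(keras)

# Frozen ImageNet base plus a small trainable classification head.
base <- application_mobilenet_v3_large(
  input_shape = c(224, 224, 3),
  include_top = FALSE,
  pooling = "avg",
  weights = "imagenet"
)
freeze_weights(base)

outputs <- base$output %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 10, activation = "softmax")

model <- keras_model(inputs = base$input, outputs = outputs)

model %>% compile(
  optimizer = optimizer_adam(learning_rate = 1e-3),
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)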

See Also