Posted to commits@mxnet.apache.org by lx...@apache.org on 2017/08/14 22:14:04 UTC

[incubator-mxnet] branch v0.11.0 updated: New code signing key & README file changes (#7464)

This is an automated email from the ASF dual-hosted git repository.

lxn2 pushed a commit to branch v0.11.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v0.11.0 by this push:
     new 8e2146a  New code signing key & README file changes (#7464)
8e2146a is described below

commit 8e2146a042e14b5bf76fd0551145dff6ffaf8cbf
Author: lxn2 <lx...@users.noreply.github.com>
AuthorDate: Mon Aug 14 15:14:02 2017 -0700

    New code signing key & README file changes (#7464)
    
    * add Naveen's Code Signing Key (#7460)
    
    * Updating CoreML readme file (#7459)
    
    * Fixing CoreML converter's README: typos/grammar/etc.
    
    * CoreML converter README update: Talk about layers first and then about models.
    
    * Providing examples on converting various standard models; calling out issues with InceptionV3.
---
 KEYS                   | 59 ++++++++++++++++++++++++++++++++++++++
 tools/coreml/README.md | 77 +++++++++++++++++++++++++++++++-------------------
 2 files changed, 107 insertions(+), 29 deletions(-)

diff --git a/KEYS b/KEYS
index 19ec1a3..070f38d 100644
--- a/KEYS
+++ b/KEYS
@@ -130,3 +130,62 @@ TZQhIRekaaV+bCQQxnwDOJ31bIUUpxaMdvygjq55Gri/5C75TsMNcgbhqYWLGKe2
 kRsGTxyO+fQ6/Q==
 =FuXU
 -----END PGP PUBLIC KEY BLOCK-----
+pub   rsa4096 2017-08-14 [SC]
+      AA3EBCC3E65A768AE3D2A64B8EF47B8720E8C549
+uid           [ultimate] Naveen Swamy (CODE SIGNING KEY) <ns...@apache.org>
+sig 3        8EF47B8720E8C549 2017-08-14  Naveen Swamy (CODE SIGNING KEY) <ns...@apache.org>
+sub   rsa4096 2017-08-14 [E]
+sig          8EF47B8720E8C549 2017-08-14  Naveen Swamy (CODE SIGNING KEY) <ns...@apache.org>
+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBFmSC4cBEADFOKHTd2QFZk94eCCh5kqDTcZk2zgu+tNb2PY0v/EVC/rEZN2O
+IS+Y16gO7DQEnyreoPBe9QdwT85iCshhl80x6ojfRHztCcXADzNLPc0knhPNeRUt
+feQOwbxtWmIyglQRPbeRkhQtZbceHMLT0tjpDdU2ogI1tt4OfFkCdXX2k9nxeCfQ
+KKVMvK/vPFtkcLrTDPzG31XDvbJdHzKjHXVR1D88gVX23+YTZQX2ZFD4aWyix8xy
+LcH1PE0oNY3Ja6YSXqgxPa+cvOslyd0HMO8EzJTfv65jEqf2CDJTxIER8ihfyjLa
+GQAH8pNHZFrIDrOVNQXgNq0oG629rtFJVBb9MLTEi3zMf4aKddcE57j0aodEGXEs
+eWWmULty4s/fhFb7DaEQ9TJpcMJYE89/zVP342nAMTjMAsPsW2RnaL7Q8uGDN3aT
+O87ifl6LERp5CHJQxyZPm3no6WPEaI9WdoXPsz10EnzGP95zYRM/lsKEXu3ur0P3
+1xQXXfFyzvVeeor0Yyf7Oh63TJ76A+tTLiXMeFGd7xs65vh6yUHuhQZmqygFi0fI
+zO8Wc1hr5LxEh0kFIKAngL0AL4ukf5Aii6wFvOj0kx6AxlsP8Jas4dQd3e1G3Apo
+lij78wpeqLRPl04XTp8HNu5+wq5qj/GwNlx0SMwVT1h/2SC1cUaKi0DUuwARAQAB
+tDNOYXZlZW4gU3dhbXkgKENPREUgU0lHTklORyBLRVkpIDxuc3dhbXlAYXBhY2hl
+Lm9yZz6JAk4EEwEIADgCGwMCHgECF4AWIQSqPrzD5lp2iuPSpkuO9HuHIOjFSQUC
+WZIMrAULCQgHAwUVCgkICwUWAgMBAAAKCRCO9HuHIOjFSRaoD/9P2ktLKFjEwm3j
+sf/HDqmKd4jNHtCv/FUhzM0kb4F4gxXcnoFavDUdyLdTisEYx033Enkyv3jSBKB8
+bYxH4awmQ/47pexEPnpLPrw6Rpsbiuk8O2RLMWw2ObRATrNXg088YbBXgg4xrxXd
+4tjpd8FB1TJJnsmvrAawScjwz8ZxPQTaCqxb7oyrkRJYgswPmVD2MrB4LAjxMbpW
+pUkrQSxt6OEmteZXQd1Wn9UnD88YQEfaviCevo7cpsFrUHHXH9ihUI+fjihc+NpB
+LW9O4gVXY0O9BOMIU4xqHvFMht0s7Tjj698xoANosvGtO7mV/OKCtEHuqQCKzP4/
+9QS9PJrci/msBd/UwYqtYggACFnAtijOT70a7PRp3zHK5um5lsIsxuGJWJutlXiB
+cCrvgrdEaEXSUQsghygsUNzYzohAzYyV3FYuvaxuFwkLKewMzSOLW5DewPpZTTSa
+pO+CsmiDL2RJYS2dbz84elq1FUlNZZevFmrZmtpKClOrQ/2A6lHvs/dH5Qs4Ews/
+Wl0Hwsk2ET1VbJEVjK+CZd9CwYXZBaW2ntLr88LfrbsbXg5HW9cowmMdbMq9Rb1L
+4z/OaOUTp+M7nfQP9F5/6JmGICM/2RC2DYwkqrwQe+mvp6P6QNGe2z7OG19sHMyb
+qDWc+N4+VcribZV3AQsdloX7Y6GscrkCDQRZkguHARAAustOuroA9Oieela+WUZP
+0M9srwsH1XHpfKHgGgPAFXVQZ2YGXl9uxG73v4kat5kOdwPERPbuEYqOM/FyIs87
+8AxgQ+dh1YB7boDslubqUAbXPaxso4ZRyxDidmdR+XRi9ZZRNTYdiA+RhS7/Y3lp
+Fb2Xr4xZWtqRzuNOTp1OQ51uOaFRAj/hDZJi7v73LNIocnrk8mFDCUGaHcNzUqxY
+FvVkzi8fr8diM9Y1DJsTuQicJdYFQAIfFneddp2YyHTlB6IxbBLME3DJcN6pF6Eq
+1pTP77Nss4voR/0RXgByZ4OeMgFudnuN+bz8mBVtr/ToWb/c8hhYBOrbBcegSXMg
+gqPIk8FjYblmPqW1qUpI4fV66TIh2XT/bOoDZ8+FGRKznD2gWzeOOeq8vLG+rQN9
+ko0YMgrdqvtioD9vOd2CKpE5eZbalRjAttqC92mcURC2t/oVEB8kOdURenkOMzCN
+T4MpMrzIL2x98tmiq8/wP7HDH+Yq4HSGnpHTK5INO9rmKpewiSKdLU1HKeCjF4mn
+P9kfWCCz6U6bHO4vm6UQ0EgV8nM616laDWE49DFO/9WqoPzK3CanLp/Gy2pdK3CQ
+R71OzB8XOMratmA5oL/c8hIZdF1i63KjLCSaQ7w6VR/j2gh61ftO0rtD8NmksphM
+X25F37SwZ6ro8QQKONkhWncAEQEAAYkCNgQYAQgAIBYhBKo+vMPmWnaK49KmS470
+e4cg6MVJBQJZkguHAhsMAAoJEI70e4cg6MVJxZ0QAKCHbB2DgoED0JZ4xnADcc7t
+o1Bz5SQgAWfh9eJD1Ou4cqhk9u2Bh5mX/z6UBc6ZeSsgI55NWxaZh0LiaeKqIufY
+2+4a8PfuJPLQ1Q94NMMTAyA2tpIqsFk6V+5IB/heC94L3US8H3v9CvvlZyErhSsu
+OVoIxM5S0f6W3vA3nX5iNUQHzRllAMkzoFmTET6ZzWskwOCjQ/qr/tasehpsYTaJ
+pUWRZA7ExbIAIclnjuQM9FsMVzsaJcxqw2gbJFjVPumysz9NKOghAGzRH4JBnxpu
+wAo/UH+668R1GpFDZpHFKwEdh3zXffo6Zq9lQmAJ5NTa7L5JUGuzlIF40asLG2MN
+0ywDW9/oHuCDaM0tITSmRLn6v+QVApoGD89svQ6yCZ5MeqRfP+H6CSFf6fQ3E4Cu
+kIoH1GBllwnRmoQrAKyR4a7OqTVm6B+LyA+jTaa79g5UjDN7qlbGQ8MR5rE/yutP
+8PNCFmE/EsImQ7NREfRKqle0+mSAWqKkdg4pX5bJNbVQX2LOLgMF5LJdUtwq8ISJ
+7/k9J/FTJyuqgwXvkUOq7eEehxUpvX85gzJ5tpMSN+jYgPeMWcd8mTvVgwWDd7Qu
+TNxwR0b9K/mLKGh58n1vVT79QReQFQ4wWFyQkmFkL9ybG04wTKe00VDNP987nSBg
+FuSamX64+S6T8IwAuP9U
+=KRiV
+-----END PGP PUBLIC KEY BLOCK-----
diff --git a/tools/coreml/README.md b/tools/coreml/README.md
index 32cde33..e29eebe 100644
--- a/tools/coreml/README.md
+++ b/tools/coreml/README.md
@@ -21,59 +21,45 @@ Let's say you want to use your MXNet model in an iPhone App. For the purpose of
 python mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' --epoch=0 --input-shape='{"data":"3,227,227"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="squeezenetv11.mlmodel"
 ```
 
-  The above command will save the converted model into squeezenet-v11.mlmodel in CoreML format. Internally MXNet first loads the model and then we walk through the entire symbolic graph converting each operator into its CoreML equivalent. Some of the parameters are used by MXNet in order to load and generate the symbolic graph in memory while others are used by CoreML either to pre-process the input before the going through the neural network or to process the output in a particular way. 
+  The above command will save the converted model in CoreML format to the file squeezenetv11.mlmodel. Internally, the model is first loaded by MXNet, recreating the entire symbolic graph in memory. The converter then walks through this symbolic graph, converting each operator into its CoreML equivalent. Some of the supplied arguments are used by MXNet to generate the graph, while others are used by CoreML either to pre-process the input (before passing it to the neural network) or [...]
 
   In the command above:
 
-  * _model-prefix_: refers to the MXNet model prefix (may include the directory path).
-  * _epoch_: refers to the suffix of the MXNet model file.
-  * _input-shape_: refers to the input shape information in a JSON string format where the key is the name of the input variable (="data") and the value is the shape of that variable. If the model takes multiple inputs, input-shape for all of them need to be provided.
+  * _model-prefix_: refers to the prefix of the files containing the MXNet model to be converted (may include the directory path). E.g., for the squeezenet model above, the model files are squeezenet_v1.1-symbol.json and squeezenet_v1.1-0000.params; therefore, model-prefix is "squeezenet_v1.1" (or "<directory-where-model-exists>/squeezenet_v1.1").
+  * _epoch_: refers to the numeric suffix of the MXNet model filename. For the squeezenet model above, it is 0.
+  * _input-shape_: refers to the input shape information in JSON string format, where the key is the name of the input variable (i.e. "data") and the value is the shape of that variable. If the model takes multiple inputs, input-shape needs to be provided for all of them.
   * _mode_: refers to the coreml model mode. Can either be 'classifier', 'regressor' or None. In this case, we use 'classifier' since we want the resulting CoreML model to classify images into various categories.
-  * _pre-processing-arguments_: In the Apple world images have to be of type Image. By providing image_input_names as "data", we are saying that the input variable "data" is of type Image.
+  * _pre-processing-arguments_: In the Apple world, images have to be of type "Image". By providing image_input_names as "data", the converter will assume that the input variable "data" is of type "Image".
   * _class-labels_: refers to the name of the file which contains the classification labels (a.k.a. synset file).
-output-file: the file where the CoreML model will be dumped.
+  * _output-file_: the file where resulting CoreML model will be stored.
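  As an aside, the `--input-shape` value is simply a JSON object mapping each input variable name to its comma-separated shape. A minimal Python sketch of how such a string can be parsed (illustrative only, not the converter's actual code):

```python
import json

# The value passed to --input-shape in the command above.
input_shape_arg = '{"data":"3,227,227"}'

# Parse the JSON object, then split each shape string into integer dimensions.
shapes = {name: tuple(int(dim) for dim in dims.split(","))
          for name, dims in json.loads(input_shape_arg).items()}

print(shapes)  # {'data': (3, 227, 227)}
```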
 
 3. The generated ".mlmodel" file can directly be integrated into your app. For more instructions on how to do this, please see [Apple CoreML's tutorial](https://developer.apple.com/documentation/coreml/integrating_a_core_ml_model_into_your_app).
 
 
 ### Providing class labels
-You could provide a file containing class labels (as above) so that CoreML will return the predicted category the image belongs to. The file should have a label per line and labels can have any special characters. The line number of the label in the file should correspond with the index of softmax output. E.g.
+You could provide a file containing class labels (as above) so that CoreML will return the category a given image belongs to. The file should contain one label per line, and labels may include special characters. The line number of each label should correspond to the index of the softmax output. E.g.
 
 ```bash
 python mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' --epoch=0 --input-shape='{"data":"3,227,227"}' --mode=classifier --class-labels classLabels.txt --output-file="squeezenetv11.mlmodel"
 ```
 
-### Providing label names
-You may have to provide the label names of the MXNet model's outputs. For example, if you try to convert [vgg16](http://data.mxnet.io/models/imagenet/vgg/), you may have to provide label-name as "prob_label". By default "softmax_label" is assumed.
-
-```bash
-python mxnet_coreml_converter.py --model-prefix='vgg16' --epoch=0 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="vgg16.mlmodel" --label-names="prob_label"
-```
- 
-### Adding a pre-processing to CoreML model.
-You could ask CoreML to pre-process the images before passing them through the model.
+### Adding a pre-processing layer to the CoreML model
+You could ask CoreML to pre-process the images before passing them through the model. The following command provides image re-centering parameters for the red, blue, and green channels.
 
 ```bash
 python mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' --epoch=0 --input-shape='{"data":"3,224,224"}' --pre-processing-arguments='{"red_bias":127,"blue_bias":117,"green_bias":103}' --output-file="squeezenet_v11.mlmodel"
 ```
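The bias arguments above are applied per channel by CoreML's image pre-processing stage, which (per the coremltools documentation) computes y = scale * x + bias for each channel. A small sketch of the equivalent arithmetic using the values from the command above (illustrative only, not CoreML's actual API):

```python
# CoreML image pre-processing applies, per channel: y = scale * x + bias.
# Bias values taken from the command above; scale defaults to 1.0.
RED_BIAS, GREEN_BIAS, BLUE_BIAS = 127, 103, 117

def preprocess_pixel(r, g, b, scale=1.0):
    """Return the re-centered (R, G, B) values the network would receive."""
    return (scale * r + RED_BIAS, scale * g + GREEN_BIAS, scale * b + BLUE_BIAS)

print(preprocess_pixel(0, 0, 0))  # (127.0, 103.0, 117.0)
```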
 
-If you are building an app for a model that takes image as an input, you will have to provide image_input_names as pre-processing arguments. This tells CoreML that a particular input variable is of type Image. E.g.:
- 
+If you are building an app for a model that takes "Image" as an input, you will have to provide image_input_names as pre-processing arguments. This tells CoreML that a particular input variable is of type Image. E.g.:
+
 ```bash
 python mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' --epoch=0 --input-shape='{"data":"3,224,224"}' --pre-processing-arguments='{"red_bias":127,"blue_bias":117,"green_bias":103,"image_input_names":"data"}' --output-file="squeezenet_v11.mlmodel"
 ```
 
 ## Currently supported
-### Models
-This is a (growing) list of standard MXNet models that can be successfully converted using the converter. This means that any other model that uses similar operators as these models can also be successfully converted.
-
-1. Inception: [Inception-BN](http://data.mxnet.io/models/imagenet/inception-bn/), [Inception-V3](http://data.mxnet.io/models/imagenet/inception-v3.tar.gz)
-2. [NiN](http://data.dmlc.ml/models/imagenet/nin/)
-2. [Resnet](http://data.mxnet.io/models/imagenet/resnet/)
-3. [Squeezenet](http://data.mxnet.io/models/imagenet/squeezenet/)
-4. [Vgg](http://data.mxnet.io/models/imagenet/vgg/)
-
 ### Layers
+List of MXNet layers that can be converted into their CoreML equivalent:
+
 1. Activation
 2. Batchnorm
 3. Concat
@@ -87,9 +73,42 @@ This is a (growing) list of standard MXNet models that can be successfully conve
 11. Softmax
 12. Transpose
 
+### Models
+Any MXNet model that uses the above operators can be converted easily. For instance, the following standard models can be converted:
+
+1. [Inception-BN](http://data.mxnet.io/models/imagenet/inception-bn/)
+
+```bash
+python mxnet_coreml_converter.py --model-prefix='Inception-BN' --epoch=126 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="InceptionBN.mlmodel"
+```
+
+2. [NiN](http://data.dmlc.ml/models/imagenet/nin/)
+
+```bash
+python mxnet_coreml_converter.py --model-prefix='nin' --epoch=0 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="nin.mlmodel"
+```
+
+3. [Resnet](http://data.mxnet.io/models/imagenet/resnet/)
+
+```bash
+python mxnet_coreml_converter.py --model-prefix='resnet-50' --epoch=0 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="resnet50.mlmodel"
+```
+
+4. [Squeezenet](http://data.mxnet.io/models/imagenet/squeezenet/)
+
+```bash
+python mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' --epoch=0 --input-shape='{"data":"3,227,227"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="squeezenetv11.mlmodel"
+```
+
+5. [Vgg](http://data.mxnet.io/models/imagenet/vgg/)
+
+```bash
+python mxnet_coreml_converter.py --model-prefix='vgg16' --epoch=0 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels classLabels.txt --output-file="vgg16.mlmodel"
+```
+
 ## Known issues
-Currently there are no known issues.
+* The [Inception-V3](http://data.mxnet.io/models/imagenet/inception-v3.tar.gz) model can be converted into CoreML format but cannot currently be run in Xcode.
 
-## This tool has been tested on environment with:
+## This tool has been tested with:
 * macOS High Sierra 10.13 Beta.
 * Xcode 9 beta 5.
