Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/06/06 00:08:03 UTC

[GitHub] szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"

szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"
URL: https://github.com/apache/incubator-mxnet/pull/11154
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/mms/index.md b/docs/mms/index.md
deleted file mode 100644
index ff6edae414b..00000000000
--- a/docs/mms/index.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# Model Server for Apache MXNet (incubating)
-
-[Model Server for Apache MXNet (incubating)](https://github.com/awslabs/mxnet-model-server), otherwise known as MXNet Model Server (MMS), is an open source project aimed at providing a simple yet scalable solution for model inference. It is a set of command line tools for packaging model archives and serving them. The tools are written in Python, and have been extended to support containers for easy deployment and scaling. MMS also supports basic logging and advanced metrics with Amazon CloudWatch integration.
-
-
-## Multi-Framework Model Support with ONNX
-
-MMS supports both *symbolic* MXNet and *imperative* Gluon models. While the name implies that MMS is just for MXNet, it is in fact much more flexible, as it can support models in the [ONNX](https://onnx.ai) format. This means that models created and trained in PyTorch, Caffe2, or other ONNX-supporting frameworks can be served with MMS.
-
-To find out more about MXNet's support for ONNX models and using ONNX with MMS, refer to the following resources:
-
-* [MXNet-ONNX Docs](../api/python/contrib/onnx.md)
-* [Export an ONNX Model to Serve with MMS](https://github.com/awslabs/mxnet-model-server/docs/export_from_onnx.md)
-
-## Getting Started
-
-To install MMS with ONNX support, make sure you have Python installed. Then, on Ubuntu, run:
-
-```bash
-sudo apt-get install protobuf-compiler libprotoc-dev
-pip install mxnet-model-server
-```
-
-Or, on a Mac, run:
-
-```bash
-conda install -c conda-forge protobuf
-pip install mxnet-model-server
-```
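-
-As a quick sanity check (an optional step, not part of the original instructions; it only confirms both pieces are visible on your PATH):
-
-```bash
-# Confirm the protobuf compiler and the MMS package are installed.
-protoc --version
-pip show mxnet-model-server
-```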
-
-
-## Serving a Model
-
-To serve a model, you must first create or download a model archive. Visit the [model zoo](https://github.com/awslabs/mxnet-model-server/docs/model_zoo.md) to browse the available models. You can explore the MMS options as follows:
-
-```bash
-mxnet-model-server --help
-```
-
-Here is a simple example of serving an object classification model. You can supply any model archive URI; MMS downloads the archive first, then serves it from that location:
-
-```bash
-mxnet-model-server \
-  --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model
-```
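-
-MMS can also host more than one model behind a single server. Below is a hedged sketch: the local archive file names and endpoint names are illustrative, and the space-separated `--models` syntax follows MMS v0.x and may differ in later releases.
-
-```bash
-# Serve two models at once; each name on the left becomes a REST endpoint.
-mxnet-model-server \
-  --models squeezenet=squeezenet_v1.1.model resnet=resnet-18.model
-```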
-
-
-### Test Inference on a Model
-
-Assuming you have run the previous `mxnet-model-server` command to start serving the object classification model, you can now upload an image to its `predict` REST API endpoint. The following will download a picture of a kitten, then upload it to the prediction endpoint.
-
-```bash
-curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
-curl -X POST http://127.0.0.1:8080/squeezenet/predict -F "data=@kitten.jpg"
-```
-
-The predict endpoint returns a prediction response in JSON, which will look something like this:
-
-```json
-{
-  "prediction": [
-    [
-      {
-        "class": "n02124075 Egyptian cat",
-        "probability": 0.9408261179924011
-      },
-...
-```
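-
-If you have `jq` installed, you can pull the top class straight out of the response shown above (an optional convenience, not part of the original steps):
-
-```bash
-# POST the image again and extract the highest-probability class name.
-curl -s -X POST http://127.0.0.1:8080/squeezenet/predict -F "data=@kitten.jpg" \
-  | jq '.prediction[0][0].class'
-```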
-
-For more examples of serving models, visit the following resources:
-
-* [Quickstart: Model Serving](https://github.com/awslabs/mxnet-model-server/README.md#serve-a-model)
-* [Running the Model Server](https://github.com/awslabs/mxnet-model-server/docs/server.md)
-
-
-## Create a Model Archive
-
-Creating a model archive involves rounding up the required model artifacts, then using the `mxnet-model-export` command-line interface. The process for creating archives is likely to evolve, so as the project adds features, we recommend reviewing the following resources for the latest instructions (a minimal export sketch follows the list):
-
-* [Quickstart: Export a Model](https://github.com/awslabs/mxnet-model-server/README.md#export-a-model)
-* [Model Artifacts](https://github.com/awslabs/mxnet-model-server/docs/export_model_file_tour.md)
-* [Loading and Serving Gluon Models](https://github.com/awslabs/mxnet-model-server/tree/master/examples/gluon_alexnet)
-* [Creating an MMS Model Archive from an ONNX Model](https://github.com/awslabs/mxnet-model-server/docs/export_from_onnx.md)
-* [Create an ONNX model (that will run with MMS) from PyTorch](https://github.com/onnx/onnx-mxnet/blob/master/README.md#quick-start)
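-
-For orientation, here is a minimal export sketch. The flag names follow the MMS v0.x `mxnet-model-export` help and may have changed, and the model artifacts (symbol, parameters, signature) are assumed to already exist under `model/`:
-
-```bash
-# Package the model symbol, parameters, and signature into a .model archive.
-mxnet-model-export \
-  --model-name squeezenet_v1.1 \
-  --model-path model/
-```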
-
-
-## Using Containers
-
-Using Docker or another container service with MMS is a great way to scale your inference applications. You can pull the latest image with Docker:
-
-```bash
-docker pull awsdeeplearningteam/mms_gpu
-```
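-
-Once pulled, you can start a container and publish the serving port. This is a hedged sketch: the in-container port and entrypoint are assumptions that may vary by image version, and GPU images typically require `nvidia-docker`:
-
-```bash
-# Start MMS in the background and map the REST endpoint used earlier.
-nvidia-docker run -itd -p 8080:8080 awsdeeplearningteam/mms_gpu
-```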
-
-For more information, we recommend reviewing the following resources:
-
-* [MMS Docker Hub](https://hub.docker.com/u/awsdeeplearningteam/)
-* [Using MMS with Docker Quickstart](https://github.com/awslabs/mxnet-model-server/docker/README.md)
-* [MMS on Fargate](https://github.com/awslabs/mxnet-model-server/docs/mms_on_fargate.md)
-* [Optimized Container Configurations for MMS](https://github.com/awslabs/mxnet-model-server/docs/optimized_config.md)
-* [Orchestrating, monitoring, and scaling with MMS, Amazon Elastic Container Service, AWS Fargate, and Amazon CloudWatch](https://aws.amazon.com/blogs/machine-learning/apache-mxnet-model-server-adds-optimized-container-images-for-model-serving-at-scale/)
-
-
-## Community & Contributions
-
-The MMS project is open to contributions from the community. If you like the idea of a flexible, scalable, multi-framework serving solution for your models and would like to provide feedback, suggest features, or even jump in and contribute code or examples, please visit the [project page on GitHub](https://github.com/awslabs/mxnet-model-server). You can create an issue there, or join the discussion on the forum.
-
-* [MXNet Forum - MMS Discussions](https://discuss.mxnet.io/c/mxnet-model-server)
-
-
-## Further Reading
-
-* [GitHub](https://github.com/awslabs/mxnet-model-server)
-* [MMS Docs](https://github.com/awslabs/mxnet-model-server/docs)


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

Re: [GitHub] szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"

Posted by Mu Li <li...@gmail.com>.
Hi Sheng,

I suggest putting down a reason for such actions in the future. Otherwise it may confuse other contributors; e.g., Steffen raised his concern in a private thread.

Best
Mu

> On Jun 10, 2018, at 7:42 PM, Sheng Zha <sz...@gmail.com> wrote:
> 
> Thanks, Henri. I was reverting a commit from a PR that another committer
> didn't intend to merge but only realized so afterwards. Given that it wasn't
> convenient for him to revert, and given the negative effect, I committed the
> revert and cc'd the original committer on the PR, both as notification and
> as proof of the claim.
> 
> -sz
> 
>> On Sun, Jun 10, 2018 at 10:21 PM, Hen <ba...@apache.org> wrote:
>> 
>> It wasn't clear why this commit was reverted. Things that stood out as
>> odd:
>> 
>> * I didn't see an email to dev@ on the topic of a revert.
>> * Rather than reverting, if there is a minor item requiring a fix it could
>> simply be fixed; if a major item then it should be raised on dev@.
>> * I didn't see a reason to revert in the revert PR (11154).
>> * The original PR has github:szha asking for github:piiswrong to review
>> with no context; I'm concerned that it was implied that the commit could
>> not go in without this review.
>> * I don't see anything in the original PR to earn a revert. At best
>> 'github:john-andrilla' being asked if "a flexible, scalable,
>> multi-framework serving solution" was okay.
>> * I find it odd that github:lupesko is a reviewer.
>> 
>> Hen
>> 
>> 
>> 

Re: [GitHub] szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"

Posted by Sheng Zha <sz...@gmail.com>.
Thanks, Henri. I was reverting a commit from a PR that another committer
didn't intend to merge but only realized so afterwards. Given that it wasn't
convenient for him to revert, and given the negative effect, I committed the
revert and cc'd the original committer on the PR, both as notification and
as proof of the claim.

-sz

On Sun, Jun 10, 2018 at 10:21 PM, Hen <ba...@apache.org> wrote:

> It wasn't clear why this commit was reverted. Things that stood out as
> odd:
>
> * I didn't see an email to dev@ on the topic of a revert.
> * Rather than reverting, if there is a minor item requiring a fix it could
> simply be fixed; if a major item then it should be raised on dev@.
> * I didn't see a reason to revert in the revert PR (11154).
> * The original PR has github:szha asking for github:piiswrong to review
> with no context; I'm concerned that it was implied that the commit could
> not go in without this review.
> * I don't see anything in the original PR to earn a revert. At best
> 'github:john-andrilla' being asked if "a flexible, scalable,
> multi-framework serving solution" was okay.
> * I find it odd that github:lupesko is a reviewer.
>
> Hen
>
>
>

Re: [GitHub] szha closed pull request #11154: Revert "[MXNET-503] Website landing page for MMS (#11037)"

Posted by Hen <ba...@apache.org>.
It wasn't clear why this commit was reverted. Things that stood out as
odd:

* I didn't see an email to dev@ on the topic of a revert.
* Rather than reverting, if there is a minor item requiring a fix it could
simply be fixed; if a major item then it should be raised on dev@.
* I didn't see a reason to revert in the revert PR (11154).
* The original PR has github:szha asking for github:piiswrong to review
with no context; I'm concerned that it was implied that the commit could
not go in without this review.
* I don't see anything in the original PR to earn a revert. At best
'github:john-andrilla' being asked if "a flexible, scalable,
multi-framework serving solution" was okay.
* I find it odd that github:lupesko is a reviewer.

Hen


