Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/14 21:43:20 UTC

[GitHub] szha closed pull request #9072: Remove torch support

URL: https://github.com/apache/incubator-mxnet/pull/9072
 
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/docs/faq/index.md b/docs/faq/index.md
index 68c7d41cb8..e5807f42fc 100644
--- a/docs/faq/index.md
+++ b/docs/faq/index.md
@@ -58,8 +58,6 @@ and full working examples, visit the [tutorials section](../tutorials/index.md).
 
 * [How do I set MXNet's environmental variables?](http://mxnet.io/how_to/env_var.html)
 
-* [How do I use MXNet as a front end for Torch?](http://mxnet.io/how_to/torch.html)
-
 ## Questions about Using MXNet
 If you need help with using MXNet, have questions about applying it to a particular kind of problem, or have a discussion topic, please use our [forum](https://discuss.mxnet.io).
 
diff --git a/docs/faq/torch.md b/docs/faq/torch.md
deleted file mode 100644
index 26def878c2..0000000000
--- a/docs/faq/torch.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# How to Use MXNet as an (Almost) Full-Function Torch Front End
-
-This topic demonstrates how to use MXNet as a front end to two of Torch's major functionalities:
-
-* Call Torch's tensor mathematical functions with MXNet.NDArray 
-
-* Embed Torch's neural network modules (layers) into MXNet's symbolic graph
-
-## Compile with Torch
-
-* Install Torch using the [official guide](http://torch.ch/docs/getting-started.html).
-    * If you haven't already done so, copy `make/config.mk` (Linux) or `make/osx.mk` (macOS) into the MXNet root folder as `config.mk`. In `config.mk`, uncomment the lines `TORCH_PATH = $(HOME)/torch` and `MXNET_PLUGINS += plugin/torch/torch.mk`.
-    * By default, Torch is installed in your home folder, so `TORCH_PATH = $(HOME)/torch` works as is. Modify `TORCH_PATH` to point at your Torch installation if necessary.
-* Run `make clean && make` to build MXNet with Torch support.
-
-## Tensor Mathematics
-The `mxnet.th` module supports calling Torch's tensor mathematical functions on `mxnet.nd.NDArray` objects. See the [complete code](https://github.com/dmlc/mxnet/blob/master/example/torch/torch_function.py):
-
- ```Python
-    import mxnet as mx
-    x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-    print(x.asnumpy())
-    y = mx.th.abs(x)
-    print(y.asnumpy())
-
-    x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-    print(x.asnumpy())
-    mx.th.abs(x, x)  # in-place
-    print(x.asnumpy())
- ```
-For help, use the `help(mx.th)` command. 
-
-We've added support for most of the common functions listed on [Torch's documentation page](https://github.com/torch/torch7/blob/master/doc/maths.md).
-If you find that the function you need is not supported, you can easily register it in `mxnet_root/plugin/torch/torch_function.cc` by using the existing registrations as examples.
-
-## Torch Modules (Layers)
-MXNet supports Torch's neural network modules through the `mxnet.symbol.TorchModule` symbol.
-For example, the following code defines a three-layer DNN for classifying MNIST digits ([full code](https://github.com/dmlc/mxnet/blob/master/example/torch/torch_module.py)):
-
- ```Python
-    data = mx.symbol.Variable('data')
-    fc1 = mx.symbol.TorchModule(data_0=data, lua_string='nn.Linear(784, 128)', num_data=1, num_params=2, num_outputs=1, name='fc1')
-    act1 = mx.symbol.TorchModule(data_0=fc1, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu1')
-    fc2 = mx.symbol.TorchModule(data_0=act1, lua_string='nn.Linear(128, 64)', num_data=1, num_params=2, num_outputs=1, name='fc2')
-    act2 = mx.symbol.TorchModule(data_0=fc2, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu2')
-    fc3 = mx.symbol.TorchModule(data_0=act2, lua_string='nn.Linear(64, 10)', num_data=1, num_params=2, num_outputs=1, name='fc3')
-    mlp = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
- ```
-Let's break it down. First, `data = mx.symbol.Variable('data')` defines a placeholder for the input.
-Then the input is fed through Torch's nn modules with:
-`fc1 = mx.symbol.TorchModule(data_0=data, lua_string='nn.Linear(784, 128)', num_data=1, num_params=2, num_outputs=1, name='fc1')`.
-To use a Torch criterion as the loss function, you can replace the last line with:
- ```Python
-    logsoftmax = mx.symbol.TorchModule(data_0=fc3, lua_string='nn.LogSoftMax()', num_data=1, num_params=0, num_outputs=1, name='logsoftmax')
-    # Torch's labels start from 1
-    label = mx.symbol.Variable('softmax_label') + 1
-    mlp = mx.symbol.TorchCriterion(data=logsoftmax, label=label, lua_string='nn.ClassNLLCriterion()', name='softmax')
- ```
-The inputs to the nn module are named `data_i` for i = 0, ..., num_data-1. `lua_string` is a single Lua statement that creates the module object.
-For Torch's built-in modules, this is simply `nn.module_name(arguments)`.
-If you are using a custom module, place it in a .lua script file and load it with `require 'module_file.lua'` if the script returns a torch.nn object, or with `(require 'module_file.lua')()` if it returns a torch.nn class.
-
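
For context: the three-layer network built from TorchModule symbols in the guide above maps one-to-one onto MXNet's native operators. A minimal sketch of the same 784-128-64-10 MLP without the plugin, assuming only the standard `FullyConnected`, `Activation`, and `SoftmaxOutput` symbols (illustrative; not code from this PR):

```Python
import mxnet as mx

# Native-operator equivalent of the TorchModule MLP from the deleted guide.
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data=data, num_hidden=128, name='fc1')
act1 = mx.symbol.Activation(data=fc1, act_type='relu', name='relu1')
fc2 = mx.symbol.FullyConnected(data=act1, num_hidden=64, name='fc2')
act2 = mx.symbol.Activation(data=fc2, act_type='relu', name='relu2')
fc3 = mx.symbol.FullyConnected(data=act2, num_hidden=10, name='fc3')
# SoftmaxOutput fuses the softmax with the cross-entropy loss, covering the
# nn.LogSoftMax + nn.ClassNLLCriterion pair from the deleted guide.
mlp = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
```
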
diff --git a/example/torch/torch_function.py b/example/torch/torch_function.py
deleted file mode 100644
index af285de227..0000000000
--- a/example/torch/torch_function.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from __future__ import print_function
-import mxnet as mx
-x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-print(x.asnumpy())
-y = mx.th.abs(x)
-print(y.asnumpy())
-
-x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-print(x.asnumpy())
-mx.th.abs(x, x) # in-place
-print(x.asnumpy())
-
-x = mx.th.ones(2, 2, ctx=mx.cpu(0))
-y = mx.th.ones(2, 2, ctx=mx.cpu(0)) * 2
-print(mx.th.cdiv(x, y).asnumpy())
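
The tensor calls in the deleted example above also have direct native counterparts; a short sketch assuming only the standard `mx.nd` namespace (`random_normal`, `abs`, `ones`, the `out=` keyword, and elementwise operators; illustrative, not part of this PR):

```Python
import mxnet as mx

# Native NDArray counterparts of the mx.th calls in the deleted example.
x = mx.nd.random_normal(shape=(2, 2), ctx=mx.cpu(0))  # cf. mx.th.randn
print(x.asnumpy())
y = mx.nd.abs(x)                                       # cf. mx.th.abs
print(y.asnumpy())

mx.nd.abs(x, out=x)                                    # in-place, cf. mx.th.abs(x, x)
print(x.asnumpy())

ones = mx.nd.ones((2, 2))
twos = mx.nd.ones((2, 2)) * 2
print((ones / twos).asnumpy())                         # elementwise division, cf. mx.th.cdiv
```
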
diff --git a/example/torch/torch_module.py b/example/torch/torch_module.py
deleted file mode 100644
index e2f7821362..0000000000
--- a/example/torch/torch_module.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-# pylint: skip-file
-import sys
-import os
-curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
-sys.path.append(os.path.join(curr_path, "../../tests/python/common"))
-from get_data import MNISTIterator
-import mxnet as mx
-import numpy as np
-import logging
-
-# define mlp
-
-use_torch_criterion = False
-
-data = mx.symbol.Variable('data')
-fc1 = mx.symbol.TorchModule(data_0=data, lua_string='nn.Linear(784, 128)', num_data=1, num_params=2, num_outputs=1, name='fc1')
-act1 = mx.symbol.TorchModule(data_0=fc1, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu1')
-fc2 = mx.symbol.TorchModule(data_0=act1, lua_string='nn.Linear(128, 64)', num_data=1, num_params=2, num_outputs=1, name='fc2')
-act2 = mx.symbol.TorchModule(data_0=fc2, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu2')
-fc3 = mx.symbol.TorchModule(data_0=act2, lua_string='nn.Linear(64, 10)', num_data=1, num_params=2, num_outputs=1, name='fc3')
-
-if use_torch_criterion:
-    logsoftmax = mx.symbol.TorchModule(data_0=fc3, lua_string='nn.LogSoftMax()', num_data=1, num_params=0, num_outputs=1, name='logsoftmax')
-    # Torch's labels start from 1
-    label = mx.symbol.Variable('softmax_label') + 1
-    mlp = mx.symbol.TorchCriterion(data=logsoftmax, label=label, lua_string='nn.ClassNLLCriterion()', name='softmax')
-else:
-    mlp = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
-
-# data
-
-train, val = MNISTIterator(batch_size=100, input_shape=(784,))
-
-# train
-
-logging.basicConfig(level=logging.DEBUG)
-
-model = mx.model.FeedForward(
-    ctx=mx.cpu(0), symbol=mlp, num_epoch=20,
-    learning_rate=0.1, momentum=0.9, wd=0.00001)
-
-if use_torch_criterion:
-    model.fit(X=train, eval_data=val, eval_metric=mx.metric.Torch())
-else:
-    model.fit(X=train, eval_data=val)
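
The deleted script trains through the legacy `mx.model.FeedForward` API; the same fit can be expressed against the `mx.mod.Module` API. A self-contained sketch with random data standing in for the MNIST iterator (shapes mirror the deleted example; `NDArrayIter` and `Module` are standard MXNet APIs, and nothing here is code from this PR):

```Python
import logging
import numpy as np
import mxnet as mx

logging.basicConfig(level=logging.DEBUG)

# Compact stand-in for the deleted MLP definition.
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data=data, num_hidden=128, name='fc1')
act1 = mx.symbol.Activation(data=fc1, act_type='relu', name='relu1')
fc2 = mx.symbol.FullyConnected(data=act1, num_hidden=10, name='fc2')
mlp = mx.symbol.SoftmaxOutput(data=fc2, name='softmax')

# Random data standing in for MNISTIterator; NDArrayIter's default label name
# ('softmax_label') matches what SoftmaxOutput expects.
x = np.random.rand(1000, 784).astype('float32')
y = np.random.randint(0, 10, size=(1000,)).astype('float32')
train = mx.io.NDArrayIter(data=x, label=y, batch_size=100, shuffle=True)

mod = mx.mod.Module(symbol=mlp, context=mx.cpu(0))
mod.fit(train, num_epoch=2, optimizer='sgd',
        optimizer_params={'learning_rate': 0.1, 'momentum': 0.9, 'wd': 0.00001})
```
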
diff --git a/make/config.mk b/make/config.mk
index 9c6393a45d..9f7564b88f 100644
--- a/make/config.mk
+++ b/make/config.mk
@@ -193,11 +193,6 @@ USE_CPP_PACKAGE = 0
 # CAFFE_PATH = $(HOME)/caffe
 # MXNET_PLUGINS += plugin/caffe/caffe.mk
 
-# whether to use torch integration. This requires installing torch.
-# You also need to add TORCH_PATH/install/lib to your LD_LIBRARY_PATH
-# TORCH_PATH = $(HOME)/torch
-# MXNET_PLUGINS += plugin/torch/torch.mk
-
 # WARPCTC_PATH = $(HOME)/warp-ctc
 # MXNET_PLUGINS += plugin/warpctc/warpctc.mk
 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services