Posted to commits@mxnet.apache.org by zh...@apache.org on 2017/12/14 21:44:17 UTC

[incubator-mxnet] branch v1.0.0 updated: Remove torch support (#9072)

This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
     new d7e0a4f  Remove torch support (#9072)
d7e0a4f is described below

commit d7e0a4f302300907ce4cdd4621420b4b8e4720d7
Author: Yao Wang <ke...@gmail.com>
AuthorDate: Thu Dec 14 13:43:16 2017 -0800

    Remove torch support (#9072)
---
 docs/faq/index.md               |  2 --
 docs/faq/torch.md               | 62 -----------------------------------------
 example/torch/torch_function.py | 32 ---------------------
 example/torch/torch_module.py   | 62 -----------------------------------------
 make/config.mk                  |  5 ----
 5 files changed, 163 deletions(-)

diff --git a/docs/faq/index.md b/docs/faq/index.md
index 68c7d41..e5807f4 100644
--- a/docs/faq/index.md
+++ b/docs/faq/index.md
@@ -58,8 +58,6 @@ and full working examples, visit the [tutorials section](../tutorials/index.md).
 
 * [How do I set MXNet's environmental variables?](http://mxnet.io/how_to/env_var.html)
 
-* [How do I use MXNet as a front end for Torch?](http://mxnet.io/how_to/torch.html)
-
 ## Questions about Using MXNet
 If you need help with using MXNet, have questions about applying it to a particular kind of problem, or have a discussion topic, please use our [forum](https://discuss.mxnet.io).
 
diff --git a/docs/faq/torch.md b/docs/faq/torch.md
deleted file mode 100644
index 26def87..0000000
--- a/docs/faq/torch.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# How to Use MXNet As an (Almost) Full-function Torch Front End
-
-This topic demonstrates how to use MXNet as a front end to two of Torch's major functionalities:
-
-* Call Torch's tensor mathematical functions with MXNet.NDArray 
-
-* Embed Torch's neural network modules (layers) into MXNet's symbolic graph 
-## Compile with Torch
-
-
-* Install Torch using the [official guide](http://torch.ch/docs/getting-started.html).
-    * If you haven't already done so, copy `make/config.mk` (Linux) or `make/osx.mk` (Mac) into the MXNet root folder as `config.mk`. In `config.mk`, uncomment the lines `TORCH_PATH = $(HOME)/torch` and `MXNET_PLUGINS += plugin/torch/torch.mk`.
-    * By default, Torch is installed in your home folder, so `TORCH_PATH = $(HOME)/torch`. Modify `TORCH_PATH` to point to your Torch installation if necessary.
-* Run `make clean && make` to build MXNet with Torch support.
-
-## Tensor Mathematics
-The `mxnet.th` module supports calling Torch's tensor mathematical functions with `mxnet.nd.NDArray`. See the [complete code](https://github.com/dmlc/mxnet/blob/master/example/torch/torch_function.py):
-
- ```Python
-    import mxnet as mx
-    x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-    print(x.asnumpy())
-    y = mx.th.abs(x)
-    print(y.asnumpy())
-
-    x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-    print(x.asnumpy())
-    mx.th.abs(x, x)  # in-place
-    print(x.asnumpy())
- ```
-For help, use the `help(mx.th)` command. 
-
-We've added support for most common functions listed on [Torch's documentation page](https://github.com/torch/torch7/blob/master/doc/maths.md). 
-If you find that the function you need is not supported, you can easily register it in `mxnet_root/plugin/torch/torch_function.cc` by using the existing registrations as examples.
-
-## Torch Modules (Layers)
-MXNet supports Torch's neural network modules through the `mxnet.symbol.TorchModule` symbol.
-For example, the following code defines a three-layer DNN for classifying MNIST digits ([full code](https://github.com/dmlc/mxnet/blob/master/example/torch/torch_module.py)):
-
- ```Python
-    data = mx.symbol.Variable('data')
-    fc1 = mx.symbol.TorchModule(data_0=data, lua_string='nn.Linear(784, 128)', num_data=1, num_params=2, num_outputs=1, name='fc1')
-    act1 = mx.symbol.TorchModule(data_0=fc1, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu1')
-    fc2 = mx.symbol.TorchModule(data_0=act1, lua_string='nn.Linear(128, 64)', num_data=1, num_params=2, num_outputs=1, name='fc2')
-    act2 = mx.symbol.TorchModule(data_0=fc2, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu2')
-    fc3 = mx.symbol.TorchModule(data_0=act2, lua_string='nn.Linear(64, 10)', num_data=1, num_params=2, num_outputs=1, name='fc3')
-    mlp = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
- ```
-Let's break it down. First, `data = mx.symbol.Variable('data')` defines a Variable as a placeholder for input.
-Then, the input is fed through Torch's nn modules with:
-     `fc1 = mx.symbol.TorchModule(data_0=data, lua_string='nn.Linear(784, 128)', num_data=1, num_params=2, num_outputs=1, name='fc1')`.
-To use Torch's criterion as loss functions, you can replace the last line with:
- ```Python
-    logsoftmax = mx.symbol.TorchModule(data_0=fc3, lua_string='nn.LogSoftMax()', num_data=1, num_params=0, num_outputs=1, name='logsoftmax')
-    # Torch's labels start from 1
-    label = mx.symbol.Variable('softmax_label') + 1
-    mlp = mx.symbol.TorchCriterion(data=logsoftmax, label=label, lua_string='nn.ClassNLLCriterion()', name='softmax')
- ```
-The input to the nn module is named `data_i` for i = 0, ..., num_data-1. `lua_string` is a single Lua statement that creates the module object.
-For Torch's built-in module, this is simply `nn.module_name(arguments)`.
-If you are using a custom module, place it in a .lua script file and load it with `require 'module_file.lua'` if the script returns a torch.nn object, or with `(require 'module_file.lua')()` if the script returns a torch.nn class.
-
diff --git a/example/torch/torch_function.py b/example/torch/torch_function.py
deleted file mode 100644
index af285de..0000000
--- a/example/torch/torch_function.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from __future__ import print_function
-import mxnet as mx
-x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-print(x.asnumpy())
-y = mx.th.abs(x)
-print(y.asnumpy())
-
-x = mx.th.randn(2, 2, ctx=mx.cpu(0))
-print(x.asnumpy())
-mx.th.abs(x, x) # in-place
-print(x.asnumpy())
-
-x = mx.th.ones(2, 2, ctx=mx.cpu(0))
-y = mx.th.ones(2, 2, ctx=mx.cpu(0)) * 2
-print(mx.th.cdiv(x, y).asnumpy())
diff --git a/example/torch/torch_module.py b/example/torch/torch_module.py
deleted file mode 100644
index e2f7821..0000000
--- a/example/torch/torch_module.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-# pylint: skip-file
-import sys
-import os
-curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
-sys.path.append(os.path.join(curr_path, "../../tests/python/common"))
-from get_data import MNISTIterator
-import mxnet as mx
-import numpy as np
-import logging
-
-# define mlp
-
-use_torch_criterion = False
-
-data = mx.symbol.Variable('data')
-fc1 = mx.symbol.TorchModule(data_0=data, lua_string='nn.Linear(784, 128)', num_data=1, num_params=2, num_outputs=1, name='fc1')
-act1 = mx.symbol.TorchModule(data_0=fc1, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu1')
-fc2 = mx.symbol.TorchModule(data_0=act1, lua_string='nn.Linear(128, 64)', num_data=1, num_params=2, num_outputs=1, name='fc2')
-act2 = mx.symbol.TorchModule(data_0=fc2, lua_string='nn.ReLU(false)', num_data=1, num_params=0, num_outputs=1, name='relu2')
-fc3 = mx.symbol.TorchModule(data_0=act2, lua_string='nn.Linear(64, 10)', num_data=1, num_params=2, num_outputs=1, name='fc3')
-
-if use_torch_criterion:
-    logsoftmax = mx.symbol.TorchModule(data_0=fc3, lua_string='nn.LogSoftMax()', num_data=1, num_params=0, num_outputs=1, name='logsoftmax')
-    # Torch's labels start from 1
-    label = mx.symbol.Variable('softmax_label') + 1
-    mlp = mx.symbol.TorchCriterion(data=logsoftmax, label=label, lua_string='nn.ClassNLLCriterion()', name='softmax')
-else:
-    mlp = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
-
-# data
-
-train, val = MNISTIterator(batch_size=100, input_shape = (784,))
-
-# train
-
-logging.basicConfig(level=logging.DEBUG)
-
-model = mx.model.FeedForward(
-    ctx = mx.cpu(0), symbol = mlp, num_epoch = 20,
-    learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
-
-if use_torch_criterion:
-    model.fit(X=train, eval_data=val, eval_metric=mx.metric.Torch())
-else:
-    model.fit(X=train, eval_data=val)
diff --git a/make/config.mk b/make/config.mk
index 6db22df..325cf8f 100644
--- a/make/config.mk
+++ b/make/config.mk
@@ -193,11 +193,6 @@ USE_CPP_PACKAGE = 0
 # CAFFE_PATH = $(HOME)/caffe
 # MXNET_PLUGINS += plugin/caffe/caffe.mk
 
-# whether to use torch integration. This requires installing torch.
-# You also need to add TORCH_PATH/install/lib to your LD_LIBRARY_PATH
-# TORCH_PATH = $(HOME)/torch
-# MXNET_PLUGINS += plugin/torch/torch.mk
-
 # WARPCTC_PATH = $(HOME)/warp-ctc
 # MXNET_PLUGINS += plugin/warpctc/warpctc.mk
 

-- 
To stop receiving notification emails like this one, please contact
commits@mxnet.apache.org.