Posted to commits@mxnet.apache.org by ha...@apache.org on 2018/01/28 02:38:35 UTC

[incubator-mxnet] branch v1.1.0 updated (4770fe5 -> 32caa10)

This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


    omit 4770fe5  Bump 1.1 (#192)
    omit bc30e1a  update news.md (#191)
    omit c6d97d6  refactor regression ops to nnvm interface (#9540)
     new 4c17c03  Bump 1.1 (#192)
     new 9a58196  update news.md (#191)
     new 32caa10  refactor regression ops to nnvm interface (#9540)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (4770fe5)
            \
             N -- N -- N   refs/heads/v1.1.0 (32caa10)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
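
If you want to verify locally that the "omit" revisions really are still present in the object store, you can ask git directly. Below is a minimal Python sketch; it assumes a local clone named incubator-mxnet with git on the PATH, and uses the abbreviated SHAs listed above:

    import subprocess

    def object_exists(repo, sha):
        # `git cat-file -e` exits 0 when the object is present in the object store
        result = subprocess.run(["git", "-C", repo, "cat-file", "-e", sha],
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def refs_containing(repo, sha):
        # branches (local and remote-tracking) that still reference the commit
        result = subprocess.run(["git", "-C", repo, "branch", "-a", "--contains", sha],
                                capture_output=True, text=True)
        return [line.strip() for line in result.stdout.splitlines()]

    for sha in ("4770fe5", "bc30e1a", "c6d97d6"):  # the "omit" revisions above
        print(sha, object_exists("incubator-mxnet", sha),
              refs_containing("incubator-mxnet", sha))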


Summary of changes:

-- 
To stop receiving notification emails like this one, please contact
haibin@apache.org.

[incubator-mxnet] 01/03: Bump 1.1 (#192)


haibin pushed a commit to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 4c17c030208c67b2f68808d12d5a79996cfaf4ba
Author: Haibin Lin <li...@gmail.com>
AuthorDate: Sat Jan 27 18:22:50 2018 -0800

    Bump 1.1 (#192)
    
    * bump
    
    * also update base.h
    
    * revert website changes
    
    * Update index.html
---
 R-package/DESCRIPTION                           | 2 +-
 docs/_static/mxnet-theme/index.html             | 6 +++---
 include/mxnet/base.h                            | 4 ++--
 python/mxnet/libinfo.py                         | 2 +-
 scala-package/assembly/linux-x86_64-cpu/pom.xml | 6 +++---
 scala-package/assembly/linux-x86_64-gpu/pom.xml | 6 +++---
 scala-package/assembly/osx-x86_64-cpu/pom.xml   | 6 +++---
 scala-package/assembly/pom.xml                  | 2 +-
 scala-package/core/pom.xml                      | 6 +++---
 scala-package/examples/pom.xml                  | 4 ++--
 scala-package/init-native/linux-x86_64/pom.xml  | 4 ++--
 scala-package/init-native/osx-x86_64/pom.xml    | 4 ++--
 scala-package/init-native/pom.xml               | 2 +-
 scala-package/init/pom.xml                      | 2 +-
 scala-package/macros/pom.xml                    | 6 +++---
 scala-package/native/linux-x86_64-cpu/pom.xml   | 4 ++--
 scala-package/native/linux-x86_64-gpu/pom.xml   | 4 ++--
 scala-package/native/osx-x86_64-cpu/pom.xml     | 4 ++--
 scala-package/native/pom.xml                    | 2 +-
 scala-package/pom.xml                           | 2 +-
 scala-package/spark/pom.xml                     | 4 ++--
 snapcraft.yaml                                  | 2 +-
 22 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/R-package/DESCRIPTION b/R-package/DESCRIPTION
index 2996eed..0ec7f36 100644
--- a/R-package/DESCRIPTION
+++ b/R-package/DESCRIPTION
@@ -1,7 +1,7 @@
 Package: mxnet
 Type: Package
 Title: MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems
-Version: 1.0.1
+Version: 1.1.0
 Date: 2017-06-27
 Author: Tianqi Chen, Qiang Kou, Tong He
 Maintainer: Qiang Kou <qk...@qkou.info>
diff --git a/docs/_static/mxnet-theme/index.html b/docs/_static/mxnet-theme/index.html
index 9dfb7d6..3b48832 100644
--- a/docs/_static/mxnet-theme/index.html
+++ b/docs/_static/mxnet-theme/index.html
@@ -21,9 +21,9 @@
   <div class="container">
     <div class="row">
       <div class="col-lg-4 col-sm-12">
-        <h3>Apache MXNet 1.0.1 Released</h3>
-        <p>We're excited to announce the release of MXNet 1.0.1! Check out the release notes for latest updates.</p>
-        <a href="https://github.com/apache/incubator-mxnet/releases/tag/1.0.1">Learn More</a>
+        <h3>Apache MXNet 1.1.0 Released</h3>
+        <p>We're excited to announce the release of MXNet 1.1.0! Check out the release notes for the latest updates.</p>
+        <a href="https://github.com/apache/incubator-mxnet/releases/tag/1.1.0">Learn More</a>
       </div>
       <div class="col-lg-4 col-sm-12">
         <h3>MXNet Model Server</h3>
diff --git a/include/mxnet/base.h b/include/mxnet/base.h
index 262c348..faf2fe1 100644
--- a/include/mxnet/base.h
+++ b/include/mxnet/base.h
@@ -111,9 +111,9 @@
 /*! \brief major version */
 #define MXNET_MAJOR 1
 /*! \brief minor version */
-#define MXNET_MINOR 0
+#define MXNET_MINOR 1
 /*! \brief patch version */
-#define MXNET_PATCH 1
+#define MXNET_PATCH 0
 /*! \brief mxnet version */
 #define MXNET_VERSION (MXNET_MAJOR*10000 + MXNET_MINOR*100 + MXNET_PATCH)
 /*! \brief helper for making version number */
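
For reference, this packing means the bump changes MXNET_VERSION from 10001 (1.0.1) to 10100 (1.1.0). A small Python sketch of the same arithmetic, mirroring the macro above:

    # same packing as the MXNET_VERSION macro: major*10000 + minor*100 + patch
    major, minor, patch = 1, 1, 0
    version = major * 10000 + minor * 100 + patch
    assert version == 10100

    # the inverse, handy when decoding a packed version number
    major, rest = divmod(version, 10000)
    minor, patch = divmod(rest, 100)
    assert (major, minor, patch) == (1, 1, 0)
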
diff --git a/python/mxnet/libinfo.py b/python/mxnet/libinfo.py
index 9ab0f59..8ccac29 100644
--- a/python/mxnet/libinfo.py
+++ b/python/mxnet/libinfo.py
@@ -61,4 +61,4 @@ def find_lib_path():
 
 
 # current version
-__version__ = "1.0.1"
+__version__ = "1.1.0"
diff --git a/scala-package/assembly/linux-x86_64-cpu/pom.xml b/scala-package/assembly/linux-x86_64-cpu/pom.xml
index 75f2d2c..cbcd7ac 100644
--- a/scala-package/assembly/linux-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-full-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -18,12 +18,12 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
     </dependency>
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>libmxnet-scala-linux-x86_64-cpu</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>so</type>
     </dependency>
   </dependencies>
diff --git a/scala-package/assembly/linux-x86_64-gpu/pom.xml b/scala-package/assembly/linux-x86_64-gpu/pom.xml
index 7c7162d..cfe22e7 100644
--- a/scala-package/assembly/linux-x86_64-gpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-full-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -18,12 +18,12 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
     </dependency>
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>libmxnet-scala-linux-x86_64-gpu</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>so</type>
     </dependency>
   </dependencies>
diff --git a/scala-package/assembly/osx-x86_64-cpu/pom.xml b/scala-package/assembly/osx-x86_64-cpu/pom.xml
index 0b5c4e2..7f7f1ab 100644
--- a/scala-package/assembly/osx-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-full-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -18,12 +18,12 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
     </dependency>
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>libmxnet-scala-osx-x86_64-cpu</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>jnilib</type>
     </dependency>
   </dependencies>
diff --git a/scala-package/assembly/pom.xml b/scala-package/assembly/pom.xml
index efa3b75..a755d7c 100644
--- a/scala-package/assembly/pom.xml
+++ b/scala-package/assembly/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
diff --git a/scala-package/core/pom.xml b/scala-package/core/pom.xml
index b721906..0df7047 100644
--- a/scala-package/core/pom.xml
+++ b/scala-package/core/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -71,13 +71,13 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-init_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <scope>provided</scope>
     </dependency>
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-macros_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <scope>provided</scope>
     </dependency>
   </dependencies>
diff --git a/scala-package/examples/pom.xml b/scala-package/examples/pom.xml
index 87ce898..a23b7b9 100644
--- a/scala-package/examples/pom.xml
+++ b/scala-package/examples/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -118,7 +118,7 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/scala-package/init-native/linux-x86_64/pom.xml b/scala-package/init-native/linux-x86_64/pom.xml
index 0d15a6c..848d1e1 100644
--- a/scala-package/init-native/linux-x86_64/pom.xml
+++ b/scala-package/init-native/linux-x86_64/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-scala-init-native-parent</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -20,7 +20,7 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-init_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>jar</type>
       <scope>compile</scope>
     </dependency>
diff --git a/scala-package/init-native/osx-x86_64/pom.xml b/scala-package/init-native/osx-x86_64/pom.xml
index e2af2de..f6b865f 100644
--- a/scala-package/init-native/osx-x86_64/pom.xml
+++ b/scala-package/init-native/osx-x86_64/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-scala-init-native-parent</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -20,7 +20,7 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-init_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>jar</type>
       <scope>compile</scope>
     </dependency>
diff --git a/scala-package/init-native/pom.xml b/scala-package/init-native/pom.xml
index 7ca19f1..6724099 100644
--- a/scala-package/init-native/pom.xml
+++ b/scala-package/init-native/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
diff --git a/scala-package/init/pom.xml b/scala-package/init/pom.xml
index 972b894..26130a4 100644
--- a/scala-package/init/pom.xml
+++ b/scala-package/init/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
 <!--  <relativePath>../pom.xml</relativePath>-->
   </parent>
 
diff --git a/scala-package/macros/pom.xml b/scala-package/macros/pom.xml
index dea908a..65dcbbc 100644
--- a/scala-package/macros/pom.xml
+++ b/scala-package/macros/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -41,13 +41,13 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-init_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <scope>provided</scope>
     </dependency>
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>libmxnet-init-scala-${platform}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <scope>provided</scope>
       <type>${libtype}</type>
     </dependency>
diff --git a/scala-package/native/linux-x86_64-cpu/pom.xml b/scala-package/native/linux-x86_64-cpu/pom.xml
index 3bcb012..efaeedd 100644
--- a/scala-package/native/linux-x86_64-cpu/pom.xml
+++ b/scala-package/native/linux-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-scala-native-parent</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -20,7 +20,7 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>jar</type>
       <scope>compile</scope>
     </dependency>
diff --git a/scala-package/native/linux-x86_64-gpu/pom.xml b/scala-package/native/linux-x86_64-gpu/pom.xml
index 85ff4d2..0befa0c 100644
--- a/scala-package/native/linux-x86_64-gpu/pom.xml
+++ b/scala-package/native/linux-x86_64-gpu/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-scala-native-parent</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -20,7 +20,7 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>jar</type>
       <scope>compile</scope>
     </dependency>
diff --git a/scala-package/native/osx-x86_64-cpu/pom.xml b/scala-package/native/osx-x86_64-cpu/pom.xml
index 809da66..4981224 100644
--- a/scala-package/native/osx-x86_64-cpu/pom.xml
+++ b/scala-package/native/osx-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-scala-native-parent</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -20,7 +20,7 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <type>jar</type>
       <scope>compile</scope>
     </dependency>
diff --git a/scala-package/native/pom.xml b/scala-package/native/pom.xml
index 55fb053..b772245 100644
--- a/scala-package/native/pom.xml
+++ b/scala-package/native/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
diff --git a/scala-package/pom.xml b/scala-package/pom.xml
index 5680c83..8599f7d 100644
--- a/scala-package/pom.xml
+++ b/scala-package/pom.xml
@@ -5,7 +5,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>ml.dmlc.mxnet</groupId>
   <artifactId>mxnet-parent_2.11</artifactId>
-  <version>1.0.1-SNAPSHOT</version>
+  <version>1.1.0-SNAPSHOT</version>
   <name>MXNet Scala Package - Parent</name>
   <url>https://github.com/dmlc/mxnet/tree/master/scala-package</url>
   <description>MXNet Scala Package</description>
diff --git a/scala-package/spark/pom.xml b/scala-package/spark/pom.xml
index 3863c77..da5c6e2 100644
--- a/scala-package/spark/pom.xml
+++ b/scala-package/spark/pom.xml
@@ -6,7 +6,7 @@
   <parent>
     <groupId>ml.dmlc.mxnet</groupId>
     <artifactId>mxnet-parent_2.11</artifactId>
-    <version>1.0.1-SNAPSHOT</version>
+    <version>1.1.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
@@ -21,7 +21,7 @@
     <dependency>
       <groupId>ml.dmlc.mxnet</groupId>
       <artifactId>mxnet-core_${scala.binary.version}</artifactId>
-      <version>1.0.1-SNAPSHOT</version>
+      <version>1.1.0-SNAPSHOT</version>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/snapcraft.yaml b/snapcraft.yaml
index 8a0dd45..b17c73b 100644
--- a/snapcraft.yaml
+++ b/snapcraft.yaml
@@ -1,5 +1,5 @@
 name: mxnet
-version: '1.0.1'
+version: '1.1.0'
 summary: MXNet is a deep learning framework designed for efficiency and flexibility.
 description: |
   MXNet is a deep learning framework designed for both efficiency and 
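
A bump like this touches many files across languages (R, C++, Python, the Scala POMs, snapcraft), so a quick scan for the old version string is a useful release check. A hedged Python sketch; the checkout path and file extensions are assumptions, not part of the commit:

    import pathlib

    OLD, ROOT = "1.0.1", pathlib.Path("incubator-mxnet")

    for path in ROOT.rglob("*"):
        if ".git" in path.parts:
            continue  # skip git internals
        if path.suffix in {".xml", ".py", ".h", ".yaml"} or path.name == "DESCRIPTION":
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            if OLD in text:
                print("stale version string in", path)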


[incubator-mxnet] 03/03: refactor regression ops to nnvm interface (#9540)


haibin pushed a commit to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 32caa1033631668016cf60755ae71b4b3186996c
Author: Ziyue Huang <zy...@gmail.com>
AuthorDate: Sun Jan 28 06:24:33 2018 +0800

    refactor regression ops to nnvm interface (#9540)
    
    * refactor regression ops
    
    * fix err for instantiation of minus_sign
    
    * remove useless header file init_op.h
    
    * replace with macro and address other comments
    
    * update
    
    * minor revise docs
    
    * add mae test
---
 src/operator/operator_tune.cc          |   2 +
 src/operator/regression_output-inl.h   | 228 +++++++++++++--------------------
 src/operator/regression_output.cc      | 107 +++++++++-------
 src/operator/regression_output.cu      |  41 +++---
 tests/python/unittest/test_operator.py |   4 +-
 5 files changed, 170 insertions(+), 212 deletions(-)

diff --git a/src/operator/operator_tune.cc b/src/operator/operator_tune.cc
index 7cdf7a2..e0f8306 100644
--- a/src/operator/operator_tune.cc
+++ b/src/operator/operator_tune.cc
@@ -286,12 +286,14 @@ IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::plus);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::minus);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::mul);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::div);  // NOLINT()
+IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::minus_sign);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::rminus);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_BWD(mxnet::op::mshadow_op::rdiv);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_BWD(mxnet::op::mshadow_op::plus);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_BWD(mxnet::op::mshadow_op::minus);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_BWD(mxnet::op::mshadow_op::mul);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_BWD(mxnet::op::mshadow_op::div);  // NOLINT()
+IMPLEMENT_BINARY_WORKLOAD_BWD(mxnet::op::mshadow_op::minus_sign);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_BWD(mxnet::op::mshadow_op::rminus);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::rdiv);  // NOLINT()
 IMPLEMENT_BINARY_WORKLOAD_FWD(mxnet::op::mshadow_op::div_grad);  // NOLINT()
diff --git a/src/operator/regression_output-inl.h b/src/operator/regression_output-inl.h
index 08b2f0a..4642f8d 100644
--- a/src/operator/regression_output-inl.h
+++ b/src/operator/regression_output-inl.h
@@ -18,28 +18,28 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
  * \file regression_output-inl.h
  * \brief Regression output operator.
- */
+*/
 #ifndef MXNET_OPERATOR_REGRESSION_OUTPUT_INL_H_
 #define MXNET_OPERATOR_REGRESSION_OUTPUT_INL_H_
 
-#include <dmlc/logging.h>
-#include <mxnet/operator.h>
-#include <map>
-#include <string>
+#include <mxnet/operator_util.h>
 #include <vector>
 #include <utility>
+#include "./mshadow_op.h"
+#include "./mxnet_op.h"
 #include "./operator_common.h"
 
 namespace mxnet {
 namespace op {
 
+/*!
+ * \brief regression namespace
+ */
 namespace reg_enum {
 enum RegressionOutputOpInputs {kData, kLabel};
 enum RegressionOutputOutputs {kOut};
-enum RegressionOutputType {kLinear, kLogistic, kMAE};
 }  // reg_enum
 
 struct RegressionOutputParam : public dmlc::Parameter<RegressionOutputParam> {
@@ -50,146 +50,90 @@ struct RegressionOutputParam : public dmlc::Parameter<RegressionOutputParam> {
   };
 };
 
-// Special Operator to output regression value in forward
-// And get gradient in calculation.
-template<typename xpu, typename ForwardOp, typename BackwardOp>
-class RegressionOutputOp : public Operator {
- public:
-  explicit RegressionOutputOp(RegressionOutputParam param) : param_(param) {}
-
-  virtual void Forward(const OpContext &ctx,
-                       const std::vector<TBlob> &in_data,
-                       const std::vector<OpReqType> &req,
-                       const std::vector<TBlob> &out_data,
-                       const std::vector<TBlob> &aux_args) {
-    using namespace mshadow;
-    using namespace mshadow::expr;
-    CHECK_EQ(in_data.size(), 2U) << "RegressionOutputOp Input: [data, label]";
-    CHECK_EQ(out_data.size(), 1U) << "RegressionOutputOp Output: [output]";
-    Stream<xpu> *s = ctx.get_stream<xpu>();
-    Tensor<xpu, 2> data = in_data[reg_enum::kData].FlatTo2D<xpu, real_t>(s);
-    Tensor<xpu, 2> out = out_data[reg_enum::kOut].FlatTo2D<xpu, real_t>(s);
-    Assign(out, req[reg_enum::kOut], F<ForwardOp>(data));
-  }
-
-  virtual void Backward(const OpContext &ctx,
-                        const std::vector<TBlob> &out_grad,
-                        const std::vector<TBlob> &in_data,
-                        const std::vector<TBlob> &out_data,
-                        const std::vector<OpReqType> &req,
-                        const std::vector<TBlob> &in_grad,
-                        const std::vector<TBlob> &aux_args) {
-    using namespace mshadow;
-    using namespace mshadow::expr;
-    CHECK_EQ(in_data.size(), 2U);
-    CHECK_EQ(out_grad.size(), 1U);
-    CHECK_GE(in_grad.size(), 1U);
-    CHECK_GE(req.size(), 1U);
-    Stream<xpu> *s = ctx.get_stream<xpu>();
-    real_t num_output =
-      in_data[reg_enum::kLabel].Size()/in_data[reg_enum::kLabel].shape_[0];
-    Tensor<xpu, 2> out = out_data[reg_enum::kOut].FlatTo2D<xpu, real_t>(s);
-    Tensor<xpu, 2> grad = in_grad[reg_enum::kData].FlatTo2D<xpu, real_t>(s);
-    Tensor<xpu, 2> label = in_data[reg_enum::kLabel]
-      .get_with_shape<xpu, 2, real_t>(out.shape_, s);
-    Assign(grad, req[reg_enum::kData], param_.grad_scale/num_output*
-      F<BackwardOp>(out, reshape(label, grad.shape_)));
-  }
-
- private:
-  RegressionOutputParam param_;
-};
-
-// Decalre Factory function, used for dispatch specialization
-template<typename xpu>
-Operator* CreateRegressionOutputOp(reg_enum::RegressionOutputType type,
-                                   RegressionOutputParam param);
-
-#if DMLC_USE_CXX11
-template<reg_enum::RegressionOutputType type>
-class RegressionOutputProp : public OperatorProperty {
- public:
-  std::vector<std::string> ListArguments() const override {
-    return {"data", "label"};
-  }
-
-  void Init(const std::vector<std::pair<std::string, std::string> >& kwargs) override {
-    param_.Init(kwargs);
-  }
-
-  std::map<std::string, std::string> GetParams() const override {
-    return param_.__DICT__();
-  }
-
-  bool InferShape(std::vector<TShape> *in_shape,
-                  std::vector<TShape> *out_shape,
-                  std::vector<TShape> *aux_shape) const override {
-    using namespace mshadow;
-    CHECK_EQ(in_shape->size(), 2) << "Input:[data, label]";
-    const TShape &dshape = in_shape->at(0);
-    if (dshape.ndim() == 0) return false;
-    auto &lshape = (*in_shape)[1];
-    if (lshape.ndim() == 0) {
-      // special treatment for 1D output, to allow 1D label by default.
-      // Think about change convention later
-      if (dshape.ndim() == 2 && dshape[1] == 1) {
-        lshape = Shape1(dshape[0]);
-      } else {
-        lshape = dshape;
-      }
-    } else if (lshape[0] != dshape[0] || lshape.Size() != dshape.Size()) {
-      std::ostringstream os;
-      os << "Shape inconsistent, Provided=" << lshape << ','
-         << " inferred shape=" << dshape;
-      throw ::mxnet::op::InferShapeError(os.str(), 1);
-    }
-    out_shape->clear();
-    out_shape->push_back(dshape);
-    return true;
-  }
-
-  OperatorProperty* Copy() const override {
-    auto ptr = new RegressionOutputProp<type>();
-    ptr->param_ = param_;
-    return ptr;
-  }
-
-  std::string TypeString() const override {
-    switch (type) {
-      case reg_enum::kLinear: return "LinearRegressionOutput";
-      case reg_enum::kLogistic: return "LogisticRegressionOutput";
-      case reg_enum::kMAE: return "MAERegressionOutput";
-      default: LOG(FATAL) << "unknown type"; return "";
+inline bool RegressionOpShape(const nnvm::NodeAttrs& attrs,
+                              std::vector<TShape> *in_attrs,
+                              std::vector<TShape> *out_attrs) {
+  using namespace mshadow;
+  CHECK_EQ(in_attrs->size(), 2U) << "Input:[data, label]";
+  const TShape &dshape = in_attrs->at(0);
+  if (dshape.ndim() == 0) return false;
+  auto &lshape = (*in_attrs)[1];
+  if (lshape.ndim() == 0) {
+    // special treatment for 1D output, to allow 1D label by default.
+    // Think about change convention later
+    if (dshape.ndim() == 2 && dshape[1] == 1) {
+      lshape = Shape1(dshape[0]);
+    } else {
+      lshape = dshape;
     }
+  } else if (lshape[0] != dshape[0] || lshape.Size() != dshape.Size()) {
+    std::ostringstream os;
+    os << "Shape inconsistent, Provided=" << lshape << ','
+       << " inferred shape=" << dshape;
+    throw ::mxnet::op::InferShapeError(os.str(), 1);
   }
-
-  std::vector<int> DeclareBackwardDependency(
-    const std::vector<int> &out_grad,
-    const std::vector<int> &in_data,
-    const std::vector<int> &out_data) const override {
-    return {in_data[reg_enum::kLabel], out_data[reg_enum::kOut]};
-  }
-
-  std::vector<std::pair<int, void*> > BackwardInplaceOption(
-    const std::vector<int> &out_grad,
-    const std::vector<int> &in_data,
-    const std::vector<int> &out_data,
-    const std::vector<void*> &in_grad) const override {
-    return {{out_data[reg_enum::kOut], in_grad[reg_enum::kData]}};
-  }
-
-  std::vector<std::pair<int, void*> > ForwardInplaceOption(
-    const std::vector<int> &in_data,
-    const std::vector<void*> &out_data) const override {
-    return {{in_data[reg_enum::kData], out_data[reg_enum::kOut]}};
+  out_attrs->clear();
+  out_attrs->push_back(dshape);
+  return true;
+}
+
+template<typename xpu, typename ForwardOp>
+void RegressionForward(const nnvm::NodeAttrs& attrs,
+                       const OpContext& ctx,
+                       const std::vector<TBlob>& inputs,
+                       const std::vector<OpReqType>& req,
+                       const std::vector<TBlob>& outputs) {
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  MSHADOW_REAL_TYPE_SWITCH(inputs[reg_enum::kData].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[reg_enum::kOut], Req, {
+      const DType* in_data = inputs[reg_enum::kData].dptr<DType>();
+      DType* out_data = outputs[reg_enum::kOut].dptr<DType>();
+      using namespace mxnet_op;
+      Kernel<op_with_req<ForwardOp, Req>, xpu>::Launch(
+        s, outputs[reg_enum::kOut].Size(), out_data, in_data);
+    });
+  });
+}
+
+template<typename xpu, typename BackwardOp>
+void RegressionBackward(const nnvm::NodeAttrs& attrs,
+                        const OpContext& ctx,
+                        const std::vector<TBlob>& inputs,
+                        const std::vector<OpReqType>& req,
+                        const std::vector<TBlob>& outputs) {
+  const RegressionOutputParam& param = nnvm::get<RegressionOutputParam>(attrs.parsed);
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  // inputs are in_label, out_data
+  // outputs are data_grad, label_grad
+  MSHADOW_REAL_TYPE_SWITCH(inputs[1].type_flag_, DType, {
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      const DType* in_label = inputs[0].dptr<DType>();
+      const DType* out_data = inputs[1].dptr<DType>();
+      DType* data_grad = outputs[0].dptr<DType>();
+      const real_t num_output = inputs[0].Size()/inputs[0].shape_[0];
+      using namespace mxnet_op;
+      Kernel<op_with_req<BackwardOp, Req>, xpu>::Launch(
+        s, outputs[0].Size(), data_grad, out_data, in_label);
+      Kernel<op_with_req<mshadow_op::mul, Req>, xpu>::Launch(
+        s, outputs[0].Size(), data_grad, data_grad,
+        static_cast<DType>(param.grad_scale/num_output));
+    });
+  });
+}
+
+struct RegressionOpGrad {
+  const char *op_name;
+  std::vector<nnvm::NodeEntry> operator()(const nnvm::NodePtr& n,
+                                          const std::vector<nnvm::NodeEntry>& ograds) const {
+    std::vector<nnvm::NodeEntry> heads;
+    heads.push_back(n->inputs[reg_enum::kLabel]);
+    heads.emplace_back(nnvm::NodeEntry{n, reg_enum::kOut, 0});
+    return MakeGradNode(op_name, n, heads, n->attrs.dict);
   }
+};
 
-  Operator* CreateOperator(Context ctx) const override;
 
- protected:
-  RegressionOutputParam param_;
-};
-#endif  // DMLC_USE_CXX11
 }  // namespace op
 }  // namespace mxnet
+
 #endif  // MXNET_OPERATOR_REGRESSION_OUTPUT_INL_H_
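
The RegressionOpShape rule above keeps the old behavior: a 1D label is accepted for (n, 1) data, and the output shape follows the data shape. A short sketch of how that surfaces through shape inference, assuming an MXNet build that includes these operators:

    import mxnet as mx

    data = mx.sym.Variable("data")
    label = mx.sym.Variable("label")
    out = mx.sym.LinearRegressionOutput(data=data, label=label)

    # (8, 1) data accepts a 1D label of length 8, per RegressionOpShape
    _, out_shapes, _ = out.infer_shape(data=(8, 1), label=(8,))
    print(out_shapes)  # [(8, 1)] -- the output shape follows the data shape
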
diff --git a/src/operator/regression_output.cc b/src/operator/regression_output.cc
index 2f8042e..7b0fbae 100644
--- a/src/operator/regression_output.cc
+++ b/src/operator/regression_output.cc
@@ -18,61 +18,71 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
- * \file regression_output.cc
- * \brief regression output operator
+ * \file regression_output.cc
+ * \brief Regression output operator.
 */
+
 #include "./regression_output-inl.h"
-#include "./mshadow_op.h"
+
+#define MXNET_OPERATOR_REGISTER_REGRESSION_FWD(__name$, __kernel$, __bwdop$)   \
+  NNVM_REGISTER_OP(__name$)                                                    \
+  .set_num_inputs(2)                                                           \
+  .set_num_outputs(1)                                                          \
+  .set_attr<nnvm::FListInputNames>("FListInputNames",                          \
+    [](const NodeAttrs& attrs) {                                               \
+      return std::vector<std::string>{"data", "label"};                        \
+    })                                                                         \
+  .set_attr<nnvm::FInferShape>("FInferShape", RegressionOpShape)               \
+  .set_attr<nnvm::FGradient>("FGradient", RegressionOpGrad{__bwdop$})          \
+  .set_attr<nnvm::FInplaceOption>("FInplaceOption",                            \
+  [](const NodeAttrs& attrs){                                                  \
+    return std::vector<std::pair<int, int> >{{0, 0}};                          \
+  })                                                                           \
+  .set_attr<FCompute>("FCompute<cpu>", RegressionForward<cpu, __kernel$>)      \
+  .add_argument("data", "NDArray-or-Symbol", "Input data to the function.")    \
+  .add_argument("label", "NDArray-or-Symbol", "Input label to the function.")  \
+  .add_arguments(RegressionOutputParam::__FIELDS__())
+
+#define MXNET_OPERATOR_REGISTER_REGRESSION_BWD(__name$, __kernel$)         \
+  NNVM_REGISTER_OP(__name$)                                                \
+  .set_num_inputs(2)                                                       \
+  .set_num_outputs(2)                                                      \
+  .set_attr_parser(ParamParser<RegressionOutputParam>)                     \
+  .set_attr<nnvm::TIsBackward>("TIsBackward", true)                        \
+  .set_attr<nnvm::FInplaceOption>("FInplaceOption",                        \
+  [](const NodeAttrs& attrs){                                              \
+    return std::vector<std::pair<int, int> >{{1, 0}};                      \
+  })                                                                       \
+  .set_attr<FCompute>("FCompute<cpu>", RegressionBackward<cpu, __kernel$>)
 
 namespace mxnet {
 namespace op {
 
-template<>
-Operator *CreateRegressionOutputOp<cpu>(reg_enum::RegressionOutputType type,
-                                        RegressionOutputParam param) {
-  switch (type) {
-    case reg_enum::kLinear:
-      return new RegressionOutputOp<cpu, op::mshadow_op::identity, op::mshadow_op::minus>(param);
-    case reg_enum::kLogistic:
-      return new RegressionOutputOp<cpu, mshadow_op::sigmoid, op::mshadow_op::minus>(param);
-    case reg_enum::kMAE:
-      return new RegressionOutputOp<cpu, op::mshadow_op::identity, mshadow_op::minus_sign>(param);
-    default:
-      LOG(FATAL) << "unknown activation type " << type;
-  }
-  return nullptr;
-}
-
-// DO_BIND_DISPATCH comes from operator_common.h
-template<reg_enum::RegressionOutputType type>
-Operator *RegressionOutputProp<type>::CreateOperator(Context ctx) const {
-  DO_BIND_DISPATCH(CreateRegressionOutputOp, type, param_);
-}
 
 DMLC_REGISTER_PARAMETER(RegressionOutputParam);
 
-MXNET_REGISTER_OP_PROPERTY(LinearRegressionOutput, RegressionOutputProp<reg_enum::kLinear>)
+MXNET_OPERATOR_REGISTER_REGRESSION_FWD(LinearRegressionOutput,
+  mshadow_op::identity, "_backward_linear_reg_out")
 .describe(R"code(Computes and optimizes for squared loss during backward propagation.
 Just outputs ``data`` during forward propagation.
 
 If :math:`\hat{y}_i` is the predicted value of the i-th sample, and :math:`y_i` is the corresponding target value,
 then the squared loss estimated over :math:`n` samples is defined as
 
-:math:`\text{SquaredLoss}(y, \hat{y} ) = \frac{1}{n} \sum_{i=0}^{n-1} \left( y_i - \hat{y}_i \right)^2`
+:math:`\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_2^2`
 
 .. note::
    Use the LinearRegressionOutput as the final output layer of a net.
 
-By default, gradients of this loss function are scaled by factor `1/n`, where n is the number of training examples.
-The parameter `grad_scale` can be used to change this scale to `grad_scale/n`.
+By default, gradients of this loss function are scaled by factor `1/m`, where m is the number of regression outputs of a training example.
+The parameter `grad_scale` can be used to change this scale to `grad_scale/m`.
+
+)code" ADD_FILELINE);
 
-)code" ADD_FILELINE)
-.add_argument("data", "NDArray-or-Symbol", "Input data to the function.")
-.add_argument("label", "NDArray-or-Symbol", "Input label to the function.")
-.add_arguments(RegressionOutputParam::__FIELDS__());
+MXNET_OPERATOR_REGISTER_REGRESSION_BWD(_backward_linear_reg_out, mshadow_op::minus);
 
-MXNET_REGISTER_OP_PROPERTY(MAERegressionOutput, RegressionOutputProp<reg_enum::kMAE>)
+MXNET_OPERATOR_REGISTER_REGRESSION_FWD(MAERegressionOutput,
+  mshadow_op::identity, "_backward_mae_reg_out")
 .describe(R"code(Computes mean absolute error of the input.
 
 MAE is a risk metric corresponding to the expected value of the absolute error.
@@ -80,24 +90,24 @@ MAE is a risk metric corresponding to the expected value of the absolute error.
 If :math:`\hat{y}_i` is the predicted value of the i-th sample, and :math:`y_i` is the corresponding target value,
 then the mean absolute error (MAE) estimated over :math:`n` samples is defined as
 
-:math:`\text{MAE}(y, \hat{y} ) = \frac{1}{n} \sum_{i=0}^{n-1} \left| y_i - \hat{y}_i \right|`
+:math:`\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1`
 
 .. note::
    Use the MAERegressionOutput as the final output layer of a net.
 
-By default, gradients of this loss function are scaled by factor `1/n`, where n is the number of training examples.
-The parameter `grad_scale` can be used to change this scale to `grad_scale/n`.
+By default, gradients of this loss function are scaled by factor `1/m`, where m is the number of regression outputs of a training example.
+The parameter `grad_scale` can be used to change this scale to `grad_scale/m`.
 
-)code" ADD_FILELINE)
-.add_argument("data", "NDArray-or-Symbol", "Input data to the function.")
-.add_argument("label", "NDArray-or-Symbol", "Input label to the function.")
-.add_arguments(RegressionOutputParam::__FIELDS__());
+)code" ADD_FILELINE);
 
-MXNET_REGISTER_OP_PROPERTY(LogisticRegressionOutput, RegressionOutputProp<reg_enum::kLogistic>)
+MXNET_OPERATOR_REGISTER_REGRESSION_BWD(_backward_mae_reg_out, mshadow_op::minus_sign);
+
+MXNET_OPERATOR_REGISTER_REGRESSION_FWD(LogisticRegressionOutput,
+  mshadow_op::sigmoid, "_backward_logistic_reg_out")
 .describe(R"code(Applies a logistic function to the input.
 
 The logistic function, also known as the sigmoid function, is computed as
-:math:`\frac{1}{1+exp(-x)}`.
+:math:`\frac{1}{1+exp(-\textbf{x})}`.
 
 Commonly, the sigmoid is used to squash the real-valued output of a linear model
 :math:`w^T x + b` into the [0,1] range so that it can be interpreted as a probability.
@@ -106,13 +116,12 @@ It is suitable for binary classification or probability prediction tasks.
 .. note::
    Use the LogisticRegressionOutput as the final output layer of a net.
 
-By default, gradients of this loss function are scaled by factor `1/n`, where n is the number of training examples.
-The parameter `grad_scale` can be used to change this scale to `grad_scale/n`.
+By default, gradients of this loss function are scaled by factor `1/m`, where m is the number of regression outputs of a training example.
+The parameter `grad_scale` can be used to change this scale to `grad_scale/m`.
+
+)code" ADD_FILELINE);
 
-)code" ADD_FILELINE)
-.add_argument("data", "NDArray-or-Symbol", "Input data to the function.")
-.add_argument("label", "NDArray-or-Symbol", "Input label to the function.")
-.add_arguments(RegressionOutputParam::__FIELDS__());
+MXNET_OPERATOR_REGISTER_REGRESSION_BWD(_backward_logistic_reg_out, mshadow_op::minus);
 
 }  // namespace op
 }  // namespace mxnet
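
The registered backward kernels compute BackwardOp(out, label) elementwise and then scale by grad_scale/num_output, where num_output is the number of regression outputs per example. A NumPy sketch of the resulting gradients for the ops registered above (the helper names here are illustrative, not MXNet API):

    import numpy as np

    def reg_grad(out, label, backward_op, grad_scale=1.0):
        # mirrors RegressionBackward: elementwise op, then scale by
        # grad_scale / num_output, with num_output = label.size / batch_size
        num_output = label.size / label.shape[0]
        return backward_op(out, label) * (grad_scale / num_output)

    minus = lambda o, l: o - l                             # linear / logistic
    minus_sign = lambda o, l: np.where(o > l, 1.0, -1.0)   # MAE

    out = np.array([[0.2, 0.9], [0.4, 0.1]])
    label = np.array([[0.0, 1.0], [1.0, 0.0]])
    print(reg_grad(out, label, minus))       # (out - label) / 2
    print(reg_grad(out, label, minus_sign))  # sign(out - label) / 2
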
diff --git a/src/operator/regression_output.cu b/src/operator/regression_output.cu
index cb951f1..e3a2e7e 100644
--- a/src/operator/regression_output.cu
+++ b/src/operator/regression_output.cu
@@ -18,31 +18,32 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
- * \file regression_output.cu
- * \brief regression output operator
+ * \file regression_output.cu
+ * \brief Regression output operator.
 */
 #include "./regression_output-inl.h"
-#include "./mshadow_op.h"
+
 
 namespace mxnet {
 namespace op {
 
-template<>
-Operator *CreateRegressionOutputOp<gpu>(reg_enum::RegressionOutputType type,
-                                        RegressionOutputParam param) {
-  switch (type) {
-    case reg_enum::kLinear:
-      return new RegressionOutputOp<gpu, op::mshadow_op::identity, op::mshadow_op::minus>(param);
-    case reg_enum::kLogistic:
-      return new RegressionOutputOp<gpu, mshadow_op::sigmoid, op::mshadow_op::minus>(param);
-    case reg_enum::kMAE:
-      return new RegressionOutputOp<gpu, op::mshadow_op::identity, mshadow_op::minus_sign>(param);
-    default:
-      LOG(FATAL) << "unknown activation type " << type;
-  }
-  return NULL;
-}
+NNVM_REGISTER_OP(LinearRegressionOutput)
+.set_attr<FCompute>("FCompute<gpu>", RegressionForward<gpu, mshadow_op::identity>);
+
+NNVM_REGISTER_OP(_backward_linear_reg_out)
+.set_attr<FCompute>("FCompute<gpu>", RegressionBackward<gpu, mshadow_op::minus>);
+
+NNVM_REGISTER_OP(MAERegressionOutput)
+.set_attr<FCompute>("FCompute<gpu>", RegressionForward<gpu, mshadow_op::identity>);
+
+NNVM_REGISTER_OP(_backward_mae_reg_out)
+.set_attr<FCompute>("FCompute<gpu>", RegressionBackward<gpu, mshadow_op::minus_sign>);
+
+NNVM_REGISTER_OP(LogisticRegressionOutput)
+.set_attr<FCompute>("FCompute<gpu>", RegressionForward<gpu, mshadow_op::sigmoid>);
+
+NNVM_REGISTER_OP(_backward_logistic_reg_out)
+.set_attr<FCompute>("FCompute<gpu>", RegressionBackward<gpu, mshadow_op::minus>);
+
 }  // namespace op
 }  // namespace mxnet
-
diff --git a/tests/python/unittest/test_operator.py b/tests/python/unittest/test_operator.py
index 742d055..640cd34 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -244,7 +244,9 @@ def test_regression():
     check_regression(mx.symbol.LinearRegressionOutput,
                      lambda x: x,
                      lambda x, y : x - y)
-
+    check_regression(mx.symbol.MAERegressionOutput,
+                     lambda x: x,
+                     lambda x, y : np.where(x > y, np.ones(x.shape), -np.ones(x.shape)))
 
 def check_softmax_grad(xpu):
     x = mx.sym.Variable('x')
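
The new test asserts that MAE's gradient is the elementwise sign of (pred - label). A hedged autograd sketch of the same check in imperative code (this is not the test's check_regression helper, just an illustration; it assumes an MXNet build with these ops):

    import numpy as np
    import mxnet as mx

    x = np.random.uniform(-1, 1, (4, 2))
    y = np.random.uniform(-1, 1, (4, 2))
    data, label = mx.nd.array(x), mx.nd.array(y)

    data.attach_grad()
    with mx.autograd.record():
        out = mx.nd.MAERegressionOutput(data=data, label=label)
    out.backward()

    # expected gradient: sign(pred - label), scaled by 1/num_output = 1/2
    expected = np.where(x > y, 1.0, -1.0) / 2
    np.testing.assert_allclose(data.grad.asnumpy(), expected, rtol=1e-5)
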


[incubator-mxnet] 02/03: update news.md (#191)


haibin pushed a commit to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 9a5819687f16ea7cd611bca7b4bcb809d4186d9d
Author: Haibin Lin <li...@gmail.com>
AuthorDate: Sat Jan 27 18:18:35 2018 -0800

    update news.md (#191)
    
    * Update NEWS.md
    
    * Update README.md
---
 NEWS.md   | 37 +++++++++++++++++++++++++++++++++++++
 README.md |  1 +
 2 files changed, 38 insertions(+)

diff --git a/NEWS.md b/NEWS.md
index fc6b101..6e116c5 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,5 +1,42 @@
 MXNet Change Log
 ================
+## 1.1.0
+### Usability Improvements
+- Improved the usability of examples and tutorials
+### Bug-fixes
+- Fixed I/O multiprocessing issues: too many open file handles (#8904), a race condition (#8995), and a deadlock (#9126).
+- Fixed image I/O integration with OpenCV 3.3 (#8757).
+- Fixed Gluon block printing (#8956).
+- Fixed float16 argmax when the input contains negative values (#9149).
+- Fixed the random number generator to ensure sufficient randomness (#9119, #9256, #9300).
+- Fixed custom operator multi-GPU scaling (#9283).
+- Fixed the gradient of gather_nd when duplicate entries exist in the index (#9200).
+- Fixed overridden contexts in the Module `group2ctx` option when using multiple contexts (#8867).
+### New Features
+- Added an experimental API in `contrib.text` for building a vocabulary and loading pre-trained word embeddings, with built-in support for 307 GloVe and FastText pre-trained embeddings (#8763).
+- Added experimental structural blocks in `gluon.contrib`: `Concurrent`, `HybridConcurrent`, `Identity`. (#9427)
+- Added `sparse.dot(dense, csr)` operator (#8938)
+- Added `Khatri-Rao` operator (#7781)
+- Added `FTML` and `Signum` optimizer (#9220, #9262)
+- Added `ENABLE_CUDA_RTC` build option (#9428)
+### API Changes
+- Added zero gradients to rounding operators including `rint`, `ceil`, `floor`, `trunc`, and `fix` (#9040)
+- Added `use_global_stats` in `nn.BatchNorm` (#9420)
+- Added `axis` argument to `SequenceLast`, `SequenceMask` and `SequenceReverse` operators (#9306)
+- Added `lazy_update` option for standard `SGD` & `Adam` optimizer with `row_sparse` gradients (#9468, #9189)
+- Added `select` option in `Block.collect_params` to support regex (#9348)
+- Added support for one-to-one and sequence-to-one inference on explicitly unrolled RNN models in R (#9022)
+### Deprecations
+- The Scala API namespace is still called `ml.dmlc`. It is likely to be changed to `org.apache` in a future release, which might break existing applications and scripts (#9579, #9324)
+### Performance Improvements
+- Improved GPU inference speed by 20% when batch size is 1 (#9055)
+- Improved `SequenceLast` operator speed (#9306)
+- Added multithreading for the class of broadcast_reduce operators on CPU (#9444)
+- Improved batching for GEMM/TRSM operators with large matrices on GPU (#8846)
+### Known Issues
+- "Predict with pre-trained models" tutorial is broken
+
+
 ## 1.0.0
 ### Performance
   - Enhanced the performance of `sparse.dot` operator.
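
Of the new features listed above, the sparse.dot(dense, csr) support (#8938) is easy to exercise. A minimal hedged sketch, assuming MXNet >= 1.1.0 with the sparse ndarray API:

    import mxnet as mx

    dense = mx.nd.array([[1, 0, 2], [0, 3, 0]])
    csr = mx.nd.array([[0, 1], [2, 0], [0, 0]]).tostype("csr")

    # dense x csr, new in 1.1.0 per the changelog entry above
    print(mx.nd.sparse.dot(dense, csr).asnumpy())  # [[0. 1.] [6. 0.]]
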
diff --git a/README.md b/README.md
index feff029..dbae65d 100644
--- a/README.md
+++ b/README.md
@@ -22,6 +22,7 @@ deep learning systems, and interesting insights of DL systems for hackers.
 
 What's New
 ----------
+* [Version 1.1.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.1.0) - MXNet 1.1.0 Release.
 * [Version 1.0.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.0.0) - MXNet 1.0.0 Release.
 * [Version 0.12.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.1) - MXNet 0.12.1 Patch Release.
 * [Version 0.12.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.0) - MXNet 0.12.0 Release.
