Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/09/21 21:07:54 UTC

[GitHub] [tvm] vinx13 opened a new pull request, #12864: [TOPI] Add layer norm operator

vinx13 opened a new pull request, #12864:
URL: https://github.com/apache/tvm/pull/12864

   This PR adds a tuple-sum based implementation of layer norm. It performs a one-pass reduction that computes the mean and variance at the same time.
   A reducer pattern is also added so that `LowerCrossThreadReduction` can handle this case.
   On CUDA, this generates two kernels: one for the reduction and one for the elementwise operations. Due to current limitations of `compute_at`, we are not yet able to fuse them into a single kernel.
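   The one-pass idea above can be sketched in NumPy terms: instead of a second pass over the data for variance, the pair (sum, sum of squares) is reduced together and the variance is recovered as E[x^2] - E[x]^2. This is an illustrative sketch only, not the TOPI/TIR implementation; the helper name is hypothetical.

```python
import numpy as np

def one_pass_mean_var(x, axis=-1):
    # Reduce (sum, sum of squares) in a single traversal; this mirrors
    # a tuple-sum reduction where both accumulators advance together.
    n = x.shape[axis]
    s = np.sum(x, axis=axis)       # accumulator 1: sum
    sq = np.sum(x * x, axis=axis)  # accumulator 2: sum of squares
    mean = s / n
    var = sq / n - mean * mean     # E[x^2] - E[x]^2
    return mean, var

x = np.random.rand(4, 8).astype("float32")
mean, var = one_pass_mean_var(x, axis=-1)
```

   In the actual kernel both accumulators live in one reducer so the cross-thread reduction lowers them as a single tuple reduction.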
   
   cc @MasterJH5574 @junrushao @AndrewZhaoLuo 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] MasterJH5574 commented on a diff in pull request #12864: [TOPI] Add layer norm operator

Posted by GitBox <gi...@apache.org>.
MasterJH5574 commented on code in PR #12864:
URL: https://github.com/apache/tvm/pull/12864#discussion_r977092902


##########
include/tvm/topi/nn/layer_norm.h:
##########
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \brief layer normalization op constructions
+ * \file nn/layer_norm.h
+ */
+#ifndef TVM_TOPI_NN_LAYER_NORM_H_
+#define TVM_TOPI_NN_LAYER_NORM_H_
+
+#include <tvm/te/operation.h>
+#include <tvm/topi/tags.h>
+
+#include <string>
+
+namespace tvm {
+namespace topi {
+namespace nn {
+
+using namespace tvm::te;
+
+/*!
+ * \brief Layer normalization.
+ * \param data N-D tensor with shape [d_0, d_1, ..., d_n]
+ * \param gamma R-D tensor with shape [r_0, r_1, ..., r_k] where R == len(axis) and d_{axis_i} ==
+ *              r_i
+ * \param beta Optional, R-D tensor with shape [r_0, r_1, ..., r_k] where R == len(axis) and
+ *             d_{axis_i} == r_i

Review Comment:
   For `data`, should the shape be `[d_0, ..., d_{n-1}]`? (Ditto for the documentation on the Python side.)
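   For reference, the shape contract under discussion can be checked against a NumPy sketch of layer norm: `data` has n dimensions `[d_0, ..., d_{n-1}]`, while `gamma`/`beta` carry only the normalized axes. This is a hypothetical reference implementation for shape checking, not the TOPI code.

```python
import numpy as np

def layer_norm_ref(data, gamma, beta=None, axis=(-1,), eps=1e-5):
    # Normalize over `axis`; gamma/beta broadcast over the kept axes.
    mean = data.mean(axis=axis, keepdims=True)
    var = data.var(axis=axis, keepdims=True)
    out = (data - mean) / np.sqrt(var + eps) * gamma
    if beta is not None:
        out = out + beta
    return out

data = np.random.rand(2, 3, 4).astype("float32")  # [d_0, d_1, d_2]
gamma = np.ones((4,), dtype="float32")            # shape of axis=(-1,)
out = layer_norm_ref(data, gamma, axis=(-1,))
```
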
   



##########
python/tvm/topi/nn/__init__.py:
##########
@@ -38,6 +38,7 @@
 from .bnn import *
 from .qnn import *
 from .upsampling import *
+from .layer_norm import layer_norm

Review Comment:
   What about importing `*` :eyes:? I see that all the other imports use `*`.
   ```suggestion
   from .layer_norm import *
   ```





[GitHub] [tvm] junrushao commented on a diff in pull request #12864: [TOPI] Add layer norm operator

Posted by GitBox <gi...@apache.org>.
junrushao commented on code in PR #12864:
URL: https://github.com/apache/tvm/pull/12864#discussion_r977142221


##########
python/tvm/topi/nn/__init__.py:
##########
@@ -38,6 +38,7 @@
 from .bnn import *
 from .qnn import *
 from .upsampling import *
+from .layer_norm import layer_norm

Review Comment:
   Wildcard importing is actually not a good idea though lol





[GitHub] [tvm] vinx13 commented on a diff in pull request #12864: [TOPI] Add layer norm operator

Posted by GitBox <gi...@apache.org>.
vinx13 commented on code in PR #12864:
URL: https://github.com/apache/tvm/pull/12864#discussion_r978051047


##########
python/tvm/topi/nn/__init__.py:
##########
@@ -38,6 +38,7 @@
 from .bnn import *
 from .qnn import *
 from .upsampling import *
+from .layer_norm import layer_norm

Review Comment:
   Agreed, so I avoided using a wildcard here. Perhaps we should clean up this file in the future.
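   The trade-off being discussed can be demonstrated concretely: a wildcard import pulls in every public name from the submodule, while an explicit import keeps the package namespace auditable. The stand-in module below is illustrative, not the actual TVM package layout.

```python
import sys
import types

# Build a throwaway module with two public names to stand in for a
# submodule like layer_norm (names here are purely illustrative).
mod = types.ModuleType("fake_layer_norm")
exec("def layer_norm(x): return x\nhelper = 42", mod.__dict__)
sys.modules["fake_layer_norm"] = mod

ns_wild = {}
exec("from fake_layer_norm import *", ns_wild)       # wildcard style
ns_explicit = {}
exec("from fake_layer_norm import layer_norm", ns_explicit)  # explicit
```

   The wildcard namespace picks up `helper` as well as `layer_norm`; the explicit one contains only the name that was asked for, which is why keeping the import explicit makes later cleanup easier.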





[GitHub] [tvm] vinx13 merged pull request #12864: [TOPI] Add layer norm operator

Posted by GitBox <gi...@apache.org>.
vinx13 merged PR #12864:
URL: https://github.com/apache/tvm/pull/12864

