Posted to commits@mxnet.apache.org by je...@apache.org on 2022/11/21 17:03:16 UTC

[mxnet] branch v1.9.x updated: [v1.9.x] TLP Updates (#21148)

This is an automated email from the ASF dual-hosted git repository.

jevans pushed a commit to branch v1.9.x
in repository https://gitbox.apache.org/repos/asf/mxnet.git


The following commit(s) were added to refs/heads/v1.9.x by this push:
     new 26a5ad1f39 [v1.9.x] TLP Updates (#21148)
26a5ad1f39 is described below
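
For readers who want the complete patch rather than this partially truncated listing, the commit can be inspected locally with stock git. A minimal sketch, assuming only a git client and access to the repository named above:

    # Clone the ASF-hosted repository and display the full commit
    git clone https://gitbox.apache.org/repos/asf/mxnet.git
    git -C mxnet show 26a5ad1f39784a60d1564f6f740e5c7bd971cd65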

commit 26a5ad1f39784a60d1564f6f740e5c7bd971cd65
Author: Joe Evans <jo...@gmail.com>
AuthorDate: Mon Nov 21 09:02:56 2022 -0800

    [v1.9.x] TLP Updates (#21148)
    
    * Update repo URLs and website to remove Incubating references.
    
    * Update repo to remove references to Apache Incubator, update website, remove DISCLAIMER.
    
    * Update license check configuration.
    
    * Update license check configuration.
    
    * Update license check configuration.
    
    * Update license check configuration.
    
    * Update license check configuration.
    
    * Add Apache 2.0 license to files without it.
    
    * Add Apache 2.0 license to files without it.
    
    * Remove references to DISCLAIMER in build scripts / configs.
    
    * Rearrange dependencies for ubuntu_tutorials to prevent pip hangs.
    
    * Change node type for Cpp: MKLDNN+GPU builds.
    
    * Update node type
    
    * Add missing node assign for G4 node type.
---
 .licenserc.yaml                                    |   6 +-
 .travis.yml                                        |   4 +-
 CMakeLists.txt                                     |  19 +-
 CONTRIBUTORS.md                                    |   6 +-
 DISCLAIMER                                         |  10 -
 LICENSE                                            |   4 +-
 Makefile                                           |   4 +-
 NEWS.md                                            |  96 +++++-----
 NOTICE                                             |   2 +-
 R-package/DESCRIPTION                              |   6 +-
 README.md                                          |  46 ++---
 benchmark/opperf/README.md                         |   6 +-
 benchmark/opperf/nd_operations/misc_operators.py   |   2 +-
 benchmark/opperf/utils/benchmark_utils.py          |   6 +-
 benchmark/opperf/utils/op_registry_utils.py        |   4 +-
 cd/README.md                                       |   2 +-
 cd/python/pypi/README.md                           |   2 +-
 cd/python/pypi/pypi_package.sh                     |   2 +-
 cd/utils/artifact_repository.md                    |   2 +-
 cd/utils/requirements.txt                          |  16 ++
 ci/docker/install/requirements                     |   2 +-
 ci/docker/install/ubuntu_tutorials.sh              |   8 +-
 ci/docker/runtime_functions.sh                     |   9 +-
 ci/jenkins/Jenkins_steps.groovy                    |   4 +-
 ci/jenkins/Jenkinsfile_centos_gpu                  |   2 +-
 ci/jenkins/Jenkinsfile_unix_cpu                    |   4 +-
 ci/jenkins/Jenkinsfile_unix_gpu                    |   2 +-
 ci/publish/website/deploy.sh                       |  12 +-
 ci/requirements.txt                                |  16 ++
 contrib/clojure-package/README.md                  |  20 +-
 .../examples/rnn/src/rnn/train_char_rnn.clj        |   2 +-
 contrib/clojure-package/project.clj                |   2 +-
 cpp-package/README.md                              |   8 +-
 cpp-package/example/README.md                      |  28 +--
 cpp-package/example/inference/README.md            |  18 +-
 cpp-package/include/mxnet-cpp/contrib.h            |   6 +-
 cpp-package/include/mxnet-cpp/symbol.hpp           |   2 +-
 cpp-package/tests/ci_test.sh                       |   2 +-
 doap.rdf                                           |   6 +-
 docker/docker-python/README.md                     |   8 +-
 docs/README.md                                     |  10 +-
 docs/python_docs/python/scripts/conf.py            |   2 +-
 .../python/tutorials/deploy/export/onnx.md         |   8 +-
 .../python/tutorials/extend/custom_layer.md        |  10 +-
 .../gluon_from_experiment_to_deployment.md         |   6 +-
 .../gluon/blocks/custom_layer_beginners.md         |  10 +-
 .../packages/gluon/blocks/save_load_params.md      |   4 +-
 .../python/tutorials/packages/gluon/loss/loss.md   |   2 +-
 .../packages/gluon/training/fit_api_tutorial.md    |   2 +-
 .../packages/ndarray/gotchas_numpy_in_mxnet.md     |   4 +-
 .../tutorials/packages/ndarray/sparse/train.md     |   2 +-
 .../packages/ndarray/sparse/train_gluon.md         |   2 +-
 .../python/tutorials/performance/backend/amp.md    |   2 +-
 .../backend/mkldnn/mkldnn_quantization.md          |   8 +-
 .../performance/backend/mkldnn/mkldnn_readme.md    |  20 +-
 .../tutorials/performance/backend/profiler.md      |   4 +-
 .../python/tutorials/performance/index.rst         |   2 +-
 .../themes/mx-theme/mxtheme/footer.html            |   4 +-
 .../themes/mx-theme/mxtheme/header_top.html        |   2 +-
 docs/static_site/src/_config.yml                   |   2 +-
 docs/static_site/src/_config_beta.yml              |   2 +-
 docs/static_site/src/_config_prod.yml              |   2 +-
 docs/static_site/src/_includes/footer.html         |  12 +-
 .../src/_includes/get_started/cloud/cpu.md         |   2 +-
 .../src/_includes/get_started/cloud/gpu.md         |   2 +-
 .../_includes/get_started/devices/raspberry_pi.md  |   4 +-
 .../get_started/linux/python/cpu/docker.md         |   2 +-
 .../_includes/get_started/linux/python/cpu/pip.md  |   2 +-
 .../get_started/linux/python/gpu/docker.md         |   2 +-
 .../_includes/get_started/linux/python/gpu/pip.md  |   2 +-
 docs/static_site/src/_includes/header.html         |   3 +-
 docs/static_site/src/assets/img/asf_logo.svg       | 210 +++++++++++++++++++++
 docs/static_site/src/index.html                    |   2 +-
 docs/static_site/src/pages/api/api.html            |   2 +-
 .../pages/api/architecture/exception_handling.md   |   2 +-
 .../src/pages/api/architecture/note_engine.md      |   2 +-
 .../src/pages/api/architecture/program_model.md    |   2 +-
 .../cpp/docs/tutorials/multi_threaded_inference.md |  14 +-
 .../docs/tutorials/mxnet_cpp_inference_tutorial.md |  22 +--
 docs/static_site/src/pages/api/cpp/index.md        |   8 +-
 ...github_contribution_and_PR_verification_tips.md |   6 +-
 .../exception_handing_and_custom_error_types.md    |   2 +-
 .../src/pages/api/faq/add_op_in_backend.md         |   8 +-
 docs/static_site/src/pages/api/faq/cloud.md        |   2 +-
 .../src/pages/api/faq/distributed_training.md      |   8 +-
 docs/static_site/src/pages/api/faq/env_var.md      |   6 +-
 docs/static_site/src/pages/api/faq/float16.md      |  10 +-
 .../src/pages/api/faq/gradient_compression.md      |   4 +-
 .../src/pages/api/faq/large_tensor_support.md      |   6 +-
 docs/static_site/src/pages/api/faq/perf.md         |   2 +-
 .../java/docs/tutorials/mxnet_java_on_intellij.md  |   6 +-
 .../pages/api/java/docs/tutorials/ssd_inference.md |   6 +-
 .../src/pages/api/r/docs/tutorials/symbol.md       |   2 +-
 .../pages/api/scala/docs/tutorials/char_lstm.md    |   6 +-
 .../src/pages/api/scala/docs/tutorials/infer.md    |   6 +-
 .../src/pages/api/scala/docs/tutorials/io.md       |   6 +-
 .../docs/tutorials/mxnet_scala_on_intellij.md      |   8 +-
 docs/static_site/src/pages/api/scala/index.md      |   4 +-
 docs/static_site/src/pages/community/contribute.md |  18 +-
 docs/static_site/src/pages/ecosystem.html          |   2 +-
 .../src/pages/get_started/build_from_source.md     |  16 +-
 docs/static_site/src/pages/get_started/download.md |   2 +-
 docs/static_site/src/pages/get_started/index.html  |   2 +-
 .../src/pages/get_started/java_setup.md            |   6 +-
 .../src/pages/get_started/jetson_setup.md          |   4 +-
 .../static_site/src/pages/get_started/osx_setup.md |   4 +-
 .../src/pages/get_started/scala_setup.md           |  10 +-
 .../src/pages/get_started/ubuntu_setup.md          |  12 +-
 .../src/pages/get_started/validate_mxnet.md        |   2 +-
 .../src/pages/get_started/windows_setup.md         |  28 +--
 example/README.md                                  |   4 +-
 example/cnn_chinese_text_classification/README.md  |   2 +-
 example/ctc/README.md                              |   2 +-
 example/ctc/lstm_ocr_train.py                      |   2 +-
 example/distributed_training/README.md             |   2 +-
 example/gluon/audio/urban_sounds/requirements.txt  |  18 +-
 example/gluon/lipnet/requirements.txt              |  16 ++
 example/gluon/sn_gan/data.py                       |   2 +-
 example/gluon/sn_gan/model.py                      |   2 +-
 example/gluon/sn_gan/train.py                      |   2 +-
 example/gluon/sn_gan/utils.py                      |   2 +-
 .../predict-cpp/CMakeLists.txt                     |  17 ++
 example/model-parallel/README.md                   |   2 +-
 example/module/README.md                           |   6 +-
 example/multi_threaded_inference/README.md         |   2 +-
 .../multi_threaded_inference.cc                    |   2 +-
 example/named_entity_recognition/README.md         |   4 +-
 example/onnx/README.md                             |   2 +-
 example/onnx/cv_model_inference.py                 |   2 +-
 example/quantization/README.md                     |   6 +-
 example/sparse/README.md                           |   8 +-
 example/speech_recognition/README.md               |   2 +-
 example/ssd/README.md                              |  26 +--
 julia/NEWS.md                                      |   4 +-
 julia/README.md                                    |   2 +-
 julia/deps/build.jl                                |   6 +-
 julia/docs/mkdocs.yml                              |   2 +-
 julia/docs/src/index.md                            |   2 +-
 julia/docs/src/tutorial/char-lstm.md               |  10 +-
 julia/docs/src/tutorial/mnist.md                   |   2 +-
 julia/src/autograd.jl                              |   2 +-
 julia/src/deprecated.jl                            |   2 +-
 julia/test/unittest/ndarray.jl                     |   2 +-
 .../mxnet/contrib/onnx/onnx2mx/_op_translations.py |   4 +-
 python/mxnet/error.py                              |   2 +-
 python/mxnet/gluon/block.py                        |   4 +-
 python/mxnet/gluon/contrib/data/text.py            |   2 +-
 python/mxnet/gluon/utils.py                        |   2 +-
 python/mxnet/onnx/mx2onnx/_export_model.py         |   2 +-
 python/mxnet/onnx/setup.py                         |   2 +-
 python/setup.py                                    |   2 +-
 rat-excludes                                       |   1 -
 scala-package/README.md                            |  12 +-
 scala-package/dev/compile-mxnet-backend.sh         |   2 +-
 .../javaapi/infer/objectdetector/README.md         |   2 +-
 .../javaapi/infer/predictor/README.md              |   2 +-
 .../org/apache/mxnetexamples/benchmark/README.md   |  16 +-
 .../mxnetexamples/cnntextclassification/README.md  |   2 +-
 .../org/apache/mxnetexamples/customop/README.md    |   2 +-
 .../imageclassifier/ImageClassifierExample.scala   |   2 +-
 .../mxnetexamples/infer/imageclassifier/README.md  |  12 +-
 .../mxnetexamples/infer/objectdetector/README.md   |   2 +-
 .../objectdetector/SSDClassifierExample.scala      |   2 +-
 scala-package/memory-management.md                 |   4 +-
 scala-package/mxnet-demo/java-demo/README.md       |   2 +-
 scala-package/mxnet-demo/scala-demo/README.md      |   2 +-
 scala-package/pom.xml                              |   8 +-
 setup-utils/install-mxnet-osx-python.sh            |   2 +-
 src/imperative/cached_op.cc                        |   2 +-
 src/operator/linalg_impl.h                         |   2 +-
 src/operator/nn/fully_connected-inl.h              |   2 +-
 src/operator/nn/mkldnn/mkldnn_base-inl.h           |   2 +-
 tests/CMakeLists.txt                               |  16 ++
 tests/cpp/operator/batchnorm_test.cc               |   2 +-
 tests/jenkins/run_test_installation_docs.sh        |  10 +-
 tests/nightly/test_large_array.py                  |  20 +-
 tests/nightly/test_large_vector.py                 |   8 +-
 tests/python-pytest/onnx/test_onnxruntime_cv.py    |  70 +++----
 tests/python/gpu/test_contrib_amp.py               |   2 +-
 tests/python/gpu/test_gluon_gpu.py                 |   2 +-
 tests/python/gpu/test_operator_gpu.py              |   8 +-
 tests/python/mkl/test_mkldnn.py                    |   2 +-
 tests/python/quantization/test_quantization.py     |   2 +-
 tests/python/unittest/test_contrib_svrg_module.py  |  12 +-
 tests/python/unittest/test_executor.py             |   2 +-
 tests/python/unittest/test_gluon.py                |  36 ++--
 tests/python/unittest/test_gluon_utils.py          |   2 +-
 tests/python/unittest/test_loss.py                 |   4 +-
 tests/python/unittest/test_module.py               |   2 +-
 tests/python/unittest/test_ndarray.py              |  12 +-
 tests/python/unittest/test_numpy_ndarray.py        |   2 +-
 tests/python/unittest/test_numpy_op.py             |   4 +-
 tests/python/unittest/test_operator.py             |  36 ++--
 tests/python/unittest/test_profiler.py             |   2 +-
 tests/python/unittest/test_random.py               |  16 +-
 tests/python/unittest/test_sparse_ndarray.py       |   2 +-
 tests/python/unittest/test_sparse_operator.py      |   2 +-
 tests/python/unittest/test_symbol.py               |   2 +-
 tests/python/unittest/test_test_utils.py           |   2 +-
 tests/requirements.txt                             |  19 +-
 tests/tutorials/test_tutorials.py                  |   6 +-
 tools/caffe_translator/README.md                   |  10 +-
 tools/caffe_translator/build.gradle                |   6 +-
 tools/caffe_translator/build_from_source.md        |   2 +-
 tools/coreml/pip_package/README.rst                |  10 +-
 tools/coreml/pip_package/setup.py                  |   2 +-
 tools/create_source_archive.sh                     |   2 +-
 tools/dependencies/README.md                       |  18 +-
 tools/diagnose.py                                  |   2 +-
 tools/pip/MANIFEST.in                              |   1 -
 tools/pip/doc/PYPI_README.md                       |   2 +-
 tools/pip/setup.py                                 |   2 +-
 tools/staticbuild/build.sh                         |   1 -
 tools/windowsbuild/README.md                       |   2 +-
 214 files changed, 989 insertions(+), 661 deletions(-)

diff --git a/.licenserc.yaml b/.licenserc.yaml
index 23660fc039..e5df6f059c 100644
--- a/.licenserc.yaml
+++ b/.licenserc.yaml
@@ -18,7 +18,6 @@ header:
     - '.gitmodules'
     - '.licenserc.yaml'
     - 'CODEOWNERS'
-    - 'DISCLAIMER'
     - 'KEYS'
     - 'python/mxnet/_cy3/README'
     - 'tools/dependencies/LICENSE.binary.dependencies'
@@ -79,6 +78,11 @@ header:
     - 'include/dmlc' # symlink to 3rdparty/dmlc-core/include/dmlc
     - 'include/mshadow' # symlink to 3rdparty/mshadow/mshadow
     - 'include/mkldnn' # symlinks to 3rdparty/mkldnn
+    - 'include/nnvm' # symlinks to 3rdparty/tvm/nnvm/include/nnvm
+    - 'example/automatic-mixed-precision/common' # symlinks to example/image-classification/common
+    - 'example/quantization/common' # symlinks to example/image-classification/common
+    - 'scala-package/packageTest/core/scripts' # symlinks to scala-package/core/scripts
+    - 'scala-package/packageTest/examples/scripts' # symlinks to scala-package/examples/scripts
     # test/build data
     - 'contrib/clojure-package/examples/imclassification/test/test-symbol.json.ref'
     - 'example/speech_recognition/resources/unicodemap_en_baidu.csv'
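
The `.licenserc.yaml` hunk above configures the license-header check referenced by the "Update license check configuration" commits. A minimal sketch of running that check locally, assuming the Apache skywalking-eyes `license-eye` tool (the usual consumer of this config format) and a Go toolchain:

    # Install the header-check tool (assumes Go is available)
    go install github.com/apache/skywalking-eyes/cmd/license-eye@latest
    # Verify every file not matched by paths-ignore carries the Apache 2.0 header
    license-eye -c .licenserc.yaml header check

The newly ignored paths are symlinks into third-party or duplicated trees, so their targets are either licensed separately or already checked at their canonical locations.
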
diff --git a/.travis.yml b/.travis.yml
index bccf989aef..32bb127ce5 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -48,13 +48,13 @@ install:
 # https://docs.travis-ci.com/user/reference/overview/
 script:
 # Temporarily disable travis build due to travis constantly time out, tracked in
-# https://github.com/apache/incubator-mxnet/issues/16535:
+# https://github.com/apache/mxnet/issues/16535:
   - export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
   - export MXNET_SUBGRAPH_VERBOSE=0
   - mv make/osx.mk config.mk
 #  - make -j 2
 
-  # Temporarily disabled due to https://github.com/apache/incubator-mxnet/issues/13136
+  # Temporarily disabled due to https://github.com/apache/mxnet/issues/13136
   # We ignore several tests to avoid possible timeouts on large PRs.
   # This lowers our test coverage, but is required for consistent Travis runs.
   # These tests will be tested in a variety of environments in Jenkins based tests.
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 098ae34ab1..23f1654966 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,3 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
 cmake_minimum_required(VERSION 3.13)
 
 # workaround to store CMAKE_CROSSCOMPILING because is getting reset by the project command
@@ -266,7 +283,7 @@ if(USE_TENSORRT)
 endif()
 
 # please note that when you enable this, you might run into a linker not being able to work properly due to large code injection.
-# you can find more information here https://github.com/apache/incubator-mxnet/issues/15971
+# you can find more information here https://github.com/apache/mxnet/issues/15971
 if(ENABLE_TESTCOVERAGE)
   message(STATUS "Compiling with test coverage support enabled. This will result in additional files being written to your source directory!")
   find_program( GCOV_PATH gcov )
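
For context on the hunk above: ENABLE_TESTCOVERAGE is an ordinary CMake option toggled at configure time. A minimal sketch, assuming an out-of-source build directory (this CMakeLists.txt already requires CMake >= 3.13, so the -S/-B flags are available):

    # Configure with coverage instrumentation; as the message above warns,
    # this writes additional coverage files into the source directory
    cmake -S . -B build -DENABLE_TESTCOVERAGE=ON
    cmake --build build --parallel
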
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index f05421805f..db16519ec5 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -15,7 +15,7 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-Contributors of Apache MXNet (incubating)
+Contributors of Apache MXNet
 =========================================
 MXNet has been developed by a community of people who are interested in large-scale machine learning and deep learning.
 Everyone is more than welcome to contribute. It is a great way to make the project better and more accessible to more users.
@@ -86,7 +86,7 @@ New committers will be proposed by current committers, with support from more th
 
 List of Contributors
 --------------------
-* [Full List of Contributors](https://github.com/apache/incubator-mxnet/graphs/contributors)
+* [Full List of Contributors](https://github.com/apache/mxnet/graphs/contributors)
   - To contributors: please add your name to the list when you submit a patch to the project:)
 * [Feng Wang](https://github.com/happynear)
   - Feng makes MXNet compatible with Windows Visual Studio.
@@ -267,5 +267,5 @@ Label Bot
     - @mxnet-label-bot update [specify comma separated labels here]  
       (i.e. @mxnet-label-bot update [Bug, Python])
 
-  - Available label names which are supported: [Labels](https://github.com/apache/incubator-mxnet/labels)
+  - Available label names which are supported: [Labels](https://github.com/apache/mxnet/labels)
   - For further details: [My Wiki Page](https://cwiki.apache.org/confluence/display/MXNET/Machine+Learning+Based+GitHub+Bot)
diff --git a/DISCLAIMER b/DISCLAIMER
deleted file mode 100644
index eacaa1b85b..0000000000
--- a/DISCLAIMER
+++ /dev/null
@@ -1,10 +0,0 @@
-Apache MXNet is an effort undergoing incubation at
-The Apache Software Foundation (ASF), sponsored by the name of Apache Incubator PMC.
-
-Incubation is required of all newly accepted projects until a further review
-indicates that the infrastructure, communications, and decision making process
-have stabilized in a manner consistent with other successful ASF projects.
-
-While incubation status is not necessarily a reflection of the completeness
-or stability of the code, it does indicate that the project has yet to be fully
-endorsed by the ASF.
diff --git a/LICENSE b/LICENSE
index aa9b781882..2b2095379f 100644
--- a/LICENSE
+++ b/LICENSE
@@ -202,9 +202,9 @@
    limitations under the License.
 
     ======================================================================================
-    Apache MXNET (incubating) Subcomponents:
+    Apache MXNET Subcomponents:
 
-    The Apache MXNET (incubating) project contains subcomponents with separate
+    The Apache MXNET project contains subcomponents with separate
    copyright notices and license terms. Your use of the source code for
    these subcomponents is subject to the terms and conditions of the following
     licenses. See licenses/ for text of these licenses.
diff --git a/Makefile b/Makefile
index d03505f7bf..55ca9dd00a 100644
--- a/Makefile
+++ b/Makefile
@@ -134,7 +134,7 @@ CFLAGS += -I$(TPARTYDIR)/mshadow/ -I$(TPARTYDIR)/dmlc-core/include -fPIC -I$(NNV
 LDFLAGS = -pthread -ldl $(MSHADOW_LDFLAGS) $(DMLC_LDFLAGS)
 
 # please note that when you enable this, you might run into a linker not being able to work properly due to large code injection.
-# you can find more information here https://github.com/apache/incubator-mxnet/issues/15971
+# you can find more information here https://github.com/apache/mxnet/issues/15971
 ifeq ($(ENABLE_TESTCOVERAGE), 1)
         CFLAGS += --coverage
         LDFLAGS += --coverage
@@ -429,7 +429,7 @@ endef
 
 $(warning WARNING: Archive utility: ar version being used is less than 2.27.0. $n \
 		   Note that with USE_CUDA=1 flag and USE_CUDNN=1 this is known to cause problems. $n \
-		   For more info see: https://github.com/apache/incubator-mxnet/issues/15084)
+		   For more info see: https://github.com/apache/mxnet/issues/15084)
 $(shell sleep 5)
 endif
 endif
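
The ar-version warning above applies to CUDA builds. A minimal sketch of the make invocation it refers to, using flags that appear in this Makefile (the job count is arbitrary):

    # Build with CUDA and cuDNN, the combination for which ar < 2.27.0
    # is known to cause problems
    make -j8 USE_CUDA=1 USE_CUDNN=1
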
diff --git a/NEWS.md b/NEWS.md
index 7c16eaad2c..a4e745a5f7 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1550,10 +1550,10 @@ Apache MXNet (incubating) 1.5.1 is a maintenance release incorporating important
 #### Automatic Mixed Precision(experimental)
 Training Deep Learning networks is a very computationally intensive task. Novel model architectures tend to have increasing numbers of layers and parameters, which slow down training. Fortunately, software optimizations and new generations of training hardware make it a feasible task.
 However, most of the hardware and software optimization opportunities exist in exploiting lower precision (e.g. FP16) to, for example, utilize Tensor Cores available on new Volta and Turing GPUs. While training in FP16 showed great success in image classification tasks, other more complicated neural networks typically stayed in FP32 due to difficulties in applying the FP16 training guidelines.
-That is where AMP (Automatic Mixed Precision) comes into play. It automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit, while conservatively keeping in full FP32 precision operations unsafe to do in FP16. To learn more about AMP, check out this [tutorial](https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/amp/amp_tutorial.md).
+That is where AMP (Automatic Mixed Precision) comes into play. It automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit, while conservatively keeping in full FP32 precision operations unsafe to do in FP16. To learn more about AMP, check out this [tutorial](https://github.com/apache/mxnet/blob/master/docs/tutorials/amp/amp_tutorial.md).
 
 #### MKL-DNN Reduced precision inference and RNN API support
-Two advanced features, fused computation and reduced-precision kernels, are introduced by MKL-DNN in the recent version. These features can significantly speed up the inference performance on CPU for a broad range of deep learning topologies. MXNet MKL-DNN backend provides optimized implementations for various operators covering a broad range of applications including image classification, object detection, and natural language processing. Refer to the [MKL-DNN operator documentation](ht [...]
+Two advanced features, fused computation and reduced-precision kernels, are introduced by MKL-DNN in the recent version. These features can significantly speed up the inference performance on CPU for a broad range of deep learning topologies. MXNet MKL-DNN backend provides optimized implementations for various operators covering a broad range of applications including image classification, object detection, and natural language processing. Refer to the [MKL-DNN operator documentation](ht [...]
 
 #### Dynamic Shape(experimental)
 MXNet now supports Dynamic Shape in both imperative and symbolic mode. MXNet used to require that operators statically infer the output shapes from the input shapes. However, there exist some operators that don't meet this requirement. Examples are:
@@ -2093,7 +2093,7 @@ Note: this feature is still experimental, for more details, refer to [design doc
 * Fixes installation nightly test by filtering out the git commands (#14144)
 * fix nightly test on tutorials (#14036)
 * Fix MXNet R package build (#13952)
-* re-enable test after issue fixed https://github.com/apache/incubator-mxnet/issues/10973 (#14032)
+* re-enable test after issue fixed https://github.com/apache/mxnet/issues/10973 (#14032)
 * Add back R tests and fix typo around R and perl tests (#13940)
 * Fix document build (#13927)
 * Temporarily disables windows pipeline to unblock PRs (#14261)
@@ -2106,7 +2106,7 @@ Note: this feature is still experimental, for more details, refer to [design doc
 * Rearrange tests written only for update_on_kvstore = True (#13514)
 * add batch norm test (#13625)
 * Adadelta optimizer test (#13443)
-* Skip flaky test https://github.com/apache/incubator-mxnet/issues/13446 (#13480)
+* Skip flaky test https://github.com/apache/mxnet/issues/13446 (#13480)
 * Comment out test_unix_python3_tensorrt_gpu step (#14642)
 * Enable bulking test on windows (#14392)
 * rewrote the concat test to avoid flaky failures (#14049)
@@ -2544,11 +2544,11 @@ For distributed training, the `Reduce` communication patterns used by NCCL and M
   * multiple trees (bandwidth-optimal for large messages) to handle `Reduce` on large messages
 
 More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication)
-Note: This is an experimental feature and has known problems - see [13341](https://github.com/apache/incubator-mxnet/issues/13341). Please help contribute to improve the robustness of the feature.
+Note: This is an experimental feature and has known problems - see [13341](https://github.com/apache/mxnet/issues/13341). Please help contribute to improve the robustness of the feature.
 
 #### MKLDNN backend: Graph optimization and Quantization (experimental)
 
-Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to MKLDNN backend in this release ([#12530](https://github.com/apache/incubator-mxnet/pull/12530), [#13297](https://github.com/apache/incubator-mxnet/pull/13297), [#13260](https://github.com/apache/incubator-mxnet/pull/13260)).
+Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to MKLDNN backend in this release ([#12530](https://github.com/apache/mxnet/pull/12530), [#13297](https://github.com/apache/mxnet/pull/13297), [#13260](https://github.com/apache/mxnet/pull/13260)).
 These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for inference on platforms with [supported Intel CPUs](https://github.com/intel/mkl-dnn#system-requirements).
 
 ##### Graph Optimization
@@ -2557,7 +2557,7 @@ MKLDNN backend takes advantage of MXNet subgraph to implement the most of possib
 ##### Quantization
 Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU Platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can run most of the pre-trained models with only a few lines of commands and a new quantization script imagenet_gen_qsym_mkldnn.py. The observed accuracy loss is less than 0.5% for popular CNN  [...]
 
-Please find detailed information and performance/accuracy numbers here: [MKLDNN README](https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/mkldnn/mkldnn_readme.html), [quantization README](https://github.com/apache/incubator-mxnet/tree/master/example/quantization#1) and [design proposal](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN)
+Please find detailed information and performance/accuracy numbers here: [MKLDNN README](https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/mkldnn/mkldnn_readme.html), [quantization README](https://github.com/apache/mxnet/tree/master/example/quantization#1) and [design proposal](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN)
 
 ### New Operators
 
@@ -2680,7 +2680,7 @@ Please find detailed information and performance/accuracy numbers here: [MKLDNN
 * [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739)
 
 #### Julia
-* Import Julia binding (#10149), how to use is available at https://github.com/apache/incubator-mxnet/tree/master/julia
+* Import Julia binding (#10149), how to use is available at https://github.com/apache/mxnet/tree/master/julia
 
 ### Performance benchmarks and improvements
 * Update mshadow for omp acceleration when nvcc is not present  (#12674)
@@ -2988,15 +2988,15 @@ Submodule@commit ID::Last updated by MXNet:: Last update in submodule
 
 ### Bug fixes
 
-* [MXNET-953] Fix oob memory read (v1.3.x) / [#13118](https://github.com/apache/incubator-mxnet/pull/13118)
+* [MXNET-953] Fix oob memory read (v1.3.x) / [#13118](https://github.com/apache/mxnet/pull/13118)
 Simple bugfix addressing an out-of-bounds memory read.
 
 
-* [MXNET-969] Fix buffer overflow in RNNOp (v1.3.x) / [#13119](https://github.com/apache/incubator-mxnet/pull/13119)
+* [MXNET-969] Fix buffer overflow in RNNOp (v1.3.x) / [#13119](https://github.com/apache/mxnet/pull/13119)
 This fixes a buffer overflow detected by ASAN.
 
 
-* CudnnFind() usage improvements (v1.3.x) / [#13123](https://github.com/apache/incubator-mxnet/pull/13123)
+* CudnnFind() usage improvements (v1.3.x) / [#13123](https://github.com/apache/mxnet/pull/13123)
  This PR improves MXNet's use of cudnnFind() to address a few issues:
   1. With the gluon imperative style, cudnnFind() is called during forward(), and so might have its timings perturbed by other GPU activity (including potentially other cudnnFind() calls).
  2. With some cuda driver versions, care is needed to ensure that the large I/O and workspace cudaMallocs() performed by cudnnFind() are immediately released and available to MXNet.
@@ -3009,24 +3009,24 @@ This fixes an buffer overflow detected by ASAN.
   4. Increased training performance based on being able to consistently run with models that approach the GPU's full global memory footprint.
   5. Adds a unittest for and solves issue #12662.
 
-* [MXNET-922] Fix memleak in profiler (v1.3.x) / [#13120](https://github.com/apache/incubator-mxnet/pull/13120)
+* [MXNET-922] Fix memleak in profiler (v1.3.x) / [#13120](https://github.com/apache/mxnet/pull/13120)
   Fix a memleak reported locally by ASAN during a normal inference test.
 
-* Fix lazy record io when used with dataloader and multi_worker > 0 (v1.3.x) / [#13124](https://github.com/apache/incubator-mxnet/pull/13124)
+* Fix lazy record io when used with dataloader and multi_worker > 0 (v1.3.x) / [#13124](https://github.com/apache/mxnet/pull/13124)
  Fixes multi_worker data loader when record file is used. The MXRecordIO instance needs to acquire a new file handler after fork to be safely manipulated simultaneously.
 
   This fix also safely voids the previous temporary fixes #12093 #11370.
 
-* fixed symbols naming in RNNCell, LSTMCell, GRUCell (v1.3.x) / [#13158](https://github.com/apache/incubator-mxnet/pull/13158)
+* fixed symbols naming in RNNCell, LSTMCell, GRUCell (v1.3.x) / [#13158](https://github.com/apache/mxnet/pull/13158)
   This fixes #12783, by assigning all nodes in hybrid_forward a unique name. Some operations were in fact performed without attaching the appropriate (time) prefix to the name, which makes serialized graphs non-deserializable.
 
-* Fixed `__setattr__` method of `_MXClassPropertyMetaClass` (v1.3.x) / [#13157](https://github.com/apache/incubator-mxnet/pull/13157)
+* Fixed `__setattr__` method of `_MXClassPropertyMetaClass` (v1.3.x) / [#13157](https://github.com/apache/mxnet/pull/13157)
   Fixed `__setattr__` method
 
-* allow foreach on input with 0 length (v1.3.x) / [#13151](https://github.com/apache/incubator-mxnet/pull/13151)
+* allow foreach on input with 0 length (v1.3.x) / [#13151](https://github.com/apache/mxnet/pull/13151)
   Fix #12470. With this change, outs shape can be inferred correctly.
 
-* Infer dtype in SymbolBlock import from input symbol (v1.3.x) / [#13117](https://github.com/apache/incubator-mxnet/pull/13117)
+* Infer dtype in SymbolBlock import from input symbol (v1.3.x) / [#13117](https://github.com/apache/mxnet/pull/13117)
   Fix for the issue - #11849
  Currently, Gluon symbol block cannot import any symbol with a type other than fp32. All the parameters are created as FP32, leading to failure in importing the params when they are of type fp16, fp64, etc.
   In this PR, we infer the type of the symbol being imported and create the Symbol Block Parameters with that inferred type.
@@ -3034,14 +3034,14 @@ This fixes an buffer overflow detected by ASAN.
 
 ### Documentation fixes
 
-* Document the newly added env variable (v1.3.x) / [#13156](https://github.com/apache/incubator-mxnet/pull/13156)
-  Document the env variable: MXNET_ENFORCE_DETERMINISM added in PR: [#12992](https://github.com/apache/incubator-mxnet/pull/12992)
+* Document the newly added env variable (v1.3.x) / [#13156](https://github.com/apache/mxnet/pull/13156)
+  Document the env variable: MXNET_ENFORCE_DETERMINISM added in PR: [#12992](https://github.com/apache/mxnet/pull/12992)
 
-* fix broken links (v1.3.x) / [#13155](https://github.com/apache/incubator-mxnet/pull/13155)
+* fix broken links (v1.3.x) / [#13155](https://github.com/apache/mxnet/pull/13155)
   This PR fixes broken links on the website.
 
-* fix broken Python IO API docs (v1.3.x) / [#13154](https://github.com/apache/incubator-mxnet/pull/13154)
-  Fixes [#12854: Data Iterators documentation is broken](https://github.com/apache/incubator-mxnet/issues/12854)
+* fix broken Python IO API docs (v1.3.x) / [#13154](https://github.com/apache/mxnet/pull/13154)
+  Fixes [#12854: Data Iterators documentation is broken](https://github.com/apache/mxnet/issues/12854)
 
  This PR manually specifies members of the IO module so that the docs will render as expected. This is a workaround in the docs to deal with a bug introduced in the Python code/structure since v1.3.0. See the comments for more info.
 
@@ -3049,7 +3049,7 @@ This fixes an buffer overflow detected by ASAN.
 
   This is important for any future modules - that they recognize this issue and make efforts to map the params and other elements.
 
-* add/update infer_range docs (v1.3.x) / [#13153](https://github.com/apache/incubator-mxnet/pull/13153)
+* add/update infer_range docs (v1.3.x) / [#13153](https://github.com/apache/mxnet/pull/13153)
   This PR adds or updates the docs for the infer_range feature.
 
   Clarifies the param in the C op docs
@@ -3060,20 +3060,20 @@ This fixes an buffer overflow detected by ASAN.
 
 ### Other Improvements
 
-* [MXNET-1179] Enforce deterministic algorithms in convolution layers (v1.3.x) / [#13152](https://github.com/apache/incubator-mxnet/pull/13152)
+* [MXNET-1179] Enforce deterministic algorithms in convolution layers (v1.3.x) / [#13152](https://github.com/apache/mxnet/pull/13152)
   Some of the CUDNN convolution algorithms are non-deterministic (see issue #11341). This PR adds an env variable to enforce determinism in the convolution operators. If set to true, only deterministic CUDNN algorithms will be used. If no deterministic algorithm is available, MXNet will error out.
 
 
 ### Submodule updates
 
-* update mshadow (v1.3.x) / [#13122](https://github.com/apache/incubator-mxnet/pull/13122)
+* update mshadow (v1.3.x) / [#13122](https://github.com/apache/mxnet/pull/13122)
   Update mshadow for omp acceleration when nvcc is not present
 
 ### Known issues
 
 The test test_operator.test_dropout has issues and has been disabled on the branch:
 
-* Disable flaky test test_operator.test_dropout (v1.3.x) / [#13200](https://github.com/apache/incubator-mxnet/pull/13200)
+* Disable flaky test test_operator.test_dropout (v1.3.x) / [#13200](https://github.com/apache/mxnet/pull/13200)
 
 
 
@@ -3083,14 +3083,14 @@ For more information and examples, see [full release notes](https://cwiki.apache
 ## 1.3.0
 
 ### New Features - Gluon RNN layers are now HybridBlocks
-- In this release, Gluon RNN layers such as `gluon.rnn.RNN`, `gluon.rnn.LSTM`, `gluon.rnn.GRU` become `HybridBlock`s as part of [gluon.rnn improvements project](https://github.com/apache/incubator-mxnet/projects/11) (#11482).
-- This is the result of newly available fused RNN operators added for CPU: LSTM([#10104](https://github.com/apache/incubator-mxnet/pull/10104)), vanilla RNN([#11399](https://github.com/apache/incubator-mxnet/pull/11399)), GRU([#10311](https://github.com/apache/incubator-mxnet/pull/10311))
+- In this release, Gluon RNN layers such as `gluon.rnn.RNN`, `gluon.rnn.LSTM`, `gluon.rnn.GRU` become `HybridBlock`s as part of [gluon.rnn improvements project](https://github.com/apache/mxnet/projects/11) (#11482).
+- This is the result of newly available fused RNN operators added for CPU: LSTM([#10104](https://github.com/apache/mxnet/pull/10104)), vanilla RNN([#11399](https://github.com/apache/mxnet/pull/11399)), GRU([#10311](https://github.com/apache/mxnet/pull/10311))
 - Many dynamic networks that are based on Gluon RNN layers can now be completely hybridized, exported, and used in the inference APIs in other language bindings such as R, Scala, etc.
 
 ### MKL-DNN improvements
 - Introducing more functionality support for MKL-DNN as follows:
-  - Added support for more activation functions like "sigmoid", "tanh", "softrelu". ([#10336](https://github.com/apache/incubator-mxnet/pull/10336))
-  - Added Debugging functionality: Result check ([#12069](https://github.com/apache/incubator-mxnet/pull/12069)) and Backend switch ([#12058](https://github.com/apache/incubator-mxnet/pull/12058)).
+  - Added support for more activation functions like "sigmoid", "tanh", "softrelu". ([#10336](https://github.com/apache/mxnet/pull/10336))
+  - Added Debugging functionality: Result check ([#12069](https://github.com/apache/mxnet/pull/12069)) and Backend switch ([#12058](https://github.com/apache/mxnet/pull/12058)).
 
 ### New Features - Gluon Model Zoo Pre-trained Models
 - Gluon Vision Model Zoo now provides MobileNetV2 pre-trained models (#10879) in addition to
@@ -3099,7 +3099,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
 - Updated pre-trained models provide state-of-the-art performance on all resnetv1, resnetv2, and vgg16, vgg19, vgg16_bn, vgg19_bn models (#11327 #11860 #11830).
 
 ### New Features - Clojure package (experimental)
-- MXNet now supports the Clojure programming language. The MXNet Clojure package brings flexible and efficient GPU computing and state-of-the-art deep learning to Clojure. It enables you to write seamless tensor/matrix computation with multiple GPUs in Clojure. It also lets you construct and customize state-of-the-art deep learning models in Clojure, and apply them to tasks such as image classification and data science challenges. ([#11205](https://github.com/apache/incubator-mxnet/pull/11205))
+- MXNet now supports the Clojure programming language. The MXNet Clojure package brings flexible and efficient GPU computing and state-of-the-art deep learning to Clojure. It enables you to write seamless tensor/matrix computation with multiple GPUs in Clojure. It also lets you construct and customize state-of-the-art deep learning models in Clojure, and apply them to tasks such as image classification and data science challenges. ([#11205](https://github.com/apache/mxnet/pull/11205))
 - Checkout examples and API documentation [here](https://mxnet.apache.org/api/clojure/index.html).
 
 ### New Features - Synchronized Cross-GPU Batch Norm (experimental)
@@ -3107,16 +3107,16 @@ For more information and examples, see [full release notes](https://cwiki.apache
 - This enables stable training on large-scale networks with high memory consumption such as FCN for image segmentation.
 
 ### New Features - Sparse Tensor Support for Gluon (experimental)
-- Sparse gradient support is added to `gluon.nn.Embedding`. Set `sparse_grad=True` to enable when constructing the Embedding block. ([#10924](https://github.com/apache/incubator-mxnet/pull/10924))
-- Gluon Parameter now supports "row_sparse" storage type, which reduces communication cost and memory consumption for multi-GPU training for large models. `gluon.contrib.nn.SparseEmbedding` is an example empowered by this. ([#11001](https://github.com/apache/incubator-mxnet/pull/11001), [#11429](https://github.com/apache/incubator-mxnet/pull/11429))
-- Gluon HybridBlock now supports hybridization with sparse operators ([#11306](https://github.com/apache/incubator-mxnet/pull/11306)).
+- Sparse gradient support is added to `gluon.nn.Embedding`. Set `sparse_grad=True` to enable when constructing the Embedding block. ([#10924](https://github.com/apache/mxnet/pull/10924))
+- Gluon Parameter now supports "row_sparse" storage type, which reduces communication cost and memory consumption for multi-GPU training for large models. `gluon.contrib.nn.SparseEmbedding` is an example empowered by this. ([#11001](https://github.com/apache/mxnet/pull/11001), [#11429](https://github.com/apache/mxnet/pull/11429))
+- Gluon HybridBlock now supports hybridization with sparse operators ([#11306](https://github.com/apache/mxnet/pull/11306)).
 
 ### New Features - Control flow operators (experimental)
 - This is the first step towards optimizing dynamic neural networks with variable computation graphs, by adding symbolic and imperative control flow operators. [Proposal](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators).
-- New operators introduced: foreach([#11531](https://github.com/apache/incubator-mxnet/pull/11531)), while_loop([#11566](https://github.com/apache/incubator-mxnet/pull/11566)), cond([#11760](https://github.com/apache/incubator-mxnet/pull/11760)).
+- New operators introduced: foreach([#11531](https://github.com/apache/mxnet/pull/11531)), while_loop([#11566](https://github.com/apache/mxnet/pull/11566)), cond([#11760](https://github.com/apache/mxnet/pull/11760)).
 
 ### New Features - Scala API Improvements (experimental)
-- Improvements to MXNet Scala API usability([#10660](https://github.com/apache/incubator-mxnet/pull/10660), [#10787](https://github.com/apache/incubator-mxnet/pull/10787), [#10991](https://github.com/apache/incubator-mxnet/pull/10991))
+- Improvements to MXNet Scala API usability([#10660](https://github.com/apache/mxnet/pull/10660), [#10787](https://github.com/apache/mxnet/pull/10787), [#10991](https://github.com/apache/mxnet/pull/10991))
 - Symbol.api and NDArray.api bring a new set of functions that have complete definitions for all arguments.
 - Please see this [Type safe API design document](https://cwiki.apache.org/confluence/display/MXNET/Scala+Type-safe+API+Design+Doc) for more details.
 
@@ -3125,21 +3125,21 @@ For more information and examples, see [full release notes](https://cwiki.apache
 - Unlike the default memory pool, which requires an exact size match to reuse released memory chunks, this new memory pool uses exponential-linear rounding so that similarly sized memory chunks can all be reused; this is more suitable for workloads with dynamic-shape inputs and outputs. Set environment variable `MXNET_GPU_MEM_POOL_TYPE=Round` to enable.
 
 ### New Features - Topology-aware AllReduce (experimental)
-- This feature uses trees to perform the Reduce and Broadcast. It uses the idea of minimum spanning trees to do a binary tree Reduce communication pattern to improve it. This topology-aware approach reduces the existing limitations for single machine communication shown by methods like parameter server and NCCL ring reduction. It is an experimental feature ([#11591](https://github.com/apache/incubator-mxnet/pull/11591)).
+- This feature uses trees to perform the Reduce and Broadcast. It uses the idea of minimum spanning trees to do a binary tree Reduce communication pattern to improve it. This topology-aware approach reduces the existing limitations for single machine communication shown by methods like parameter server and NCCL ring reduction. It is an experimental feature ([#11591](https://github.com/apache/mxnet/pull/11591)).
 - Paper followed for implementation: [Optimal message scheduling for aggregation](https://www.sysml.cc/doc/178.pdf).
 - Set environment variable `MXNET_KVSTORE_USETREE=1` to enable.
 
 ### New Features - Export MXNet models to ONNX format (experimental)
-- With this feature, MXNet models can now be exported to ONNX format ([#11213](https://github.com/apache/incubator-mxnet/pull/11213)). Currently, MXNet supports ONNX v1.2.1. [API documentation](https://mxnet.apache.org/api/python/contrib/onnx.html).
+- With this feature, MXNet models can now be exported to ONNX format ([#11213](https://github.com/apache/mxnet/pull/11213)). Currently, MXNet supports ONNX v1.2.1. [API documentation](https://mxnet.apache.org/api/python/contrib/onnx.html).
 - Check out this [tutorial](https://mxnet.apache.org/tutorials/onnx/export_mxnet_to_onnx.html), which shows how to use the MXNet-to-ONNX exporter APIs. Models are exported as ONNX protobuf so that they can be imported into other frameworks for inference.
 
 ### New Features - TensorRT Runtime Integration (experimental)
 - [TensorRT](https://developer.nvidia.com/tensorrt) provides significant acceleration of model inference on NVIDIA GPUs compared to running the full graph in MxNet using unfused GPU operators. In addition to faster fp32 inference, TensorRT optimizes fp16 inference, and is capable of int8 inference (provided the quantization steps are performed). Besides increasing throughput, TensorRT significantly reduces inference latency, especially for small batches.
-- This feature in MXNet now introduces runtime integration of TensorRT into MXNet, in order to accelerate inference.([#11325](https://github.com/apache/incubator-mxnet/pull/11325))
+- This feature in MXNet now introduces runtime integration of TensorRT into MXNet, in order to accelerate inference.([#11325](https://github.com/apache/mxnet/pull/11325))
 - Currently, it's in the contrib package.
 
 ### New Examples - Scala
-- Refurbished Scala Examples with improved API, documentation and CI test coverage. ([#11753](https://github.com/apache/incubator-mxnet/pull/11753), [#11621](https://github.com/apache/incubator-mxnet/pull/11621))
+- Refurbished Scala Examples with improved API, documentation and CI test coverage. ([#11753](https://github.com/apache/mxnet/pull/11753), [#11621](https://github.com/apache/mxnet/pull/11621))
 - Now all Scala examples have:
   - No bugs block in the middle
   - Good Readme to start with
@@ -3147,11 +3147,11 @@ For more information and examples, see [full release notes](https://cwiki.apache
   - monitored in CI in each PR runs
 
 ### Maintenance - Flaky Tests improvement effort
-- Fixed 130 flaky tests on CI. Tracked progress of the project [here](https://github.com/apache/incubator-mxnet/projects/9).
+- Fixed 130 flaky tests on CI. Tracked progress of the project [here](https://github.com/apache/mxnet/projects/9).
 - Add flakiness checker (#11572)
 
 ### Maintenance - MXNet Model Backwards Compatibility Checker
-- This tool ([#11626](https://github.com/apache/incubator-mxnet/pull/11626)) helps in ensuring consistency and sanity while performing inference on the latest version of MXNet using models trained on older versions of MXNet.
+- This tool ([#11626](https://github.com/apache/mxnet/pull/11626)) helps in ensuring consistency and sanity while performing inference on the latest version of MXNet using models trained on older versions of MXNet.
 - This tool will help in detecting issues earlier in the development cycle which break backwards compatibility on MXNet and would contribute towards ensuring a healthy and stable release of MXNet.
 
 ### Maintenance - Integrated testing for "the Straight Dope"
@@ -3192,7 +3192,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
 - Improve performance of broadcast ops backward pass (#11252)
 - Improved numerical stability as a result of using stable L2 norm (#11573)
 - Accelerate the performance of topk for GPU and CPU side (#12085 #10997 ; This changes the behavior of topk when nan values occur in the input)
-- Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on CPU ([#11113](https://github.com/apache/incubator-mxnet/pull/11113))
+- Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on CPU ([#11113](https://github.com/apache/mxnet/pull/11113))
 - Performance improvement for Batch Dot on CPU from mshadow ([mshadow PR#342](https://github.com/dmlc/mshadow/pull/342))
 
 ### API Changes
@@ -3276,10 +3276,10 @@ For more information and examples, see [full release notes](https://cwiki.apache
 - Implemented new [Scala Inference APIs](https://cwiki.apache.org/confluence/display/MXNET/MXNetScalaInferenceAPI) which offer an easy-to-use, Scala Idiomatic and thread-safe high level APIs for performing predictions with deep learning models trained with MXNet (#9678). Implemented a new ImageClassifier class which provides APIs for classification tasks on a Java BufferedImage using a pre-trained model you provide (#10054). Implemented a new ObjectDetector class which provides APIs for  [...]
 
 ### New Features - Added a Module to Import ONNX models into MXNet
-- Implemented a new ONNX module in MXNet which offers an easy to use API to import ONNX models into MXNet's symbolic interface (#9963). Checkout the [example](https://github.com/apache/incubator-mxnet/blob/master/example/onnx/super_resolution.py) on how you could use this [API](https://cwiki.apache.org/confluence/display/MXNET/ONNX-MXNet+API+Design) to import ONNX models and perform inference on MXNet. Currently, the ONNX-MXNet Import module is still experimental. Please use it with caution.
+- Implemented a new ONNX module in MXNet which offers an easy to use API to import ONNX models into MXNet's symbolic interface (#9963). Checkout the [example](https://github.com/apache/mxnet/blob/master/example/onnx/super_resolution.py) on how you could use this [API](https://cwiki.apache.org/confluence/display/MXNET/ONNX-MXNet+API+Design) to import ONNX models and perform inference on MXNet. Currently, the ONNX-MXNet Import module is still experimental. Please use it with caution.
 
 ### New Features - Added Support for Model Quantization with Calibration
-- Implemented model quantization by adopting the [TensorFlow approach](https://www.tensorflow.org/performance/quantization) with calibration by borrowing the idea from Nvidia's [TensorRT](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf). The focus of this work is on keeping quantized models (ConvNets for now) inference accuracy loss under control when compared to their corresponding FP32 models. Please see the [example](https://github.com/ap [...]
+- Implemented model quantization by adopting the [TensorFlow approach](https://www.tensorflow.org/performance/quantization) with calibration by borrowing the idea from Nvidia's [TensorRT](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf). The focus of this work is on keeping quantized models (ConvNets for now) inference accuracy loss under control when compared to their corresponding FP32 models. Please see the [example](https://github.com/ap [...]
 
 ### New Features - MKL-DNN Integration
 - MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversion to get the maximal performance from MKL-DNN. Currently, the MKL-DNN integration is still experimental. Please use it with caution.
@@ -3298,7 +3298,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
 - Changed API for the Pooling operator from `mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)` to  `mxnet.symbol.Pooling(data=None,  kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)`. This is a breaking change when kwargs are not provided [...]
 
 ### Bug Fixes
-- Fixed tests - Flakiness/Bugs - (#9598, #9951, #10259, #10197, #10136, #10422). Please see: [Tests Improvement Project](https://github.com/apache/incubator-mxnet/projects/9)
+- Fixed tests - Flakiness/Bugs - (#9598, #9951, #10259, #10197, #10136, #10422). Please see: [Tests Improvement Project](https://github.com/apache/mxnet/projects/9)
 - Fixed `cudnn_conv` and `cudnn_deconv` deadlock (#10392).
 - Fixed a race condition in `io.LibSVMIter` when batch size is large (#10124).
 - Fixed a race condition in converting data layouts in MKL-DNN (#9862).
@@ -3395,7 +3395,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
 - [DevGuide.md](https://github.com/google/googletest/blob/ec44c6c1675c25b9827aacd08c02433cccde7780/googlemock/docs/DevGuide.md) in the 3rdparty submodule googletest licensed under CC-BY-2.5.
 - Incompatibility in the behavior of MXNet Convolution operator for certain unsupported use cases: Raises an exception when MKLDNN is enabled, fails silently when it is not.
 - MXNet convolution generates wrong results for 1-element strides (#10689).
-- [Tutorial on fine-tuning an ONNX model](https://github.com/apache/incubator-mxnet/blob/v1.2.0/docs/tutorials/onnx/fine_tuning_gluon.md) fails when using cpu context.
+- [Tutorial on fine-tuning an ONNX model](https://github.com/apache/mxnet/blob/v1.2.0/docs/tutorials/onnx/fine_tuning_gluon.md) fails when using cpu context.
 - CMake build ignores the `USE_MKLDNN` flag and doesn't build with MKLDNN support even with `-DUSE_MKLDNN=1`. To work around the issue, please see #10801.
 - Linking the dmlc-core library fails with CMake build when building with `USE_OPENMP=OFF`. To work around the issue, please use the updated CMakeLists in dmlc-core unit tests directory: https://github.com/dmlc/dmlc-core/pull/396. You can also work around the issue by using make instead of cmake when building with `USE_OPENMP=OFF`.
 
@@ -3468,7 +3468,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
   - Added Lambda block for wrapping a user defined function as a block.
   - Generalized `gluon.data.ArrayDataset` to support arbitrary number of arrays.
 ### New Features - ARM / Raspberry Pi support [Experimental]
-  - MXNet now compiles and runs on ARMv6, ARMv7, ARMv64 including Raspberry Pi devices. See https://github.com/apache/incubator-mxnet/tree/master/docker_multiarch for more information.
+  - MXNet now compiles and runs on ARMv6, ARMv7, ARMv64 including Raspberry Pi devices. See https://github.com/apache/mxnet/tree/master/docker_multiarch for more information.
 ### New Features - NVIDIA Jetson support [Experimental]
   - MXNet now compiles and runs on NVIDIA Jetson TX2 boards with GPU acceleration.
   - You can install the python MXNet package on a Jetson board by running - `$ pip install mxnet-jetson-tx2`.
diff --git a/NOTICE b/NOTICE
index dca346b613..b5a1d6ab3a 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,4 +1,4 @@
-    Apache MXNET (incubating)
+    Apache MXNET
     Copyright 2017-2022 The Apache Software Foundation
 
     This product includes software developed at
diff --git a/R-package/DESCRIPTION b/R-package/DESCRIPTION
index 40723c880d..2cc6b6eee0 100644
--- a/R-package/DESCRIPTION
+++ b/R-package/DESCRIPTION
@@ -5,13 +5,13 @@ Version: 1.9.1
 Date: 2017-06-27
 Author: Tianqi Chen, Qiang Kou, Tong He, Anirudh Acharya <https://github.com/anirudhacharya>
 Maintainer: Qiang Kou <qk...@qkou.info>
-Repository: apache/incubator-mxnet
+Repository: apache/mxnet
 Description: MXNet is a deep learning framework designed for both efficiency
     and flexibility. It allows you to mix the flavours of deep learning programs
     together to maximize the efficiency and your productivity.
 License: Apache License (== 2.0)
-URL: https://github.com/apache/incubator-mxnet/tree/master/R-package
-BugReports: https://github.com/apache/incubator-mxnet/issues
+URL: https://github.com/apache/mxnet/tree/master/R-package
+BugReports: https://github.com/apache/mxnet/issues
 Imports:
     methods,
     Rcpp (>= 0.12.1),
diff --git a/README.md b/README.md
index 6bee1874a8..ea9f1fba5e 100644
--- a/README.md
+++ b/README.md
@@ -21,9 +21,9 @@
 
 [![banner](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/banner.png)](https://mxnet.apache.org)
 
-Apache MXNet (incubating) for Deep Learning
+Apache MXNet for Deep Learning
 ===========================================
-[![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/apache/incubator-mxnet)](https://github.com/apache/incubator-mxnet/releases) [![GitHub stars](https://img.shields.io/github/stars/apache/incubator-mxnet)](https://github.com/apache/incubator-mxnet/stargazers) [![GitHub forks](https://img.shields.io/github/forks/apache/incubator-mxnet)](https://github.com/apache/incubator-mxnet/network) [![GitHub contributors](https://img.shields.io/github/contributors-anon/apache/ [...]
+[![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/apache/mxnet)](https://github.com/apache/mxnet/releases) [![GitHub stars](https://img.shields.io/github/stars/apache/mxnet)](https://github.com/apache/mxnet/stargazers) [![GitHub forks](https://img.shields.io/github/forks/apache/mxnet)](https://github.com/apache/mxnet/network) [![GitHub contributors](https://img.shields.io/github/contributors-anon/apache/mxnet)](https://github.com/apache/mxnet/graphs/contributors) [...]
 
 Apache MXNet is a deep learning framework designed for both *efficiency* and *flexibility*.
 It allows you to ***mix*** [symbolic and imperative programming](https://mxnet.apache.org/api/architecture/program_model)
@@ -36,12 +36,12 @@ Apache MXNet is more than a deep learning project. It is a [community](https://m
 on a mission of democratizing AI. It is a collection of [blue prints and guidelines](https://mxnet.apache.org/api/architecture/overview)
 for building deep learning systems, and interesting insights of DL systems for hackers.
 
-Licensed under an [Apache-2.0](https://github.com/apache/incubator-mxnet/blob/master/LICENSE) license.
+Licensed under an [Apache-2.0](https://github.com/apache/mxnet/blob/master/LICENSE) license.
 
 | Branch  | Build Status  |
 |:-------:|:-------------:|
-| [master](https://github.com/apache/incubator-mxnet/tree/master) | [![CentOS CPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/master/badge/icon?subject=build%20centos%20cpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/master/) [![CentOS GPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-gpu/job/master/badge/icon?subject=build%20centos%20gpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/cent [...]
-| [v1.x](https://github.com/apache/incubator-mxnet/tree/v1.x) | [![CentOS CPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/v1.x/badge/icon?subject=build%20centos%20cpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/v1.x/) [![CentOS GPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-gpu/job/v1.x/badge/icon?subject=build%20centos%20gpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-gpu/job [...]
+| [master](https://github.com/apache/mxnet/tree/master) | [![CentOS CPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/master/badge/icon?subject=build%20centos%20cpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/master/) [![CentOS GPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-gpu/job/master/badge/icon?subject=build%20centos%20gpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-gpu/job [...]
+| [v1.x](https://github.com/apache/mxnet/tree/v1.x) | [![CentOS CPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/v1.x/badge/icon?subject=build%20centos%20cpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-cpu/job/v1.x/) [![CentOS GPU Build Status](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-gpu/job/v1.x/badge/icon?subject=build%20centos%20gpu)](http://jenkins.mxnet-ci.com/job/mxnet-validation/job/centos-gpu/job/v1.x/) [! [...]
 
 Features
 --------
@@ -59,28 +59,28 @@ Contents
 * [Tutorials](https://mxnet.apache.org/api/python/docs/tutorials/)
 * [Ecosystem](https://mxnet.apache.org/ecosystem)
 * [API Documentation](https://mxnet.apache.org/api)
-* [Examples](https://github.com/apache/incubator-mxnet-examples)
+* [Examples](https://github.com/apache/mxnet-examples)
 * [Stay Connected](#stay-connected)
 * [Social Media](#social-media)
 
 What's New
 ----------
-* [1.9.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.9.1) - MXNet 1.9.1 Release.
-* [1.8.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.8.0) - MXNet 1.8.0 Release.
-* [1.7.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.7.0) - MXNet 1.7.0 Release.
-* [1.6.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.6.0) - MXNet 1.6.0 Release.
-* [1.5.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.5.1) - MXNet 1.5.1 Patch Release.
-* [1.5.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.5.0) - MXNet 1.5.0 Release.
-* [1.4.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.4.1) - MXNet 1.4.1 Patch Release.
-* [1.4.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.4.0) - MXNet 1.4.0 Release.
-* [1.3.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.3.1) - MXNet 1.3.1 Patch Release.
-* [1.3.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.3.0) - MXNet 1.3.0 Release.
-* [1.2.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.2.0) - MXNet 1.2.0 Release.
-* [1.1.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.1.0) - MXNet 1.1.0 Release.
-* [1.0.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.0.0) - MXNet 1.0.0 Release.
-* [0.12.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.1) - MXNet 0.12.1 Patch Release.
-* [0.12.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.0) - MXNet 0.12.0 Release.
-* [0.11.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.11.0) - MXNet 0.11.0 Release.
+* [1.9.1 Release](https://github.com/apache/mxnet/releases/tag/1.9.1) - MXNet 1.9.1 Release.
+* [1.8.0 Release](https://github.com/apache/mxnet/releases/tag/1.8.0) - MXNet 1.8.0 Release.
+* [1.7.0 Release](https://github.com/apache/mxnet/releases/tag/1.7.0) - MXNet 1.7.0 Release.
+* [1.6.0 Release](https://github.com/apache/mxnet/releases/tag/1.6.0) - MXNet 1.6.0 Release.
+* [1.5.1 Release](https://github.com/apache/mxnet/releases/tag/1.5.1) - MXNet 1.5.1 Patch Release.
+* [1.5.0 Release](https://github.com/apache/mxnet/releases/tag/1.5.0) - MXNet 1.5.0 Release.
+* [1.4.1 Release](https://github.com/apache/mxnet/releases/tag/1.4.1) - MXNet 1.4.1 Patch Release.
+* [1.4.0 Release](https://github.com/apache/mxnet/releases/tag/1.4.0) - MXNet 1.4.0 Release.
+* [1.3.1 Release](https://github.com/apache/mxnet/releases/tag/1.3.1) - MXNet 1.3.1 Patch Release.
+* [1.3.0 Release](https://github.com/apache/mxnet/releases/tag/1.3.0) - MXNet 1.3.0 Release.
+* [1.2.0 Release](https://github.com/apache/mxnet/releases/tag/1.2.0) - MXNet 1.2.0 Release.
+* [1.1.0 Release](https://github.com/apache/mxnet/releases/tag/1.1.0) - MXNet 1.1.0 Release.
+* [1.0.0 Release](https://github.com/apache/mxnet/releases/tag/1.0.0) - MXNet 1.0.0 Release.
+* [0.12.1 Release](https://github.com/apache/mxnet/releases/tag/0.12.1) - MXNet 0.12.1 Patch Release.
+* [0.12.0 Release](https://github.com/apache/mxnet/releases/tag/0.12.0) - MXNet 0.12.0 Release.
+* [0.11.0 Release](https://github.com/apache/mxnet/releases/tag/0.11.0) - MXNet 0.11.0 Release.
 * [Apache Incubator](http://incubator.apache.org/projects/mxnet.html) - We are now an Apache Incubator project.
 * [0.10.0 Release](https://github.com/apache/mxnet/releases/tag/v0.10.0) - MXNet 0.10.0 Release.
 * [0.9.3 Release](./docs/architecture/release_note_0_9.md) - First 0.9 official release.
@@ -101,7 +101,7 @@ Stay Connected
 
 | Channel | Purpose |
 |---|---|
-| [Follow MXNet Development on Github](https://github.com/apache/incubator-mxnet/issues) | See what's going on in the MXNet project. |
+| [Follow MXNet Development on Github](https://github.com/apache/mxnet/issues) | See what's going on in the MXNet project. |
 | [MXNet Confluence Wiki for Developers](https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home) <i class="fas fa-external-link-alt"> | MXNet developer wiki for information related to project development, maintained by contributors and developers. To request write access, send an email to [send request to the dev list](mailto:dev@mxnet.apache.org?subject=Requesting%20CWiki%20write%20access) <i class="far fa-envelope"></i>. |
 | [dev@mxnet.apache.org mailing list](https://lists.apache.org/list.html?dev@mxnet.apache.org) | The "dev list". Discussions about the development of MXNet. To subscribe, send an email to [dev-subscribe@mxnet.apache.org](mailto:dev-subscribe@mxnet.apache.org) <i class="far fa-envelope"></i>. |
 | [discuss.mxnet.io](https://discuss.mxnet.io) <i class="fas fa-external-link-alt"></i> | Asking & answering MXNet usage questions. |
diff --git a/benchmark/opperf/README.md b/benchmark/opperf/README.md
index 241734fdd6..63a04704bf 100644
--- a/benchmark/opperf/README.md
+++ b/benchmark/opperf/README.md
@@ -53,7 +53,7 @@ Note:
 To install MXNet, refer to the [Installing MXNet page](https://mxnet.apache.org/versions/master/install/index.html)
 
 ```
-export PYTHONPATH=$PYTHONPATH:/path/to/incubator-mxnet/
+export PYTHONPATH=$PYTHONPATH:/path/to/mxnet/
 ```
 
 ## Usecase 1 - Run benchmarks for all the operators
@@ -61,7 +61,7 @@ export PYTHONPATH=$PYTHONPATH:/path/to/incubator-mxnet/
 The command below runs all the MXNet operator (NDArray) benchmarks with default inputs and saves the final result as JSON in the given file.
 
 ```
-python incubator-mxnet/benchmark/opperf/opperf.py --output-format json --output-file mxnet_operator_benchmark_results.json
+python mxnet/benchmark/opperf/opperf.py --output-format json --output-file mxnet_operator_benchmark_results.json
 ```
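
To inspect the saved results afterwards, a small sketch, assuming the output JSON maps operator names to their measured statistics:

```python
import json

# Load the results file written by the opperf.py command above.
with open("mxnet_operator_benchmark_results.json") as f:
    results = json.load(f)

# Print the first few operator entries.
for op_name in list(results)[:5]:
    print(op_name, results[op_name])
```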
 
 **Other Supported Options:**
@@ -177,7 +177,7 @@ See `utils/op_registry_utils.py` for more details.
 Optionally, you could use the python time package as the profiler engine to calibrate the runtime of each operator.
 To use the python timer for all operators, pass the argument --profiler 'python':
 ```
-python incubator-mxnet/benchmark/opperf/opperf.py --profiler='python'
+python mxnet/benchmark/opperf/opperf.py --profiler='python'
 ```
 
 To use the python timer for a specific operator, pass the `profiler` argument to the run_performance_test method, as sketched below:
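
The README's snippet for this call is cut off by the hunk boundary above; as a hedged sketch only, based on the run_performance_test signature visible later in this diff (the `inputs` keys for `add` are an assumption):

```python
import mxnet as mx
from benchmark.opperf.utils.benchmark_utils import run_performance_test

# Benchmark a single operator with the python timer (profiler='python')
# instead of the native MXNet profiler.
results = run_performance_test([mx.nd.add],
                               inputs=[{"lhs": (1024, 1024), "rhs": (1024, 1024)}],
                               run_backward=True,
                               dtype='float32',
                               ctx=mx.cpu(),
                               profiler='python')
```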
diff --git a/benchmark/opperf/nd_operations/misc_operators.py b/benchmark/opperf/nd_operations/misc_operators.py
index 5a0efc57de..fc73516060 100644
--- a/benchmark/opperf/nd_operations/misc_operators.py
+++ b/benchmark/opperf/nd_operations/misc_operators.py
@@ -86,7 +86,7 @@ def run_mx_misc_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='na
                                            warmup=warmup,
                                            runs=runs)
     # There are currently issues with UpSampling with bilinear interpolation.
-    # track issue here: https://github.com/apache/incubator-mxnet/issues/9138
+    # track issue here: https://github.com/apache/mxnet/issues/9138
     upsampling_benchmark = run_performance_test([getattr(MX_OP_MODULE, "UpSampling")],
                                                 run_backward=True,
                                                 dtype=dtype,
diff --git a/benchmark/opperf/utils/benchmark_utils.py b/benchmark/opperf/utils/benchmark_utils.py
index f6cdfe0042..15960cc9d4 100644
--- a/benchmark/opperf/utils/benchmark_utils.py
+++ b/benchmark/opperf/utils/benchmark_utils.py
@@ -164,7 +164,7 @@ def run_performance_test(ops, inputs, run_backward=True,
     List of dictionaries of benchmark results; key -> name of the operator, value -> the benchmark results.
 
     Note: when run_performance_test is called on the nd.Embedding operator with run_backward=True, an error will
-    be thrown. Track issue here: https://github.com/apache/incubator-mxnet/issues/11314
+    be thrown. Track issue here: https://github.com/apache/mxnet/issues/11314
     """
     kwargs_list = _prepare_op_inputs(inputs, run_backward, dtype, ctx)
 
@@ -183,11 +183,11 @@ def run_performance_test(ops, inputs, run_backward=True,
 
 def run_op_benchmarks(ops, dtype, ctx, profiler, warmup, runs):
     # Running SoftmaxOutput backwards on GPU results in errors
-    # track issue here: https://github.com/apache/incubator-mxnet/issues/880
+    # track issue here: https://github.com/apache/mxnet/issues/880
     gpu_backwards_disabled_ops = ['SoftmaxOutput']
 
     # Running im2col either forwards or backwards on GPU results in errors
-    # track issue here: https://github.com/apache/incubator-mxnet/issues/17493
+    # track issue here: https://github.com/apache/mxnet/issues/17493
     gpu_disabled_ops = ['im2col']
 
     # For each operator, run benchmarks
diff --git a/benchmark/opperf/utils/op_registry_utils.py b/benchmark/opperf/utils/op_registry_utils.py
index 99678b8d31..0a01e178d6 100644
--- a/benchmark/opperf/utils/op_registry_utils.py
+++ b/benchmark/opperf/utils/op_registry_utils.py
@@ -457,9 +457,9 @@ def get_all_indexing_routines():
     """Gets all indexing routines registered with MXNet.
 
     # @ChaiBapchya unravel_index errors out on certain inputs
-    # tracked here https://github.com/apache/incubator-mxnet/issues/16771
+    # tracked here https://github.com/apache/mxnet/issues/16771
     # @ChaiBapchya scatter_nd errors with core dump
-    # tracked here https://github.com/apache/incubator-mxnet/issues/17480
+    # tracked here https://github.com/apache/mxnet/issues/17480
 
     Returns
     -------
diff --git a/cd/README.md b/cd/README.md
index 8735953d64..75ceea4449 100644
--- a/cd/README.md
+++ b/cd/README.md
@@ -37,7 +37,7 @@ Currently, below variants are supported. All of these variants except native hav
 * *cu110*: CUDA 11.0
 * *cu112*: CUDA 11.2
 
-*For more on variants, see [here](https://github.com/apache/incubator-mxnet/issues/8671)*
+*For more on variants, see [here](https://github.com/apache/mxnet/issues/8671)*
 
 ## Framework Components
 
diff --git a/cd/python/pypi/README.md b/cd/python/pypi/README.md
index 4a17a00923..7c66c6c7a1 100644
--- a/cd/python/pypi/README.md
+++ b/cd/python/pypi/README.md
@@ -21,7 +21,7 @@ The Jenkins pipelines for continuous delivery of the PyPI MXNet packages.
 The pipelines for each variant are run, and fail, independently. Each depends
 on a successful build of the statically linked libmxnet library.
 
-The pipeline relies on the scripts and resources located in [tools/pip](https://github.com/apache/incubator-mxnet/tree/master/tools/pip)
+The pipeline relies on the scripts and resources located in [tools/pip](https://github.com/apache/mxnet/tree/master/tools/pip)
 to build the PyPI packages.
 
 ## Credentials
diff --git a/cd/python/pypi/pypi_package.sh b/cd/python/pypi/pypi_package.sh
index d967c300e2..78a6032de5 100755
--- a/cd/python/pypi/pypi_package.sh
+++ b/cd/python/pypi/pypi_package.sh
@@ -21,7 +21,7 @@ set -ex
 # variant = cpu, native, cu100, cu101, cu102, cu110, cu112 etc.
 export mxnet_variant=${1:?"Please specify the mxnet variant"}
 
-# Due to this PR: https://github.com/apache/incubator-mxnet/pull/14899
+# Due to this PR: https://github.com/apache/mxnet/pull/14899
 # The setup.py expects that mkldnn_version.h be present in
 # mxnet-build/3rdparty/mkldnn/build/install/include
 # The artifact repository stores this file in the dependencies
diff --git a/cd/utils/artifact_repository.md b/cd/utils/artifact_repository.md
index 9de806eb6e..07e7cfdc9d 100644
--- a/cd/utils/artifact_repository.md
+++ b/cd/utils/artifact_repository.md
@@ -23,7 +23,7 @@ An MXNet artifact is defined as the following set of files:
 
 * The compiled libmxnet.so
 * License files for dependencies that require their licenses to be shipped with the binary
-* Dependencies that should be shipped together with the binary. For instance, for packaging the python wheel files, some dependencies that cannot be statically linked to the library need to also be included, see here (https://github.com/apache/incubator-mxnet/blob/master/tools/pip/setup.py#L142).
+* Dependencies that should be shipped together with the binary. For instance, when packaging the python wheel files, some dependencies that cannot be statically linked to the library must also be included; see here (https://github.com/apache/mxnet/blob/master/tools/pip/setup.py#L142).
 
 The artifact_repository.py script automates the upload and download of the specified files with the appropriate S3 object keys by taking explicitly set, or automatically derived, values for the different characteristics of the artifact.
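
As a purely hypothetical sketch of that idea (the real derivation logic lives in artifact_repository.py and may differ), composing an object key from those characteristics could look like:

```python
# Hypothetical illustration only; names and key layout are assumptions.
def derive_s3_key(commit_id, variant, operating_system, libtype="static"):
    """Compose an S3 object key from an artifact's characteristics."""
    return "/".join([commit_id, libtype, operating_system, variant]) + "/"

print(derive_s3_key("abcdef0", "cu112", "ubuntu18.04"))
# -> abcdef0/static/ubuntu18.04/cu112/
```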
 
diff --git a/cd/utils/requirements.txt b/cd/utils/requirements.txt
index 4ecbff9c00..0aaf101ca4 100644
--- a/cd/utils/requirements.txt
+++ b/cd/utils/requirements.txt
@@ -1,2 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
 boto3==1.9.114
 PyYAML==5.1
diff --git a/ci/docker/install/requirements b/ci/docker/install/requirements
index 8e27a1f31c..482846cb2f 100644
--- a/ci/docker/install/requirements
+++ b/ci/docker/install/requirements
@@ -31,7 +31,7 @@ numpy>=1.16.0,<1.20.0
 pylint==2.3.1  # pylint and astroid need to be aligned
 astroid==2.3.3  # pylint and astroid need to be aligned
 requests<2.19.0,>=2.18.4
-scipy<1.7.0 # Restrict scipy version due to https://github.com/apache/incubator-mxnet/issues/20389
+scipy<1.7.0 # Restrict scipy version due to https://github.com/apache/mxnet/issues/20389
 setuptools
 coverage
 packaging
diff --git a/ci/docker/install/ubuntu_tutorials.sh b/ci/docker/install/ubuntu_tutorials.sh
index 4757c69534..f92195ff1a 100755
--- a/ci/docker/install/ubuntu_tutorials.sh
+++ b/ci/docker/install/ubuntu_tutorials.sh
@@ -24,5 +24,9 @@ set -ex
 apt-get update || true
 apt-get install graphviz
 
-# sckit-learn past version 0.20 does not support python version 2 and 3.4
-pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz==0.8.4 tqdm mxboard scipy gluoncv
+pip3 install graphviz==0.8.4 tqdm mxboard
+pip3 install jupyter
+pip3 install matplotlib Pillow opencv-python
+pip3 install scipy gluoncv
+pip3 install scikit-learn
+
diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh
index 32f614cf33..1d47550a8b 100755
--- a/ci/docker/runtime_functions.sh
+++ b/ci/docker/runtime_functions.sh
@@ -151,7 +151,6 @@ gather_licenses() {
     cp tools/dependencies/LICENSE.binary.dependencies licenses/
     cp NOTICE licenses/
     cp LICENSE licenses/
-    cp DISCLAIMER licenses/
 }
 
 build_ubuntu_cpu_release() {
@@ -989,7 +988,7 @@ cd_unittest_ubuntu() {
     $nose_cmd $NOSE_TIMER_ARGUMENTS --verbose tests/python/unittest
     $nose_cmd $NOSE_TIMER_ARGUMENTS --verbose tests/python/quantization
 
-    # https://github.com/apache/incubator-mxnet/issues/11801
+    # https://github.com/apache/mxnet/issues/11801
     # if [[ ${mxnet_variant} = "cpu" ]] || [[ ${mxnet_variant} = "mkl" ]]; then
         # integrationtest_ubuntu_cpu_dist_kvstore
     # fi
@@ -1004,7 +1003,7 @@ cd_unittest_ubuntu() {
 
     if [[ ${mxnet_variant} = *mkl ]]; then
         # skipping python 2 testing
-        # https://github.com/apache/incubator-mxnet/issues/14675
+        # https://github.com/apache/mxnet/issues/14675
         if [[ ${python_cmd} = "python3" ]]; then
             $nose_cmd $NOSE_TIMER_ARGUMENTS --verbose tests/python/mkl
         fi
@@ -1250,7 +1249,7 @@ integrationtest_ubuntu_cpu_onnx() {
     COV_ARG="--cov=./ --cov-report=xml --cov-append"
     pytest $COV_ARG --verbose tests/python-pytest/onnx/test_operators.py
     pytest $COV_ARG --verbose tests/python-pytest/onnx/mxnet_export_test.py
-    # Skip this as https://github.com/apache/incubator-mxnet/pull/19914 breaks import
+    # Skip this as https://github.com/apache/mxnet/pull/19914 breaks import
     #pytest $COV_ARG --verbose tests/python-pytest/onnx/test_models.py
     #pytest $COV_ARG --verbose tests/python-pytest/onnx/test_node.py
     pytest $COV_ARG -v -m integration tests/python-pytest/onnx/test_onnxruntime_cv.py
@@ -1961,7 +1960,7 @@ create_repo() {
    git clone $mxnet_url $repo_folder --recursive
    echo "Adding MXNet upstream repo..."
    cd $repo_folder
-   git remote add upstream https://github.com/apache/incubator-mxnet
+   git remote add upstream https://github.com/apache/mxnet
    cd ..
 }
 
diff --git a/ci/jenkins/Jenkins_steps.groovy b/ci/jenkins/Jenkins_steps.groovy
index c80b71d2b2..3e68c5bb51 100644
--- a/ci/jenkins/Jenkins_steps.groovy
+++ b/ci/jenkins/Jenkins_steps.groovy
@@ -1005,7 +1005,7 @@ def test_unix_cpp_gpu() {
 
 def test_unix_cpp_mkldnn_gpu() {
     return ['Cpp: MKLDNN+GPU': {
-      node(NODE_LINUX_GPU) {
+      node(NODE_LINUX_GPU_G4) {
         ws('workspace/ut-cpp-mkldnn-gpu') {
           timeout(time: max_time, unit: 'MINUTES') {
             utils.unpack_and_init('cmake_mkldnn_gpu', mx_cmake_mkldnn_lib)
@@ -1143,7 +1143,7 @@ def test_centos7_python3_cpu() {
 
 def test_centos7_python3_gpu() {
     return ['Python3: CentOS 7 GPU': {
-      node(NODE_LINUX_GPU) {
+      node(NODE_LINUX_GPU_G4) {
         ws('workspace/build-centos7-gpu') {
           timeout(time: max_time, unit: 'MINUTES') {
             utils.unpack_and_init('centos7_gpu', mx_lib)
diff --git a/ci/jenkins/Jenkinsfile_centos_gpu b/ci/jenkins/Jenkinsfile_centos_gpu
index cad77a9a7d..0eb09f7efe 100644
--- a/ci/jenkins/Jenkinsfile_centos_gpu
+++ b/ci/jenkins/Jenkinsfile_centos_gpu
@@ -29,7 +29,7 @@ node('utility') {
   utils = load('ci/Jenkinsfile_utils.groovy')
   custom_steps = load('ci/jenkins/Jenkins_steps.groovy')
 }
-utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3')
+utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3', linux_gpu_g4: 'mxnetlinux-gpu-g4')
 
 utils.main_wrapper(
 core_logic: {
diff --git a/ci/jenkins/Jenkinsfile_unix_cpu b/ci/jenkins/Jenkinsfile_unix_cpu
index 2fa66c1bbc..1c726bbd39 100644
--- a/ci/jenkins/Jenkinsfile_unix_cpu
+++ b/ci/jenkins/Jenkinsfile_unix_cpu
@@ -63,8 +63,8 @@ core_logic: {
     custom_steps.test_static_python_cpu(),
     custom_steps.test_static_python_cpu_cmake(),
     /*  Disabled due to master build failure:
-     *  http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/1221/pipeline/
-     *  https://github.com/apache/incubator-mxnet/issues/11801
+     *  http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet/detail/master/1221/pipeline/
+     *  https://github.com/apache/mxnet/issues/11801
     custom_steps.test_unix_distributed_kvstore_cpu()
     */
     custom_steps.test_unix_python3_cpu_no_tvm_op(),
diff --git a/ci/jenkins/Jenkinsfile_unix_gpu b/ci/jenkins/Jenkinsfile_unix_gpu
index 163a0a02e0..4be9f871d0 100644
--- a/ci/jenkins/Jenkinsfile_unix_gpu
+++ b/ci/jenkins/Jenkinsfile_unix_gpu
@@ -63,7 +63,7 @@ core_logic: {
     custom_steps.test_static_python_gpu_cmake(),
     custom_steps.test_unix_capi_cpp_package(),
 
-    // Disabled due to: https://github.com/apache/incubator-mxnet/issues/11407
+    // Disabled due to: https://github.com/apache/mxnet/issues/11407
     //custom_steps.test_unix_caffe_gpu()
   ]) 
 }
diff --git a/ci/publish/website/deploy.sh b/ci/publish/website/deploy.sh
index 3309d852f7..cdb79f0e01 100644
--- a/ci/publish/website/deploy.sh
+++ b/ci/publish/website/deploy.sh
@@ -39,11 +39,11 @@ jekyll_fork=ThomasDelteil
 
 setup_mxnet_site_repo() {
    fork=$1
-   if [ ! -d "incubator-mxnet-site" ]; then
-     git clone https://$APACHE_USERNAME:$APACHE_PASSWORD@github.com/aaronmarkham/incubator-mxnet-site.git
+   if [ ! -d "mxnet-site" ]; then
+     git clone https://$APACHE_USERNAME:$APACHE_PASSWORD@github.com/aaronmarkham/mxnet-site.git
    fi
 
-   cd incubator-mxnet-site
+   cd mxnet-site
    git checkout asf-site
    rm -rf *
    git rm -r *
@@ -66,14 +66,14 @@ setup_jekyll_repo() $jekyll_fork
 
 # Copy in the main jekyll website artifacts
 web_artifacts=mxnet.io-v2/release
-web_dir=incubator-mxnet-site
+web_dir=mxnet-site
 cp -a $web_artifacts/* $web_dir
 
 
 fetch_artifacts() {
     api=$1
     artifacts=https://mxnet-public.s3.us-east-2.amazonaws.com/docs/$version/$api-artifacts.tgz
-    dir=incubator-mxnet-site/api/
+    dir=mxnet-site/api/
     wget -q $artifacts
     mkdir -p $dir
     tar xf $api-artifacts.tgz -C $dir
@@ -86,7 +86,7 @@ do
 done
 
 # Commit the updates
-cd incubator-mxnet-site
+cd mxnet-site
 pwd
 git branch
 git add .
diff --git a/ci/requirements.txt b/ci/requirements.txt
index 8f21ead27f..7adc32fd7a 100644
--- a/ci/requirements.txt
+++ b/ci/requirements.txt
@@ -1 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
 docker==3.5.0
diff --git a/contrib/clojure-package/README.md b/contrib/clojure-package/README.md
index e0bacbc3d9..f1a21f4561 100644
--- a/contrib/clojure-package/README.md
+++ b/contrib/clojure-package/README.md
@@ -44,7 +44,7 @@ By far the best way to get involved with this project is to install the Clojure
 There are two main ways of reaching out to other users and the package maintainers:
 
 - If you have a question or general feedback, or you encountered a problem but are not sure if it's a bug or a misunderstanding, then the *Apache Slack* (channels `#mxnet` and `#mxnet-scala`) is the best place to turn. To join, [ask for an invitation](https://mxnet.apache.org/community/contribute.html#slack) at `dev@mxnet.apache.org`.
-- If you found a bug, miss an important feature or want to give feedback directly relevant for development, please head over to the MXNet [GitHub issue page](https://github.com/apache/incubator-mxnet/issues) and create a new issue. If the issue is specific to the Clojure package, consider using a title starting with `[Clojure]` to make it easily discoverable among the many other, mostly generic issues.
+- If you have found a bug, are missing an important feature, or want to give feedback directly relevant to development, please head over to the MXNet [GitHub issue page](https://github.com/apache/mxnet/issues) and create a new issue. If the issue is specific to the Clojure package, consider using a title starting with `[Clojure]` to make it easily discoverable among the many other, mostly generic issues.
 
 Of course, contributions to code or documentation are also more than welcome! Please check out the [Clojure Package Contribution Needs](https://cwiki.apache.org/confluence/display/MXNET/Clojure+Package+Contribution+Needs) to get an idea about where and how to contribute code.
 
@@ -93,7 +93,7 @@ sudo apt-get update
 sudo apt install libopencv-imgcodecs3.4 libopenblas-base libatlas3-base libcurl3
 ```
 
-Note: `libcurl3` may conflict with other packages on your system. [Here](https://github.com/apache/incubator-mxnet/issues/12822) is a possible workaround.
+Note: `libcurl3` may conflict with other packages on your system. [Here](https://github.com/apache/mxnet/issues/12822) is a possible workaround.
 
 ##### Linux (Arch)
 
@@ -129,7 +129,7 @@ brew install opencv
 
 You can find the latest version on [Maven Central - clojure-mxnet latest](https://search.maven.org/search?q=clojure-mxnet)
 
-After making this change and running `lein deps`, you should be able to run example code like this [NDArray Tutorial](https://github.com/apache/incubator-mxnet/blob/master/contrib/clojure-package/examples/tutorial/src/tutorial/ndarray.clj).
+After making this change and running `lein deps`, you should be able to run example code like this [NDArray Tutorial](https://github.com/apache/mxnet/blob/master/contrib/clojure-package/examples/tutorial/src/tutorial/ndarray.clj).
 
 ### Option 2: Clojure package from Source, Scala Package from Jar
 
@@ -140,7 +140,7 @@ With this option, you will install a Git revision of the Clojure package source
 - Recursively clone the MXNet repository and check out the desired version (for example, 1.4.1). You should use the latest [version](https://search.maven.org/search?q=clojure-mxnet), and clone into the `~/mxnet` directory:
 
   ```bash
-  git clone --recursive https://github.com/apache/incubator-mxnet.git ~/mxnet
+  git clone --recursive https://github.com/apache/mxnet.git ~/mxnet
   cd ~/mxnet
   git tag --list  # Find the tag that matches the Scala package version
 
@@ -177,7 +177,7 @@ With this option, you will compile the core MXNet C++ package and jars for both
 The first step is to recursively clone the MXNet repository and check out the desired version (for example, 1.4.1). You should use the latest [version](https://search.maven.org/search?q=clojure-mxnet), and clone into the `~/mxnet` directory:
 
   ```bash
-  git clone --recursive https://github.com/apache/incubator-mxnet.git ~/mxnet
+  git clone --recursive https://github.com/apache/mxnet.git ~/mxnet
   cd ~/mxnet
   git checkout tags/version -b my_mxnet  # this is optional
   git submodule update --init --recursive
@@ -219,10 +219,10 @@ To run examples, you can now use `lein run` in any of the example directories, e
 There are Dockerfiles available as well.
 
 - [Community Provided by Magnet](https://hub.docker.com/u/magnetcoop/)
-- [MXNet CI](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/Dockerfile.build.ubuntu_cpu) and the install scripts
-  - [Ubuntu core](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_core.sh)
-  - [Ubuntu Scala](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_scala.sh)
-  - [Ubuntu Clojure](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_clojure.sh)
+- [MXNet CI](https://github.com/apache/mxnet/blob/master/ci/docker/Dockerfile.build.ubuntu_cpu) and the install scripts
+  - [Ubuntu core](https://github.com/apache/mxnet/blob/master/ci/docker/install/ubuntu_core.sh)
+  - [Ubuntu Scala](https://github.com/apache/mxnet/blob/master/ci/docker/install/ubuntu_scala.sh)
+  - [Ubuntu Clojure](https://github.com/apache/mxnet/blob/master/ci/docker/install/ubuntu_clojure.sh)
 
 ## Need Help?
 
@@ -230,7 +230,7 @@ If you are having trouble getting started or have a question, feel free to reach
 
 - Clojurian Slack #mxnet channel. To join, go to [http://clojurians.net/](http://clojurians.net/).
 - Apache Slack #mxnet and #mxnet-scala channel. To join this Slack, send an email to dev@mxnet.apache.org.
-- Create an Issue on [https://github.com/apache/incubator-mxnet/issues](https://github.com/apache/incubator-mxnet/issues).
+- Create an Issue on [https://github.com/apache/mxnet/issues](https://github.com/apache/mxnet/issues).
 
 
 ## Examples
diff --git a/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj b/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj
index 41a764f7af..9ed2a29f18 100644
--- a/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj
+++ b/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj
@@ -33,7 +33,7 @@
              [org.apache.clojure-mxnet.module :as m])
   (:gen-class))
 
-;;https://github.com/apache/incubator-mxnet/blob/master/example/rnn/old/char-rnn.ipynb
+;;https://github.com/apache/mxnet/blob/master/example/rnn/old/char-rnn.ipynb
 
 (when-not (.exists (clojure.java.io/file "data"))
   (do (println "Retrieving data...") (sh "./get_data.sh")))
diff --git a/contrib/clojure-package/project.clj b/contrib/clojure-package/project.clj
index ba9f3bba23..a8e2757ddb 100644
--- a/contrib/clojure-package/project.clj
+++ b/contrib/clojure-package/project.clj
@@ -17,7 +17,7 @@
 
 (defproject org.apache.mxnet.contrib.clojure/clojure-mxnet "1.9.1-SNAPSHOT"
   :description "Clojure package for MXNet"
-  :url "https://github.com/apache/incubator-mxnet"
+  :url "https://github.com/apache/mxnet"
   :license {:name "Apache License"
             :url "http://www.apache.org/licenses/LICENSE-2.0"}
   :dependencies [[org.clojure/clojure "1.9.0"]
diff --git a/cpp-package/README.md b/cpp-package/README.md
index 77ff0ee36e..48043e9eca 100644
--- a/cpp-package/README.md
+++ b/cpp-package/README.md
@@ -29,13 +29,13 @@ The cpp-package directory contains the implementation of C++ API. As mentioned a
 1.  Building the MXNet C++ package requires building MXNet from source.
 2.  Clone the MXNet GitHub repository **recursively** to ensure the code in submodules is available for building MXNet.
 	```
-	git clone --recursive https://github.com/apache/incubator-mxnet mxnet
+	git clone --recursive https://github.com/apache/mxnet mxnet
 	```
 
 3.  Install the [prerequisites](<https://mxnet.apache.org/install/build_from_source#prerequisites>), desired [BLAS libraries](<https://mxnet.apache.org/install/build_from_source#blas-library>) and optional [OpenCV, CUDA, and cuDNN](<https://mxnet.apache.org/install/build_from_source#optional>) for building MXNet from source.
-4.  There is a configuration file for make, [make/config.mk](<https://github.com/apache/incubator-mxnet/blob/master/make/config.mk>) that contains all the compilation options. You can edit this file and set the appropriate options prior to running the **make** command.
+4.  There is a configuration file for make, [make/config.mk](<https://github.com/apache/mxnet/blob/master/make/config.mk>) that contains all the compilation options. You can edit this file and set the appropriate options prior to running the **make** command.
 5.  Please refer to  [platform specific build instructions](<https://mxnet.apache.org/install/build_from_source#build-instructions-by-operating-system>) and available [build configurations](https://mxnet.apache.org/install/build_from_source#build-configurations) for more details.
-5.  For enabling the build of C++ Package, set the **USE\_CPP\_PACKAGE = 1** in [make/config.mk](<https://github.com/apache/incubator-mxnet/blob/master/make/config.mk>). Optionally, the compilation flag can also be specified on **make** command line as follows.
+6.  To enable the build of the C++ Package, set **USE\_CPP\_PACKAGE = 1** in [make/config.mk](<https://github.com/apache/mxnet/blob/master/make/config.mk>). Optionally, the compilation flag can also be specified on the **make** command line as follows.
 	```
 	make -j USE_CPP_PACKAGE=1
 	```
@@ -45,7 +45,7 @@ The cpp-package directory contains the implementation of C++ API. As mentioned a
 In order to consume the C++ API please follow the steps below.
 
 1. Ensure that the MXNet shared library is built from source with **USE\_CPP\_PACKAGE = 1**.
-2. Include the [MxNetCpp.h](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/include/mxnet-cpp/MxNetCpp.h>) in the program that is going to consume MXNet C++ API.
+2. Include the [MxNetCpp.h](<https://github.com/apache/mxnet/blob/master/cpp-package/include/mxnet-cpp/MxNetCpp.h>) header in the program that is going to consume the MXNet C++ API.
 	```
 	#include <mxnet-cpp/MxNetCpp.h>
 	```
diff --git a/cpp-package/example/README.md b/cpp-package/example/README.md
index 555316dd1a..d74a915e57 100644
--- a/cpp-package/example/README.md
+++ b/cpp-package/example/README.md
@@ -19,8 +19,8 @@
 
 ## Building C++ examples
 
-The examples in this folder demonstrate the **training** workflow. The **inference workflow** related examples can be found in [inference](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference>) folder.
-Please build the MXNet C++ Package as explained in the [README](<https://github.com/apache/incubator-mxnet/tree/master/cpp-package#building-c-package>) File before building these examples manually.
+The examples in this folder demonstrate the **training** workflow. The **inference workflow** related examples can be found in the [inference](<https://github.com/apache/mxnet/blob/master/cpp-package/example/inference>) folder.
+Please build the MXNet C++ Package as explained in the [README](<https://github.com/apache/mxnet/tree/master/cpp-package#building-c-package>) file before building these examples manually.
 The examples in this folder are built while building the MXNet library and cpp-package from source. However, they can be built manually as follows.
 
 From the cpp-package/examples directory:
@@ -39,9 +39,9 @@ The makefile will also download the necessary data files and store in a data fol
 
 ## Examples demonstrating training workflow
 
-This directory contains following examples. In order to run the examples, ensure that the path to the MXNet shared library is added to the OS specific environment variable viz. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS and **PATH** for Windows OS. For example `export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/home/ubuntu/incubator-mxnet/lib` on ubuntu using gpu.
+This directory contains the following examples. In order to run the examples, ensure that the path to the MXNet shared library is added to the OS-specific environment variable, i.e. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS and **PATH** for Windows OS. For example, `export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/home/ubuntu/mxnet/lib` on Ubuntu using GPU.
 
-### [alexnet.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/alexnet.cpp>)
+### [alexnet.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/alexnet.cpp>)
 
 The example implements the C++ version of AlexNet. The network trains on MNIST data. The number of epochs can be specified as a command line argument. For example, to train with 10 epochs, use the following:
 
@@ -49,7 +49,7 @@ The example implements the C++ version of AlexNet. The networks trains on MNIST
 build/alexnet 10
 ```
 
-### [googlenet.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/googlenet.cpp>)
+### [googlenet.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/googlenet.cpp>)
 
 The code implements a GoogLeNet/Inception network using the C++ API. The example uses MNIST data to train the network. By default, the example trains the model for 100 epochs. The number of epochs can also be specified on the command line. For example, to train the model for 10 epochs, use the following:
 
@@ -57,7 +57,7 @@ The code implements a GoogLeNet/Inception network using the C++ API. The example
 build/googlenet 10
 ```
 
-### [mlp.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/mlp.cpp>)
+### [mlp.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/mlp.cpp>)
 
 The code implements a multilayer perceptron from scratch. The example creates its own dummy data to train the model. The example does not require command line parameters. It trains the model for 20,000 epochs.
 To run the example, use the following command:
@@ -66,7 +66,7 @@ To run the example use the following command:
 build/mlp
 ```
 
-### [mlp_cpu.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/mlp_cpu.cpp>)
+### [mlp_cpu.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/mlp_cpu.cpp>)
 
 The code implements a multilayer perceptron to train the MNIST data. The code demonstrates the use of the "SimpleBind" C++ API and MNISTIter. The example is designed to work on CPU. The example does not require command line parameters.
 To run the example, use the following command:
@@ -75,7 +75,7 @@ To run the example use the following command:
 build/mlp_cpu
 ```
 
-### [mlp_gpu.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/mlp_gpu.cpp>)
+### [mlp_gpu.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/mlp_gpu.cpp>)
 
 The code implements a multilayer perceptron to train the MNIST data. The code demonstrates the use of the "SimpleBind" C++ API and MNISTIter. The example is designed to work on GPU. The example does not require command line arguments. To run the example, execute the following command:
 
@@ -83,7 +83,7 @@ The code implements a multilayer perceptron to train the MNIST data. The code de
 build/mlp_gpu
 ```
 
-### [mlp_csv.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/mlp_csv.cpp>)
+### [mlp_csv.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/mlp_csv.cpp>)
 
 The code implements a multilayer perceptron to train the MNIST data. The code demonstrates the use of the "SimpleBind" C++ API and CSVIter. The CSVIter can iterate data that is in CSV format. The example can be run on CPU or GPU. The example usage is as follows:
 
@@ -92,12 +92,12 @@ build/mlp_csv --train data/mnist_data/mnist_train.csv --test data/mnist_data/mni
 ```
 * To get the `mnist_training_set.csv` and `mnist_test_set.csv` files, please run the following command:
 ```python
-# in incubator-mxnet/cpp-package/example directory
+# in mxnet/cpp-package/example directory
 python mnist_to_csv.py ./data/mnist_data/train-images-idx3-ubyte ./data/mnist_data/train-labels-idx1-ubyte ./data/mnist_data/mnist_train.csv 60000
 python mnist_to_csv.py ./data/mnist_data/t10k-images-idx3-ubyte ./data/mnist_data/t10k-labels-idx1-ubyte ./data/mnist_data/mnist_test.csv 10000
 ```
 
-### [resnet.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/resnet.cpp>)
+### [resnet.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/resnet.cpp>)
 
 The code implements a resnet model using the C++ API. The model is used to train on MNIST data. The number of epochs for training the model can be specified on the command line. By default, the model is trained for 100 epochs. For example, to train with 10 epochs, use the following command:
 
@@ -105,14 +105,14 @@ The code implements a resnet model using the C++ API. The model is used to train
 build/resnet 10
 ```
 
-### [lenet.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/lenet.cpp>)
+### [lenet.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/lenet.cpp>)
 
 The code implements a lenet model using the C++ API. It uses MNIST training data in CSV format to train the network. The example does not use the built-in CSVIter to read the data from the CSV file. The number of epochs can be specified on the command line. By default, the model is trained for 100,000 epochs. For example, to train with 10 epochs, use the following command:
 
 ```
 build/lenet 10
 ```
-### [lenet\_with\_mxdataiter.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/mlp_cpu.cpp>)
+### [lenet\_with\_mxdataiter.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/mlp_cpu.cpp>)
 
 The code implements a lenet model using the C++ API. It uses MNIST training data to train the network. The example uses the built-in MNISTIter to read the data. The number of epochs can be specified on the command line. By default, the model is trained for 100 epochs. For example, to train with 10 epochs, use the following command:
 
@@ -122,7 +122,7 @@ build/lenet_with_mxdataiter 10
 
 In addition, there is `run_lenet_with_mxdataiter.sh`, which downloads the MNIST data and runs the `lenet_with_mxdataiter` example.
 
-### [inception_bn.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inception_bn.cpp>)
+### [inception_bn.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/inception_bn.cpp>)
 
 The code implements an Inception network using the C++ API with batch normalization. The example uses MNIST data to train the network. The model trains for 100 epochs. The example can be run by executing the following command:
 
diff --git a/cpp-package/example/inference/README.md b/cpp-package/example/inference/README.md
index 90047e5fe1..82309ecb61 100644
--- a/cpp-package/example/inference/README.md
+++ b/cpp-package/example/inference/README.md
@@ -19,7 +19,7 @@
 
 ## Building C++ Inference examples
 
-The examples in this folder demonstrate the **inference** workflow. Please build the MXNet C++ Package as explained in the [README](<https://github.com/apache/incubator-mxnet/tree/master/cpp-package#building-c-package>) File before building these examples.
+The examples in this folder demonstrate the **inference** workflow. Please build the MXNet C++ Package as explained in the [README](<https://github.com/apache/mxnet/tree/master/cpp-package#building-c-package>) File before building these examples.
 To build the examples, use the following commands:
 
 -  Release: **make all**
@@ -30,11 +30,11 @@ To build examples use following commands:
 
 This directory contains the following examples. In order to run the examples, ensure that the path to the MXNet shared library is added to the OS-specific environment variable, i.e. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS and **PATH** for Windows OS.
 
-## [imagenet_inference.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp>)
+## [imagenet_inference.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp>)
 
-This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by Intel® MKL-DNN (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.
+This example demonstrates the image classification workflow with pre-trained models using the MXNet C++ API. The script also supports inference with quantized CNN models generated by Intel® MKL-DNN (see this [quantization flow](https://github.com/apache/mxnet/blob/master/example/quantization/README.md)). Using the C++ API reduces the latency of most models to some extent compared with the current Python implementation.
 
-Most of CNN models have been tested on Linux systems. And 50000 images are used to collect accuracy numbers. Please refer to this [README](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md) for  more details about accuracy.
+Most of the CNN models have been tested on Linux systems, and 50000 images were used to collect accuracy numbers. Please refer to this [README](https://github.com/apache/mxnet/blob/master/example/quantization/README.md) for more details about accuracy.
 
 The following performance numbers are collected using the C++ inference API on an AWS EC2 C5.12xlarge instance. The environment variables are set as below:
 
@@ -81,10 +81,10 @@ imagenet_inference  --symbol_file <model symbol file in json format>
 Follow the steps below to do inference with more models.
 
 - Download the pre-trained FP32 models into ```./model``` directory.
-- Refer this [README](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md) to generate the corresponding quantized models and also put them into ```./model``` directory.
+- Refer to this [README](https://github.com/apache/mxnet/blob/master/example/quantization/README.md) to generate the corresponding quantized models and also put them into the ```./model``` directory.
 - Prepare the [validation dataset](http://data.mxnet.io/data/val_256_q90.rec) and put it into the ```./data``` directory.
 
-The below command lines show how to run inference with FP32/INT8 resnet50_v1 model. Because the C++ inference script provides the almost same command line as this [Python script](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/imagenet_inference.py) and then users can easily go from Python to C++.
+The commands below show how to run inference with the FP32/INT8 resnet50_v1 model. The C++ inference script provides almost the same command line as this [Python script](https://github.com/apache/mxnet/blob/master/example/quantization/imagenet_inference.py), so users can easily move from Python to C++.
 ```
 
 # FP32 inference
@@ -100,7 +100,7 @@ The below command lines show how to run inference with FP32/INT8 resnet50_v1 mod
 ./imagenet_inference --symbol_file "./model/resnet50_v1-quantized-5batches-naive-symbol.json" --batch_size 64 --num_inference_batches 500 --benchmark
 
 ```
-For a quick inference test, users can directly run [unit_test_imagenet_inference.sh](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/unit_test_imagenet_inference.sh>) by using the below command. This script will automatically download the pre-trained **Inception-Bn** and **resnet50_v1_int8** model and **validation dataset** which are required for inference.
+For a quick inference test, users can directly run [unit_test_imagenet_inference.sh](<https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/unit_test_imagenet_inference.sh>) using the command below. This script will automatically download the pre-trained **Inception-Bn** and **resnet50_v1_int8** models and the **validation dataset**, which are required for inference.
 
 ```
 ./unit_test_imagenet_inference.sh
@@ -146,7 +146,7 @@ imagenet_inference.cpp:439:  benchmark completed!
 imagenet_inference.cpp:440:  batch size: 16 num batch: 500 throughput: 6284.78 imgs/s latency:0.159115 ms
 ```
 
-## [sentiment_analysis_rnn.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/sentiment_analysis_rnn.cpp>)
+## [sentiment_analysis_rnn.cpp](<https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/sentiment_analysis_rnn.cpp>)
 This example demonstrates how you can load a pre-trained RNN model and use it to predict the sentiment expressed in the given movie review with the MXNet C++ API. The example is capable of processing variable length inputs. It performs the following tasks:
 - Loads the pre-trained RNN model.
 - Loads the dictionary file containing the word to index mapping.
@@ -210,4 +210,4 @@ Input Line : [ The direction is awesome] Score : 0.968855
 The sentiment score between 0 and 1, (1 being positive)=0.966677
 ```
 
-Alternatively, you can run the [unit_test_sentiment_analysis_rnn.sh](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/unit_test_sentiment_analysis_rnn.sh>) script.
+Alternatively, you can run the [unit_test_sentiment_analysis_rnn.sh](<https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/unit_test_sentiment_analysis_rnn.sh>) script.
diff --git a/cpp-package/include/mxnet-cpp/contrib.h b/cpp-package/include/mxnet-cpp/contrib.h
index 21ca540141..b78792be5f 100644
--- a/cpp-package/include/mxnet-cpp/contrib.h
+++ b/cpp-package/include/mxnet-cpp/contrib.h
@@ -58,13 +58,13 @@ namespace details {
 namespace contrib {
 
   // needs to be same with
-  //   https://github.com/apache/incubator-mxnet/blob/1c874cfc807cee755c38f6486e8e0f4d94416cd8/src/operator/subgraph/tensorrt/tensorrt-inl.h#L190
+  //   https://github.com/apache/mxnet/blob/1c874cfc807cee755c38f6486e8e0f4d94416cd8/src/operator/subgraph/tensorrt/tensorrt-inl.h#L190
   static const std::string TENSORRT_SUBGRAPH_PARAM_IDENTIFIER = "subgraph_params_names";
   // needs to be same with
-  //   https://github.com/apache/incubator-mxnet/blob/master/src/operator/subgraph/tensorrt/tensorrt.cc#L244
+  //   https://github.com/apache/mxnet/blob/master/src/operator/subgraph/tensorrt/tensorrt.cc#L244
   static const std::string TENSORRT_SUBGRAPH_PARAM_PREFIX = "subgraph_param_";
   /*!
-   * this is a mimic to https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/contrib/tensorrt.py#L37
+   * this mimics https://github.com/apache/mxnet/blob/master/python/mxnet/contrib/tensorrt.py#L37
    * @param symbol symbol that already called subgraph api
    * @param argParams original arg params, params needed by tensorrt will be removed after calling this function
    * @param auxParams original aux params, params needed by tensorrt will be removed after calling this function
diff --git a/cpp-package/include/mxnet-cpp/symbol.hpp b/cpp-package/include/mxnet-cpp/symbol.hpp
index 454d775ad2..3d95a99b09 100644
--- a/cpp-package/include/mxnet-cpp/symbol.hpp
+++ b/cpp-package/include/mxnet-cpp/symbol.hpp
@@ -191,7 +191,7 @@ inline std::map<std::string, std::string> Symbol::ListAttributes() const {
     std::map<std::string, std::string> attributes;
     for (mx_uint i = 0; i < size; ++i) {
         // pairs is 2 * size with key, value pairs according to
-        //   https://github.com/apache/incubator-mxnet/blob/master/include/mxnet/c_api.h#L1428
+        //   https://github.com/apache/mxnet/blob/master/include/mxnet/c_api.h#L1428
         attributes[pairs[2 * i]] = pairs[2 * i + 1];
     }
     return attributes;
diff --git a/cpp-package/tests/ci_test.sh b/cpp-package/tests/ci_test.sh
index 58f04b3416..ec03387dd1 100755
--- a/cpp-package/tests/ci_test.sh
+++ b/cpp-package/tests/ci_test.sh
@@ -60,7 +60,7 @@ cp ../../build/cpp-package/example/test_score .
 cp ../../build/cpp-package/example/test_ndarray_copy .
 ./test_ndarray_copy
 
-# skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/20011
+# skipping temporarily, tracked by https://github.com/apache/mxnet/issues/20011
 #cp ../../build/cpp-package/example/test_regress_label .
 #./test_regress_label
 
diff --git a/doap.rdf b/doap.rdf
index 10ff7139b4..c8b88bfa68 100644
--- a/doap.rdf
+++ b/doap.rdf
@@ -29,7 +29,7 @@
     <asfext:pmc rdf:resource="https://incubator.apache.org" />
     <shortdesc>Apache MXNet is a deep learning framework designed for both efficiency and flexibility.</shortdesc>
     <description>Apache MXNet is a deep learning framework designed for both efficiency and flexibility. It's lightweight, Portable, Flexible Distributed/Mobile Deep Learning with dynamic, mutation-aware data-flow dependency scheduler; for Python, R, Julia, Scala, Go, Javascript and more</description>
-    <bug-database rdf:resource="https://github.com/apache/incubator-mxnet/labels/Bug" />
+    <bug-database rdf:resource="https://github.com/apache/mxnet/labels/Bug" />
     <mailing-list rdf:resource="https://lists.apache.org/list.html?dev@mxnet.apache.org" />
     <download-page rdf:resource="https://mxnet.apache.org/get_started/download" />
     <programming-language>C++</programming-language>
@@ -43,8 +43,8 @@
     </release>
     <repository>
       <GitRepository>
-        <location rdf:resource="https://github.com/apache/incubator-mxnet"/>
-        <browse rdf:resource="https://github.com/apache/incubator-mxnet"/>
+        <location rdf:resource="https://github.com/apache/mxnet"/>
+        <browse rdf:resource="https://github.com/apache/mxnet"/>
       </GitRepository>
     </repository>
     <maintainer>
diff --git a/docker/docker-python/README.md b/docker/docker-python/README.md
index 767be6d1eb..970f2aa03e 100644
--- a/docker/docker-python/README.md
+++ b/docker/docker-python/README.md
@@ -52,12 +52,12 @@ Refer: https://pypi.org/project/mxnet/
 `./build_python_dockerfile.sh <mxnet_version> <pip_tag> <path_to_cloned_mxnet_repo>`
 
 For example: 
-`./build_python_dockerfile.sh 1.3.0 1.3.0.post0 ~/build-docker/incubator-mxnet`
+`./build_python_dockerfile.sh 1.3.0 1.3.0.post0 ~/build-docker/mxnet`
 
 ### Tests run
-* [test_conv.py](https://github.com/apache/incubator-mxnet/blob/master/tests/python/train/test_conv.py)
-* [train_mnist.py](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/train_mnist.py)
-* [test_mxnet.py](https://github.com/apache/incubator-mxnet/blob/master/docker/docker-python/test_mxnet.py): This script is used to make sure that the docker image builds the expected mxnet version. That is, the version picked by pip is the same as as the version passed as a parameter.
+* [test_conv.py](https://github.com/apache/mxnet/blob/master/tests/python/train/test_conv.py)
+* [train_mnist.py](https://github.com/apache/mxnet/blob/master/example/image-classification/train_mnist.py)
+* [test_mxnet.py](https://github.com/apache/mxnet/blob/master/docker/docker-python/test_mxnet.py): This script is used to make sure that the docker image builds the expected mxnet version. That is, the version picked by pip is the same as the version passed as a parameter.
 
 ### Dockerhub Credentials
 Dockerhub credentials will be required to push images at the end of this script.
diff --git a/docs/README.md b/docs/README.md
index cd78c94b03..8060f01481 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -38,7 +38,7 @@ If you plan to contribute changes to the documentation or website, please submit
 
 MXNet's Python documentation is built with [Sphinx](https://www.sphinx-doc.org) and a variety of plugins, including [pandoc](https://pandoc.org/) and [recommonmark](https://github.com/rtfd/recommonmark).
 
-More information on the dependencies can be found in the [CI folder's installation scripts](https://github.com/apache/incubator-mxnet/tree/master/ci/docker/install/ubuntu_docs.sh).
+More information on the dependencies can be found in the [CI folder's installation scripts](https://github.com/apache/mxnet/tree/master/ci/docker/install/ubuntu_docs.sh).
 
 You can run just the Python docs by following the instructions in the Python API guide.
 
@@ -56,7 +56,7 @@ If you only need to make changes to tutorials or other pages that are not genera
 
 ### Ubuntu Setup
 
-As this is maintained for CI, Ubuntu is recommended. Refer to [ubuntu_doc.sh](https://github.com/apache/incubator-mxnet/tree/master/ci/docker/install/ubuntu_docs.sh) for the latest install script.
+As this is maintained for CI, Ubuntu is recommended. Refer to [ubuntu_docs.sh](https://github.com/apache/mxnet/tree/master/ci/docker/install/ubuntu_docs.sh) for the latest install script.
 
 ### Caveat for Rendering Outputs
 
@@ -158,11 +158,11 @@ The `-W` Sphinx option enforces "warnings as errors". This will help you debug y
 
 ## Production Website Deployment Process
 
-[Apache Jenkins MXNet website building job](https://builds.apache.org/job/incubator-mxnet-build-site/) is used to build MXNet website.
+The [Apache Jenkins MXNet website building job](https://builds.apache.org/job/mxnet-build-site/) is used to build the MXNet website.
 
-The Jenkins docs build job will fetch MXNet repository, build MXNet website and push all static files to [host repository](https://github.com/apache/incubator-mxnet-site.git).
+The Jenkins docs build job will fetch the MXNet repository, build the MXNet website, and push all static files to the [host repository](https://github.com/apache/mxnet-site.git).
 
-The host repo is hooked with [Apache gitbox](https://gitbox.apache.org/repos/asf?p=incubator-mxnet-site.git;a=summary) to host website.
+The host repo is hooked into [Apache gitbox](https://gitbox.apache.org/repos/asf?p=mxnet-site.git;a=summary) to host the website.
 
 ### Processes for Running the Docs Build Jobs
 
diff --git a/docs/python_docs/python/scripts/conf.py b/docs/python_docs/python/scripts/conf.py
index 684c8decb3..58bcb200ee 100644
--- a/docs/python_docs/python/scripts/conf.py
+++ b/docs/python_docs/python/scripts/conf.py
@@ -32,7 +32,7 @@ needs_sphinx = '1.5.6'
 project = u'Apache MXNet'
 author = u'%s developers' % project
 copyright = u'2015-2019, %s' % author
-github_doc_root = 'https://github.com/apache/incubator-mxnet/tree/master/docs/'
+github_doc_root = 'https://github.com/apache/mxnet/tree/master/docs/'
 doc_root = 'https://mxnet.apache.org/'
 
 # add markdown parser
diff --git a/docs/python_docs/python/tutorials/deploy/export/onnx.md b/docs/python_docs/python/tutorials/deploy/export/onnx.md
index 4e74fd73f9..2c1cb7a43f 100644
--- a/docs/python_docs/python/tutorials/deploy/export/onnx.md
+++ b/docs/python_docs/python/tutorials/deploy/export/onnx.md
@@ -17,14 +17,14 @@
 
 # Exporting to ONNX format
 
-[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. In the MXNet 1.9 release, the MXNet-to-ONNX export module (mx2onnx) has received a major update with new features such as dynamic input shapes and better operator and model coverages. Please visit the [ONNX Export Support for MXNet](https://github.com/apache [...]
+[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. In the MXNet 1.9 release, the MXNet-to-ONNX export module (mx2onnx) has received a major update with new features such as dynamic input shapes and better operator and model coverages. Please visit the [ONNX Export Support for MXNet](https://github.com/apache [...]
 
 In this tutorial, we will learn how to use the mx2onnx exporter on pre-trained models.
 
 ## Prerequisites
 
 To run the tutorial we will need to have installed the following python modules:
-- [MXNet >= 1.9.0](/get_started) _OR_ an earlier MXNet version + [the mx2onnx wheel](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#installation)
+- [MXNet >= 1.9.0](/get_started) _OR_ an earlier MXNet version + [the mx2onnx wheel](https://github.com/apache/mxnet/tree/v1.x/python/mxnet/onnx#installation)
 - [onnx >= 1.7.0](https://github.com/onnx/onnx#installation)
 
 *Note:* The latest mx2onnx exporting module is tested with ONNX op set version 12 or later, which corresponds to ONNX version 1.7 or later. Use of earlier ONNX versions may still work on some simple models, but again this is not tested.
@@ -67,7 +67,7 @@ export_model(sym, params, in_shapes=None, in_types=<class 'numpy.float32'>, onnx
     Exports the MXNet model file, passed as a parameter, into ONNX model.
     Accepts both symbol,parameter objects as well as json and params filepaths as input.
     Operator support and coverage -
-    https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#operator-support-matrix
+    https://github.com/apache/mxnet/tree/v1.x/python/mxnet/onnx#operator-support-matrix
     
     Parameters
     ----------
@@ -139,7 +139,7 @@ We have defined the input parameters required for the `export_model` API. Now, w
 converted_model_path = mx.onnx.export_model(sym, params, in_shapes, in_types, onnx_file)
 ```
 
-This API returns the path of the converted model which you can later use to run inference with or import the model into other frameworks. Please refer to [mx2onnx](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#apis) for more details about the API.
+This API returns the path of the converted model, which you can later use to run inference with or import the model into other frameworks. Please refer to [mx2onnx](https://github.com/apache/mxnet/tree/v1.x/python/mxnet/onnx#apis) for more details about the API.
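
To sanity-check the exported file, you can load it back with the `onnx` package from the prerequisites. A minimal sketch, reusing the `converted_model_path` returned above:

```python
import onnx

# run ONNX's structural validation on the exported graph
model_proto = onnx.load_model(converted_model_path)
onnx.checker.check_model(model_proto)
```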
 
 ## Dynamic input shapes
 The mx2onnx module also supports dynamic input shapes. We can set `dynamic=True` to turn it on. Note that even with dynamic shapes, a set of static input shapes still need to be specified in `in_shapes`; on top of that, we'll also need to specify which dimensions of the input shapes are dynamic in `dynamic_input_shapes`. We can simply set the dynamic dimensions as `None`, e.g. `(1, 3, None, None)`, or use strings in place of the `None`'s for better understandability in the exported onnx  [...]
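
For illustration, a minimal sketch of a dynamic-shape export, reusing `sym`, `params` and `onnx_file` from above; the concrete shapes are hypothetical:

```python
import numpy as np
import mxnet as mx

# static shapes are still required; batch, height and width are marked dynamic
in_shapes = [(1, 3, 224, 224)]
dynamic_input_shapes = [(None, 3, 'height', 'width')]

converted_model_path = mx.onnx.export_model(
    sym, params, in_shapes, np.float32, onnx_file,
    dynamic=True, dynamic_input_shapes=dynamic_input_shapes)
```
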
diff --git a/docs/python_docs/python/tutorials/extend/custom_layer.md b/docs/python_docs/python/tutorials/extend/custom_layer.md
index 72700c0b97..7ac0acf672 100644
--- a/docs/python_docs/python/tutorials/extend/custom_layer.md
+++ b/docs/python_docs/python/tutorials/extend/custom_layer.md
@@ -24,9 +24,9 @@ In this article, I will cover how to create a new layer from scratch, how to use
 
 ## The simplest custom layer
 
-To create a new layer in Gluon API, one must create a class that inherits from [Block](https://github.com/apache/incubator-mxnet/blob/c9818480680f84daa6e281a974ab263691302ba8/python/mxnet/gluon/block.py#L128) class. This class provides the most basic functionality, and all pre-defined layers inherit from it directly or via other subclasses. Because each layer in Apache MxNet inherits from `Block`, words "layer" and "block" are used interchangeable inside of the Apache MxNet community.
+To create a new layer in the Gluon API, one must create a class that inherits from the [Block](https://github.com/apache/mxnet/blob/c9818480680f84daa6e281a974ab263691302ba8/python/mxnet/gluon/block.py#L128) class. This class provides the most basic functionality, and all pre-defined layers inherit from it directly or via other subclasses. Because each layer in Apache MXNet inherits from `Block`, the words "layer" and "block" are used interchangeably inside the Apache MXNet community.
 
-The only instance method needed to be implemented is [forward(self, x)](https://github.com/apache/incubator-mxnet/blob/c9818480680f84daa6e281a974ab263691302ba8/python/mxnet/gluon/block.py#L909), which defines what exactly your layer is going to do during forward propagation. Notice, that it doesn't require to provide what the block should do during back propogation. Back propogation pass for blocks is done by Apache MxNet for you. 
+The only instance method that needs to be implemented is [forward(self, x)](https://github.com/apache/mxnet/blob/c9818480680f84daa6e281a974ab263691302ba8/python/mxnet/gluon/block.py#L909), which defines what exactly your layer does during forward propagation. Notice that you don't need to specify what the block should do during backpropagation; the backpropagation pass for blocks is handled by Apache MXNet for you.
 
 In the example below, we define a new layer and implement the `forward()` method to normalize input data by fitting it into a range of [0, 1].
 
@@ -50,11 +50,11 @@ class NormalizationLayer(gluon.Block):
         return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
 ```
 
-The rest of methods of the `Block` class are already implemented, and majority of them are used to work with parameters of a block. There is one very special method named [hybridize()](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L384), though, which I am going to cover before moving to a more complex example of a custom layer.
+The rest of the methods of the `Block` class are already implemented, and the majority of them are used to work with the parameters of a block. There is one very special method named [hybridize()](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/block.py#L384), though, which I am going to cover before moving to a more complex example of a custom layer.
 
 ## Hybridization and the difference between Block and HybridBlock
 
-Looking into implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that more often a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428), instead of directly inheriting from `Block`.
+Looking into the implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that a block more often inherits from a [HybridBlock](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/block.py#L428) instead of directly inheriting from `Block`.
 
 The reason for that is that `HybridBlock` allows you to write custom layers that can be used in imperative programming as well as in symbolic programming. It is convenient to support both ways, because imperative programming eases the debugging of the code and the symbolic one provides faster execution speed. You can learn more about the difference between symbolic vs. imperative programming from [this article](/api/architecture/program_model).
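
As a quick illustration, a minimal sketch of a hybridizable version of the normalization layer above; in MXNet 1.x a `HybridBlock` implements `hybrid_forward(self, F, x)`, where `F` is `mx.nd` in imperative mode and `mx.sym` after hybridization:

```python
import mxnet as mx
from mxnet import gluon, nd

class NormalizationHybridLayer(gluon.HybridBlock):
    def hybrid_forward(self, F, x):
        # same normalization as before, expressed via the F backend
        return (x - F.min(x)) / (F.max(x) - F.min(x))

net = NormalizationHybridLayer()
net.hybridize()  # switch to the faster symbolic execution path
print(net(nd.array([1, 2, 3])))
```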
 
@@ -140,7 +140,7 @@ Output:
 
 Usually, a layer has a set of associated parameters, sometimes also referred to as weights. This is the internal state of a layer. Most often, these parameters are the ones that we want to learn during the backpropagation step, but sometimes these parameters might be just constants we want to use during the forward pass.
 
-All parameters of a block are stored and accessed via [ParameterDict](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/parameter.py#L508) class. This class helps with initialization, updating, saving and loading of the parameters. Each layer can have multiple set of parameters, and all of them can be stored in a single instance of the `ParameterDict` class. On a block level, the instance of the `ParameterDict` class is accessible via `self.params` field, and outsi [...]
+All parameters of a block are stored and accessed via the [ParameterDict](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/parameter.py#L508) class. This class helps with initialization, updating, saving and loading of the parameters. Each layer can have multiple sets of parameters, and all of them can be stored in a single instance of the `ParameterDict` class. On a block level, the instance of the `ParameterDict` class is accessible via the `self.params` field, and outside of a bl [...]
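
For instance, a minimal sketch of a layer that registers a learnable parameter through `self.params`; the class name and shape are hypothetical:

```python
import mxnet as mx
from mxnet import gluon

class ScaleLayer(gluon.HybridBlock):
    def __init__(self, scale_shape, **kwargs):
        super(ScaleLayer, self).__init__(**kwargs)
        with self.name_scope():
            # registered in self.params, which is a ParameterDict
            self.scale = self.params.get('scale', shape=scale_shape,
                                         init=mx.init.One())

    def hybrid_forward(self, F, x, scale):
        # registered parameters are passed into hybrid_forward by name
        return F.broadcast_mul(x, scale)
```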
 
 
 ```python
diff --git a/docs/python_docs/python/tutorials/getting-started/gluon_from_experiment_to_deployment.md b/docs/python_docs/python/tutorials/getting-started/gluon_from_experiment_to_deployment.md
index 20e9cabcda..0452d3a822 100644
--- a/docs/python_docs/python/tutorials/getting-started/gluon_from_experiment_to_deployment.md
+++ b/docs/python_docs/python/tutorials/getting-started/gluon_from_experiment_to_deployment.md
@@ -47,7 +47,7 @@ We have prepared a utility file to help you download and organize your data into
 ```python
 import mxnet as mx
 data_util_file = "oxford_102_flower_dataset.py"
-base_url = "https://raw.githubusercontent.com/apache/incubator-mxnet/master/docs/tutorial_utils/data/{}?raw=true"
+base_url = "https://raw.githubusercontent.com/apache/mxnet/master/docs/tutorial_utils/data/{}?raw=true"
 mx.test_utils.download(base_url.format(data_util_file), fname=data_util_file)
 import oxford_102_flower_dataset
 
@@ -313,7 +313,7 @@ probability=9.798435, class=lotus
 You can continue to the [next tutorial](/api/cpp/docs/tutorials/cpp_inference) on how to load the model we just trained and run inference using MXNet C++ API.
 
 You can also find more ways to run inference and deploy your models here:
-1. [Java Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
+1. [Java Inference examples](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
 2. [Scala Inference examples](/api/scala/docs/tutorials/infer)
 3. [MXNet Model Server Examples](https://github.com/awslabs/mxnet-model-server/tree/master/examples)
 
@@ -323,4 +323,4 @@ You can also find more ways to run inference and deploy your models here:
 2. [Gluon book on fine-tuning](https://www.d2l.ai/chapter_computer-vision/fine-tuning.html)
 3. [Gluon CV transfer learning tutorial](https://gluon-cv.mxnet.io/build/examples_classification/transfer_learning_minc.html)
 4. [Gluon crash course](https://gluon-crash-course.mxnet.io/)
-5. [Gluon CPP inference example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/)
+5. [Gluon CPP inference example](https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/)
diff --git a/docs/python_docs/python/tutorials/packages/gluon/blocks/custom_layer_beginners.md b/docs/python_docs/python/tutorials/packages/gluon/blocks/custom_layer_beginners.md
index 933a70bbdf..6c8ea05728 100644
--- a/docs/python_docs/python/tutorials/packages/gluon/blocks/custom_layer_beginners.md
+++ b/docs/python_docs/python/tutorials/packages/gluon/blocks/custom_layer_beginners.md
@@ -23,9 +23,9 @@ In this article, I will cover how to create a new layer from scratch, how to use
 
 ## The simplest custom layer
 
-To create a new layer in Gluon API, one must create a class that inherits from [Block](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L123) class. This class provides the most basic functionality, and all pre-defined layers inherit from it directly or via other subclasses. Because each layer in Apache MxNet inherits from `Block`  words “layer” and “block” are used interchangeable inside of the Apache MxNet community.
+To create a new layer in the Gluon API, one must create a class that inherits from the [Block](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/block.py#L123) class. This class provides the most basic functionality, and all pre-defined layers inherit from it directly or via other subclasses. Because each layer in Apache MXNet inherits from `Block`, the words “layer” and “block” are used interchangeably inside the Apache MXNet community.
 
-The only instance method needed to be implemented is [forward(self, x)](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L415) which defines what exactly your layer is going to do during forward propagation. Notice, that it doesn’t require to provide what the block should do during back propogation. Back propogation pass for blocks is done by Apache MxNet for you.
+The only instance method that needs to be implemented is [forward(self, x)](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/block.py#L415), which defines what exactly your layer does during forward propagation. Notice that you don’t need to specify what the block should do during backpropagation; the backpropagation pass for blocks is handled by Apache MXNet for you.
 
 In the example below, we define a new layer and implement the `forward()` method to normalize input data by fitting it into a range of [0, 1].
 
@@ -47,11 +47,11 @@ class NormalizationLayer(gluon.Block):
         return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
 ```
 
-The rest of methods of the `Block` class are already implemented, and majority of them are used to work with parameters of a block. There is one very special method named [hybridize()](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L384), though, which I am going to cover before moving to a more complex example of a custom layer.
+The rest of the methods of the `Block` class are already implemented, and the majority of them are used to work with the parameters of a block. There is one very special method named [hybridize()](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/block.py#L384), though, which I am going to cover before moving to a more complex example of a custom layer.
 
 ## Hybridization and the difference between Block and HybridBlock
 
-Looking into implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that more often a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428), instead of directly inheriting from `Block`.
+Looking into the implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that a block more often inherits from a [HybridBlock](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/block.py#L428) instead of directly inheriting from `Block`.
 
 The reason for that is that `HybridBlock` allows you to write custom layers that can be used in imperative programming as well as in symbolic programming. It is convenient to support both ways, because imperative programming eases the debugging of the code and the symbolic one provides faster execution speed. You can learn more about the difference between symbolic vs. imperative programming from this [deep learning programming paradigm](/api/architecture/program_model) article.
 
@@ -127,7 +127,7 @@ net(input)
 
 Usually, a layer has a set of associated parameters, sometimes also referred to as weights. This is the internal state of a layer. Most often, these parameters are the ones that we want to learn during the backpropagation step, but sometimes these parameters might be just constants we want to use during the forward pass.
 
-All parameters of a block are stored and accessed via [ParameterDict](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/parameter.py#L508) class. This class helps with initialization, updating, saving and loading of the parameters. Each layer can have multiple set of parameters, and all of them can be stored in a single instance of the `ParameterDict` class. On a block level, the instance of the `ParameterDict` class is accessible via `self.params` field, and outsi [...]
+All parameters of a block are stored and accessed via the [ParameterDict](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/parameter.py#L508) class. This class helps with initialization, updating, saving and loading of the parameters. Each layer can have multiple sets of parameters, and all of them can be stored in a single instance of the `ParameterDict` class. On a block level, the instance of the `ParameterDict` class is accessible via the `self.params` field, and outside of a bl [...]
 
 ```python
 class NormalizationHybridLayer(gluon.HybridBlock):
diff --git a/docs/python_docs/python/tutorials/packages/gluon/blocks/save_load_params.md b/docs/python_docs/python/tutorials/packages/gluon/blocks/save_load_params.md
index ee72095994..962b9ebe4d 100644
--- a/docs/python_docs/python/tutorials/packages/gluon/blocks/save_load_params.md
+++ b/docs/python_docs/python/tutorials/packages/gluon/blocks/save_load_params.md
@@ -252,8 +252,8 @@ net.export("lenet", epoch=1)
 ### From a different frontend
 
 One of the main reasons to serialize model architecture into a JSON file is to load it from a different frontend like C, C++ or Scala. Here are a couple of examples:
-1. [Loading serialized Hybrid networks from C](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/predict-cpp/image-classification-predict.cc)
-2. [Loading serialized Hybrid networks from Scala](https://github.com/apache/incubator-mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/ImageClassifier.scala)
+1. [Loading serialized Hybrid networks from C](https://github.com/apache/mxnet/blob/master/example/image-classification/predict-cpp/image-classification-predict.cc)
+2. [Loading serialized Hybrid networks from Scala](https://github.com/apache/mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/ImageClassifier.scala)
 
 ### From Python
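
A minimal sketch of loading the exported pair back with `gluon.nn.SymbolBlock.imports`, assuming the `net.export("lenet", epoch=1)` call above wrote `lenet-symbol.json` and `lenet-0001.params`:

```python
from mxnet import gluon

# rebuild the network from the serialized architecture and parameters
deserialized_net = gluon.nn.SymbolBlock.imports("lenet-symbol.json",
                                                ["data"],
                                                "lenet-0001.params")
```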
 
diff --git a/docs/python_docs/python/tutorials/packages/gluon/loss/loss.md b/docs/python_docs/python/tutorials/packages/gluon/loss/loss.md
index 018e75f5fc..feb8f35e28 100644
--- a/docs/python_docs/python/tutorials/packages/gluon/loss/loss.md
+++ b/docs/python_docs/python/tutorials/packages/gluon/loss/loss.md
@@ -236,7 +236,7 @@ The network would learn to minimize the distance between the two `A`'s and maxim
 
 #### [CTC Loss](/api/python/docs/api/gluon/loss/index.html#mxnet.gluon.loss.CTCLoss)
 
-CTC Loss is the [connectionist temporal classification loss](https://distill.pub/2017/ctc/) . It is used to train recurrent neural networks with variable time dimension. It learns the alignment and labelling of input sequences. It takes a sequence as input and gives probabilities for each timestep. For instance, in the following image the word is not well aligned with the 5 timesteps because of the different sizes of characters. CTC Loss finds for each timestep the highest probability e. [...]
+CTC Loss is the [connectionist temporal classification loss](https://distill.pub/2017/ctc/). It is used to train recurrent neural networks with variable time dimension. It learns the alignment and labelling of input sequences. It takes a sequence as input and gives probabilities for each timestep. For instance, in the following image the word is not well aligned with the 5 timesteps because of the different sizes of characters. CTC Loss finds for each timestep the highest probability e. [...]
 
 ![ctc_loss](ctc_loss.png)
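
A minimal sketch of computing this loss in Gluon; the shapes and labels below are hypothetical, with labels padded by `-1`:

```python
import mxnet as mx

loss_fn = mx.gluon.loss.CTCLoss()              # default layout 'NTC'
pred = mx.nd.random.uniform(shape=(2, 20, 5))  # (batch, timesteps, alphabet)
label = mx.nd.array([[2, 1, -1, -1],           # hypothetical label sequences,
                     [3, 2, 2, -1]])           # padded with -1
print(loss_fn(pred, label))
```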
 
diff --git a/docs/python_docs/python/tutorials/packages/gluon/training/fit_api_tutorial.md b/docs/python_docs/python/tutorials/packages/gluon/training/fit_api_tutorial.md
index 3858d0f5e1..d442ce82d5 100644
--- a/docs/python_docs/python/tutorials/packages/gluon/training/fit_api_tutorial.md
+++ b/docs/python_docs/python/tutorials/packages/gluon/training/fit_api_tutorial.md
@@ -163,7 +163,7 @@ There are also some default utility handlers that will be added to your estimato
 `ValidationHandler` is used to validate your model on test data at each epoch's end and then calculate validation metrics.
 You can create these utility handlers with different configurations and pass them to the estimator. This will override the default handler configuration.
 You can create a custom handler by inheriting one or multiple
-[base event handlers](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/contrib/estimator/event_handler.py#L32)
+[base event handlers](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/contrib/estimator/event_handler.py#L32)
  including: `TrainBegin`, `TrainEnd`, `EpochBegin`, `EpochEnd`, `BatchBegin`, `BatchEnd`.
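
For illustration, a minimal sketch of a custom handler; the class name and message are hypothetical:

```python
from mxnet.gluon.contrib.estimator.event_handler import EpochEnd

class EpochPrinter(EpochEnd):
    """Hypothetical handler that reports the end of every epoch."""
    def epoch_end(self, estimator, *args, **kwargs):
        print("epoch finished")

# passed to training alongside the default handlers, e.g.:
# est.fit(train_data=train_loader, epochs=2, event_handlers=[EpochPrinter()])
```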
 
 
diff --git a/docs/python_docs/python/tutorials/packages/ndarray/gotchas_numpy_in_mxnet.md b/docs/python_docs/python/tutorials/packages/ndarray/gotchas_numpy_in_mxnet.md
index 1fe40bc167..5bb6cef611 100644
--- a/docs/python_docs/python/tutorials/packages/ndarray/gotchas_numpy_in_mxnet.md
+++ b/docs/python_docs/python/tutorials/packages/ndarray/gotchas_numpy_in_mxnet.md
@@ -102,9 +102,9 @@ pad_array(nd.array([1, 2, 3]), max_length=10)
 `<NDArray 10 @cpu(0)>` <!--notebook-skip-line-->
 
 
-### Search for an operator on [Github](https://github.com/apache/incubator-mxnet/labels/Operator)
+### Search for an operator on [Github](https://github.com/apache/mxnet/labels/Operator)
 
-Apache MXNet community is responsive to requests, and everyone is welcomed to contribute new operators. Have in mind, that there is always a lag between new operators being merged into the codebase and release of a next stable version. For example, [nd.diag()](https://github.com/apache/incubator-mxnet/pull/11643) operator was recently introduced to Apache MXNet, but on the moment of writing this tutorial, it is not in any stable release. You can always get all latest implementations by i [...]
+The Apache MXNet community is responsive to requests, and everyone is welcome to contribute new operators. Keep in mind that there is always a lag between new operators being merged into the codebase and the release of the next stable version. For example, the [nd.diag()](https://github.com/apache/mxnet/pull/11643) operator was recently introduced to Apache MXNet, but at the moment of writing this tutorial, it is not in any stable release. You can always get all the latest implementations by installing [...]
 
 ## How to minimize the impact of blocking calls
 
diff --git a/docs/python_docs/python/tutorials/packages/ndarray/sparse/train.md b/docs/python_docs/python/tutorials/packages/ndarray/sparse/train.md
index 23654fc6a3..91013c8b5e 100644
--- a/docs/python_docs/python/tutorials/packages/ndarray/sparse/train.md
+++ b/docs/python_docs/python/tutorials/packages/ndarray/sparse/train.md
@@ -333,7 +333,7 @@ assert metric.get()[1] < 1, "Achieved MSE (%f) is larger than expected (1.0)" %
 ### Training the model with multiple machines or multiple devices
 
 Distributed training with `row_sparse` weights and gradients is supported in MXNet, which significantly reduces communication cost for large models. To train a sparse model with multiple machines, you need to call `prepare` before `forward`, or `save_checkpoint`.
-Please refer to the example in [mxnet/example/sparse/linear_classification](https://github.com/apache/incubator-mxnet/tree/master/example/sparse/linear_classification)
+Please refer to the example in [mxnet/example/sparse/linear_classification](https://github.com/apache/mxnet/tree/master/example/sparse/linear_classification)
 for more details.
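
For reference, a minimal sketch of constructing a `row_sparse` NDArray, which stores only the rows that are actually present:

```python
import mxnet as mx

# only rows 0 and 4 of the (6, 2) array are materialized
data = mx.nd.array([[1, 2], [3, 4]])
rsp = mx.nd.sparse.row_sparse_array((data, [0, 4]), shape=(6, 2))
print(rsp.asnumpy())
```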
 
 <!-- INSERT SOURCE DOWNLOAD BUTTONS -->
diff --git a/docs/python_docs/python/tutorials/packages/ndarray/sparse/train_gluon.md b/docs/python_docs/python/tutorials/packages/ndarray/sparse/train_gluon.md
index 8239f2b7de..b7c85f365c 100644
--- a/docs/python_docs/python/tutorials/packages/ndarray/sparse/train_gluon.md
+++ b/docs/python_docs/python/tutorials/packages/ndarray/sparse/train_gluon.md
@@ -467,7 +467,7 @@ Memory Allocation for Weight Gradient:
 
 ### Advanced: Sparse `weight`
 
-You can optimize this example further by setting the weight's `stype` to `'row_sparse'`, but whether `'row_sparse'` weights make sense or not will depends on your specific task. See [contrib.SparseEmbedding](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/contrib/nn/basic_layers.py#L118) for an example of this.
+You can optimize this example further by setting the weight's `stype` to `'row_sparse'`, but whether `'row_sparse'` weights make sense or not depends on your specific task. See [contrib.SparseEmbedding](https://github.com/apache/mxnet/blob/master/python/mxnet/gluon/contrib/nn/basic_layers.py#L118) for an example of this.
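
A related, lighter-weight option in Gluon is `sparse_grad=True` on `nn.Embedding`, which keeps the weight dense but produces `row_sparse` gradients; a minimal sketch:

```python
from mxnet import gluon

# dense weight, but gradients are computed as row_sparse NDArrays
embedding = gluon.nn.Embedding(input_dim=10000, output_dim=16,
                               sparse_grad=True)
embedding.initialize()
```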
 
 ## Conclusion
 
diff --git a/docs/python_docs/python/tutorials/performance/backend/amp.md b/docs/python_docs/python/tutorials/performance/backend/amp.md
index c862b51131..a3b54a87ab 100644
--- a/docs/python_docs/python/tutorials/performance/backend/amp.md
+++ b/docs/python_docs/python/tutorials/performance/backend/amp.md
@@ -266,7 +266,7 @@ To do inference with mixed precision for a trained model in FP32, you can use th
 Below, we demonstrate for a gluon model and a symbolic model:
 - Conversion from FP32 model to mixed precision model.
 - Run inference on the mixed precision model.
-- For AMP conversion of bucketing module please refer to [example/rnn/bucketing/README.md](https://github.com/apache/incubator-mxnet/blob/master/example/rnn/bucketing/README.md).
+- For AMP conversion of the bucketing module, please refer to [example/rnn/bucketing/README.md](https://github.com/apache/mxnet/blob/master/example/rnn/bucketing/README.md).
 
 ```python
 with mx.Context(mx.gpu(0)):
diff --git a/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_quantization.md b/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_quantization.md
index ae86386a5c..74a24ad252 100644
--- a/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_quantization.md
+++ b/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_quantization.md
@@ -19,7 +19,7 @@
 
 This document introduces how to quantize customer models from FP32 to INT8 with the Apache MXNet toolkit and APIs on Intel CPUs.
 
-If you are not familiar with Apache/MXNet quantization flow, please reference [quantization blog](https://medium.com/apache-mxnet/model-quantization-for-production-level-neural-network-inference-f54462ebba05) first, and the performance data is shown in [Apache/MXNet C++ interface](https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example/inference) and [GluonCV](https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html). 
+If you are not familiar with the Apache MXNet quantization flow, please read the [quantization blog](https://medium.com/apache-mxnet/model-quantization-for-production-level-neural-network-inference-f54462ebba05) first; performance data is shown for the [Apache MXNet C++ interface](https://github.com/apache/mxnet/tree/master/cpp-package/example/inference) and [GluonCV](https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html).
 
 ## Installation and Prerequisites
 
@@ -35,7 +35,7 @@ pip install --pre "mxnet<2" -f https://dist.mxnet.io/python
 
 ## Image Classification Demo
 
-A quantization script [imagenet_gen_qsym_mkldnn.py](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/imagenet_gen_qsym_mkldnn.py) has been designed to launch quantization for image-classification models. This script is  integrated with [Gluon-CV modelzoo](https://gluon-cv.mxnet.io/model_zoo/classification.html), so that all pre-trained models can be downloaded from Gluon-CV and then converted for quantization. For details, you can refer [Model Quantization with  [...]
+A quantization script [imagenet_gen_qsym_mkldnn.py](https://github.com/apache/mxnet/blob/master/example/quantization/imagenet_gen_qsym_mkldnn.py) has been designed to launch quantization for image-classification models. This script is integrated with the [Gluon-CV modelzoo](https://gluon-cv.mxnet.io/model_zoo/classification.html), so that all pre-trained models can be downloaded from Gluon-CV and then converted for quantization. For details, you can refer to [Model Quantization with Calibratio [...]
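
The script is built on MXNet's quantization API. For orientation, a minimal sketch of a direct call to that API; the model prefix and arguments are placeholders, not the script's exact code path:

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# assumes a checkpoint saved as resnet50_v1-symbol.json / resnet50_v1-0000.params
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet50_v1', 0)
qsym, qarg_params, aux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(), calib_mode='none')  # 'none' skips calibration
```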
 
 ## Integrate Quantization Flow to Your Project
 
@@ -256,7 +256,7 @@ BTW, You can also modify the `min_calib_range` and `max_calib_range` in the JSON
 
 ## Deploy with Python/C++
 
-MXNet also supports deploy quantized models with C++. Refer [MXNet C++ Package](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/README.md) for more details.
+MXNet also supports deploying quantized models with C++. Refer to the [MXNet C++ Package](https://github.com/apache/mxnet/blob/master/cpp-package/README.md) for more details.
 
 # Improving accuracy with Intel® Neural Compressor
 
@@ -498,7 +498,7 @@ def native_quantization(model, calib_dataloader, dev_dataloader):
     return quantized_model
 ```
 
-For complete code, see this example on the [official GitHub repository](https://github.com/apache/incubator-mxnet/tree/v1.x/example/quantization_inc/BERT_MRPC).
+For complete code, see this example on the [official GitHub repository](https://github.com/apache/mxnet/tree/v1.x/example/quantization_inc/BERT_MRPC).
 
 #### Results:
 
diff --git a/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md b/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md
index 086839a7bb..2936829f59 100644
--- a/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md
+++ b/docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md
@@ -20,7 +20,7 @@
 Better training and inference performance is expected on Intel-Architecture CPUs with MXNet built with [Intel MKL-DNN](https://github.com/intel/mkl-dnn) on multiple operating systems, including Linux, Windows and MacOS.
 In the following sections, you will find build instructions for MXNet with Intel MKL-DNN on Linux, MacOS and Windows.
 
-Please find MKL-DNN optimized operators and other features in the [MKL-DNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md).
+Please find MKL-DNN optimized operators and other features in the [MKL-DNN operator list](https://github.com/apache/mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md).
 
 The detailed performance data collected on Intel Xeon CPU with MXNet built with Intel MKL-DNN can be found [here](https://mxnet.apache.org/api/faq/perf#intel-cpu).
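
Once an MKL-DNN build is installed, you can confirm the backend is compiled in; a minimal sketch using `mxnet.runtime`:

```python
from mxnet.runtime import Features

# True when this MXNet binary was built with MKL-DNN support
print(Features().is_enabled('MKLDNN'))
```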
 
@@ -51,8 +51,8 @@ sudo apt-get install -y graphviz
 ### Clone MXNet sources
 
 ```
-git clone --recursive https://github.com/apache/incubator-mxnet.git
-cd incubator-mxnet
+git clone --recursive https://github.com/apache/mxnet.git
+cd mxnet
 ```
 
 ### Build MXNet with MKL-DNN
@@ -103,8 +103,8 @@ brew install llvm
 ### Clone MXNet sources
 
 ```
-git clone --recursive https://github.com/apache/incubator-mxnet.git
-cd incubator-mxnet
+git clone --recursive https://github.com/apache/mxnet.git
+cd mxnet
 ```
 
 ### Build MXNet with MKL-DNN
@@ -131,9 +131,9 @@ To build and install MXNet yourself, you need the following dependencies. Instal
 
 After you have installed all of the required dependencies, build the MXNet source code:
 
-1. Start a Visual Studio command prompt by click windows Start menu>>Visual Studio 2015>>VS2015 X64 Native Tools Command Prompt, and download the MXNet source code from [GitHub](https://github.com/apache/incubator-mxnet) by the command:
+1. Start a Visual Studio command prompt by clicking Windows Start menu >> Visual Studio 2015 >> VS2015 X64 Native Tools Command Prompt, and download the MXNet source code from [GitHub](https://github.com/apache/mxnet) with the command:
 ```
-git clone --recursive https://github.com/apache/incubator-mxnet.git
+git clone --recursive https://github.com/apache/mxnet.git
-cd C:\incubator-mxent
+cd C:\mxnet
 ```
 2. Enable Intel MKL-DNN by -DUSE_MKLDNN=1. Use [CMake 3](https://cmake.org/) to create a Visual Studio solution in ```./build```. Make sure to specify the architecture in the
@@ -170,7 +170,7 @@ User can follow the same steps of Visual Studio 2015 to build MXNET with MKL-DNN
 Preinstall python and some dependent modules:
 ```
 pip install numpy graphviz
-set PYTHONPATH=[workdir]\incubator-mxnet\python
+set PYTHONPATH=[workdir]\mxnet\python
 ```
 or install mxnet
 ```
@@ -295,7 +295,7 @@ This limitations of this experimental feature are:
 
 Benefiting from Intel MKL-DNN, MXNet delivers outstanding performance improvements for quantization and INT8 inference on the Intel Xeon Scalable Platform.
 
-- [CNN Quantization Examples](https://github.com/apache/incubator-mxnet/tree/master/example/quantization).
+- [CNN Quantization Examples](https://github.com/apache/mxnet/tree/master/example/quantization).
 
 - [Model Quantization for Production-Level Neural Network Inference](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN).
 
@@ -305,4 +305,4 @@ Benefiting from Intel MKL-DNN, MXNet built with Intel MKL-DNN brings outstanding
 
 - For questions or support specific to MKL, visit the [Intel MKLDNN](https://github.com/intel/mkl-dnn) website.
 
-- If you find bugs, please open an issue on GitHub for [MXNet with MKL](https://github.com/apache/incubator-mxnet/labels/MKL) or [MXNet with MKLDNN](https://github.com/apache/incubator-mxnet/labels/MKLDNN).
+- If you find bugs, please open an issue on GitHub for [MXNet with MKL](https://github.com/apache/mxnet/labels/MKL) or [MXNet with MKLDNN](https://github.com/apache/mxnet/labels/MKLDNN).
diff --git a/docs/python_docs/python/tutorials/performance/backend/profiler.md b/docs/python_docs/python/tutorials/performance/backend/profiler.md
index b9798a6c65..3ba9b641f9 100644
--- a/docs/python_docs/python/tutorials/performance/backend/profiler.md
+++ b/docs/python_docs/python/tutorials/performance/backend/profiler.md
@@ -327,7 +327,7 @@ You can initiate the profiling directly from inside Visual Profiler or from the
 
 `==11588== NVPROF is profiling process 11588, command: python my_profiler_script.py`
 
-`==11588== Generated result file: /home/user/Development/incubator-mxnet/ci/my_profile.nvvp`
+`==11588== Generated result file: /home/user/Development/mxnet/ci/my_profile.nvvp`
 
 We specified an output file called `my_profile.nvvp` and this will be annotated with NVTX ranges (for MXNet operations) that will be displayed alongside the standard NVProf timeline. This can be very useful when you're trying to find patterns between operators run by MXNet, and their associated CUDA kernel calls.
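
Alongside NVProf, you can also record the same workload with MXNet's built-in profiler; a minimal sketch of turning it on around some work:

```python
import mxnet as mx
from mxnet import profiler

profiler.set_config(profile_all=True, filename='my_profile.json')
profiler.set_state('run')

a = mx.nd.random.uniform(shape=(1024, 1024))
mx.nd.dot(a, a).wait_to_read()   # some work to record

profiler.set_state('stop')
profiler.dump()                  # write my_profile.json
```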
 
@@ -353,7 +353,7 @@ Nsight Compute is available in CUDA 10 toolkit, but can be used to profile code
 
 ### Further reading
 
-- [Examples using MXNet profiler.](https://github.com/apache/incubator-mxnet/tree/master/example/profiler)
+- [Examples using MXNet profiler.](https://github.com/apache/mxnet/tree/master/example/profiler)
 - [Some tips for improving MXNet performance.](https://mxnet.apache.org/api/faq/perf)
 
 <!-- INSERT SOURCE DOWNLOAD BUTTONS -->
diff --git a/docs/python_docs/python/tutorials/performance/index.rst b/docs/python_docs/python/tutorials/performance/index.rst
index b1f5c66c20..877165bba7 100644
--- a/docs/python_docs/python/tutorials/performance/index.rst
+++ b/docs/python_docs/python/tutorials/performance/index.rst
@@ -117,7 +117,7 @@ Distributed Training
 
    .. card::
       :title: MXNet with Horovod
-      :link: https://github.com/apache/incubator-mxnet/tree/master/example/distributed_training-horovod
+      :link: https://github.com/apache/mxnet/tree/master/example/distributed_training-horovod
 
       A set of example scripts demonstrating MNIST and ImageNet training with Horovod as the distributed training backend.
 
diff --git a/docs/python_docs/themes/mx-theme/mxtheme/footer.html b/docs/python_docs/themes/mx-theme/mxtheme/footer.html
index 9c0da629e2..2c07098c3c 100644
--- a/docs/python_docs/themes/mx-theme/mxtheme/footer.html
+++ b/docs/python_docs/themes/mx-theme/mxtheme/footer.html
@@ -8,7 +8,7 @@
                     <li><a class="u-email" href="mailto:user@mxnet.apache.org">User mailing list</a></li>
                     <li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li>
                     <li><a href="https://issues.apache.org/jira/projects/MXNET/issues">Jira Tracker</a></li>
-                    <li><a href="https://github.com/apache/incubator-mxnet/labels/Roadmap">Github Roadmap</a></li>
+                    <li><a href="https://github.com/apache/mxnet/labels/Roadmap">Github Roadmap</a></li>
                     <li><a href="https://medium.com/apache-mxnet">Blog</a></li>
                     <li><a href="https://discuss.mxnet.io">Forum</a></li>
                     <li><a href="/community/contribute">Contribute</a></li>
@@ -16,7 +16,7 @@
                 </ul>
             </div>
 
-            <div class="col-4"><ul class="social-media-list"><li><a href="https://github.com/apache/incubator-mxnet"><svg class="svg-icon"><use xlink:href="{{pathto('_static/minima-social-icons.svg#github', 1)}}"></use></svg> <span class="username">apache/incubator-mxnet</span></a></li><li><a href="https://www.twitter.com/apachemxnet"><svg class="svg-icon"><use xlink:href="{{pathto('_static/minima-social-icons.svg#twitter', 1)}}"></use></svg> <span class="username">apachemxnet</span></a> [...]
+            <div class="col-4"><ul class="social-media-list"><li><a href="https://github.com/apache/mxnet"><svg class="svg-icon"><use xlink:href="{{pathto('_static/minima-social-icons.svg#github', 1)}}"></use></svg> <span class="username">apache/mxnet</span></a></li><li><a href="https://www.twitter.com/apachemxnet"><svg class="svg-icon"><use xlink:href="{{pathto('_static/minima-social-icons.svg#twitter', 1)}}"></use></svg> <span class="username">apachemxnet</span></a></li><li><a href="ht [...]
 </div>
 
             <div class="col-4 footer-text">
diff --git a/docs/python_docs/themes/mx-theme/mxtheme/header_top.html b/docs/python_docs/themes/mx-theme/mxtheme/header_top.html
index 67b487c192..a980160bf3 100644
--- a/docs/python_docs/themes/mx-theme/mxtheme/header_top.html
+++ b/docs/python_docs/themes/mx-theme/mxtheme/header_top.html
@@ -18,7 +18,7 @@
         <a class="page-link" href="/versions/1.9.1/ecosystem">Ecosystem</a>
         <a class="page-link page-current" href="/versions/1.9.1/api">Docs & Tutorials</a>
         <a class="page-link" href="/versions/1.9.1/trusted_by">Trusted By</a>
-        <a class="page-link" href="https://github.com/apache/incubator-mxnet">GitHub</a>
+        <a class="page-link" href="https://github.com/apache/mxnet">GitHub</a>
         <div class="dropdown" style="min-width:100px">
           <span class="dropdown-header">Apache
             <svg class="dropdown-caret" viewBox="0 0 32 32" class="icon icon-caret-bottom" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
diff --git a/docs/static_site/src/_config.yml b/docs/static_site/src/_config.yml
index 7e78422610..78f6344c18 100644
--- a/docs/static_site/src/_config.yml
+++ b/docs/static_site/src/_config.yml
@@ -35,7 +35,7 @@ email: dev@mxnet.apache.org
 description: >- # this means to ignore newlines until "baseurl:"
   A flexible and efficient library for deep learning.
 twitter_username: apachemxnet
-github_username:  apache/incubator-mxnet
+github_username:  apache/mxnet
 youtube_username: apachemxnet
 baseurl: /versions/1.9.1
 versions: 
diff --git a/docs/static_site/src/_config_beta.yml b/docs/static_site/src/_config_beta.yml
index 068dc182ed..be48f4f8e7 100644
--- a/docs/static_site/src/_config_beta.yml
+++ b/docs/static_site/src/_config_beta.yml
@@ -37,7 +37,7 @@ description: >- # this means to ignore newlines until "baseurl:"
 baseurl: /mxnet.io-v2 # the subpath of your site, e.g. /blog
 url: https://thomasdelteil.github.io
 twitter_username: apachemxnet
-github_username:  apache/incubator-mxnet
+github_username:  apache/mxnet
 youtube_username: apachemxnet
 baseurl: /versions/1.9.1
 versions: 
diff --git a/docs/static_site/src/_config_prod.yml b/docs/static_site/src/_config_prod.yml
index f1750fe1a0..f4023a792a 100644
--- a/docs/static_site/src/_config_prod.yml
+++ b/docs/static_site/src/_config_prod.yml
@@ -36,7 +36,7 @@ description: >- # this means to ignore newlines until "baseurl:"
   A flexible and efficient library for deep learning.
 url: https://mxnet.apache.org
 twitter_username: apachemxnet
-github_username:  apache/incubator-mxnet
+github_username:  apache/mxnet
 youtube_username: apachemxnet
 baseurl: /versions/1.9.1
 versions: 
diff --git a/docs/static_site/src/_includes/footer.html b/docs/static_site/src/_includes/footer.html
index 7426e081df..2861ce9175 100644
--- a/docs/static_site/src/_includes/footer.html
+++ b/docs/static_site/src/_includes/footer.html
@@ -7,7 +7,7 @@
                     <li><a href="{{'community/contribute#mxnet-dev-communications'|relative_url}}">Mailing lists</a></li>
                     <li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li>
                     <li><a href="https://issues.apache.org/jira/projects/MXNET/issues">Jira Tracker</a></li>
-                    <li><a href="https://github.com/apache/incubator-mxnet/labels/Roadmap">Github Roadmap</a></li>
+                    <li><a href="https://github.com/apache/mxnet/labels/Roadmap">Github Roadmap</a></li>
                     <li><a href="https://medium.com/apache-mxnet">Blog</a></li>
                     <li><a href="https://discuss.mxnet.io">Forum</a></li>
                     <li><a href="{{'community/contribute'|relative_url}}">Contribute</a></li>
@@ -28,16 +28,10 @@
     <div class="wrapper">
         <div class="row">
             <div class="col-3">
-                <img src="{{'/assets/img/apache_incubator_logo.png' | relative_url}}" class="footer-logo col-2">
+                <img src="{{'/assets/img/asf_logo.svg' | relative_url}}" class="footer-logo col-2">
             </div>
             <div class="footer-bottom-warning col-9">
-                <p>Apache MXNet is an effort undergoing incubation at <a href="http://www.apache.org/">The Apache Software Foundation</a> (ASF), <span
-                        style="font-weight:bold">sponsored by the <i>Apache Incubator</i></span>. Incubation is required
-                    of all newly accepted projects until a further review indicates that the infrastructure,
-                    communications, and decision making process have stabilized in a manner consistent with other
-                    successful ASF projects. While incubation status is not necessarily a reflection of the completeness
-                    or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
-                </p><p>"Copyright © 2017-2022, The Apache Software Foundation Apache MXNet, MXNet, Apache, the Apache
+                <p>"Copyright © 2017-2022, The Apache Software Foundation. Licensed under the Apache License, Version 2.0. Apache MXNet, MXNet, Apache, the Apache
                     feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the
                     Apache Software Foundation."</p>
             </div>
diff --git a/docs/static_site/src/_includes/get_started/cloud/cpu.md b/docs/static_site/src/_includes/get_started/cloud/cpu.md
index 440582727b..1f4326ae5d 100644
--- a/docs/static_site/src/_includes/get_started/cloud/cpu.md
+++ b/docs/static_site/src/_includes/get_started/cloud/cpu.md
@@ -7,7 +7,7 @@ but they point to packages that are *not* provided nor endorsed by the Apache
 Software Foundation. As such, they might contain software components with more
 restrictive licenses than the Apache License and you'll need to decide whether
 they are appropriate for your usage. Like all Apache Releases, the official
-Apache MXNet (incubating) releases consist of source code only and are found at
+Apache MXNet releases consist of source code only and are found at
 the [Download page](https://mxnet.apache.org/get_started/download).
 
 * **Amazon Web Services**
diff --git a/docs/static_site/src/_includes/get_started/cloud/gpu.md b/docs/static_site/src/_includes/get_started/cloud/gpu.md
index 8f64a3ac5c..781b9597d4 100644
--- a/docs/static_site/src/_includes/get_started/cloud/gpu.md
+++ b/docs/static_site/src/_includes/get_started/cloud/gpu.md
@@ -7,7 +7,7 @@ but they point to packages that are *not* provided nor endorsed by the Apache
 Software Foundation. As such, they might contain software components with more
 restrictive licenses than the Apache License and you'll need to decide whether
 they are appropriate for your usage. Like all Apache Releases, the official
-Apache MXNet (incubating) releases consist of source code only and are found at
+Apache MXNet releases consist of source code only and are found at
 the [Download page](https://mxnet.apache.org/get_started/download).
 
 * **Alibaba**
diff --git a/docs/static_site/src/_includes/get_started/devices/raspberry_pi.md b/docs/static_site/src/_includes/get_started/devices/raspberry_pi.md
index 3cc8bb91d6..8aacbcb6d4 100644
--- a/docs/static_site/src/_includes/get_started/devices/raspberry_pi.md
+++ b/docs/static_site/src/_includes/get_started/devices/raspberry_pi.md
@@ -165,8 +165,8 @@ Clone the MXNet source code repository using the following `git` command in your
 directory:
 
 {% highlight bash %}
-git clone https://github.com/apache/incubator-mxnet.git --recursive
-cd incubator-mxnet
+git clone https://github.com/apache/mxnet.git --recursive
+cd mxnet
 {% endhighlight %}
 
 Build:
diff --git a/docs/static_site/src/_includes/get_started/linux/python/cpu/docker.md b/docs/static_site/src/_includes/get_started/linux/python/cpu/docker.md
index 7eab6d36de..d1dbbed7b8 100644
--- a/docs/static_site/src/_includes/get_started/linux/python/cpu/docker.md
+++ b/docs/static_site/src/_includes/get_started/linux/python/cpu/docker.md
@@ -3,7 +3,7 @@ your convenience but they point to packages that are *not* provided nor endorsed
 by the Apache Software Foundation. As such, they might contain software
 components with more restrictive licenses than the Apache License and you'll
 need to decide whether they are appropriate for your usage. Like all Apache
-Releases, the official Apache MXNet (incubating) releases consist of source code
+Releases, the official Apache MXNet releases consist of source code
 only and are found at
 the [Download page](https://mxnet.apache.org/get_started/download).
     
diff --git a/docs/static_site/src/_includes/get_started/linux/python/cpu/pip.md b/docs/static_site/src/_includes/get_started/linux/python/cpu/pip.md
index af4307381b..a2dfd681cc 100644
--- a/docs/static_site/src/_includes/get_started/linux/python/cpu/pip.md
+++ b/docs/static_site/src/_includes/get_started/linux/python/cpu/pip.md
@@ -4,7 +4,7 @@ Software Foundation. As such, they might contain software components with more
 restrictive licenses than the Apache License and you'll need to decide whether
 they are appropriate for your usage. The packages linked here contain GPL GCC
 Runtime Library components. Like all Apache Releases, the official Apache MXNet
-(incubating) releases consist of source code only and are found at the [Download
+releases consist of source code only and are found at the [Download
 page](https://mxnet.apache.org/get_started/download).
 
 Run the following command:
diff --git a/docs/static_site/src/_includes/get_started/linux/python/gpu/docker.md b/docs/static_site/src/_includes/get_started/linux/python/gpu/docker.md
index f963bc9c58..eab9e8d101 100644
--- a/docs/static_site/src/_includes/get_started/linux/python/gpu/docker.md
+++ b/docs/static_site/src/_includes/get_started/linux/python/gpu/docker.md
@@ -3,7 +3,7 @@ your convenience but they point to packages that are *not* provided nor endorsed
 by the Apache Software Foundation. As such, they might contain software
 components with more restrictive licenses than the Apache License and you'll
 need to decide whether they are appropriate for your usage. Like all Apache
-Releases, the official Apache MXNet (incubating) releases consist of source code
+Releases, the official Apache MXNet releases consist of source code
 only and are found at
 the [Download page](https://mxnet.apache.org/get_started/download).
 
diff --git a/docs/static_site/src/_includes/get_started/linux/python/gpu/pip.md b/docs/static_site/src/_includes/get_started/linux/python/gpu/pip.md
index e28646d2ee..6ca0d29492 100644
--- a/docs/static_site/src/_includes/get_started/linux/python/gpu/pip.md
+++ b/docs/static_site/src/_includes/get_started/linux/python/gpu/pip.md
@@ -4,7 +4,7 @@ Software Foundation. As such, they might contain software components with more
 restrictive licenses than the Apache License and you'll need to decide whether
 they are appropriate for your usage. The packages linked here contain
 proprietary parts of the NVidia CUDA SDK and GPL GCC Runtime Library components.
-Like all Apache Releases, the official Apache MXNet (incubating) releases
+Like all Apache Releases, the official Apache MXNet releases
 consist of source code only and are found at the [Download
 page](https://mxnet.apache.org/get_started/download).
 
diff --git a/docs/static_site/src/_includes/header.html b/docs/static_site/src/_includes/header.html
index d8531bbe3c..3df5244193 100644
--- a/docs/static_site/src/_includes/header.html
+++ b/docs/static_site/src/_includes/header.html
@@ -85,14 +85,13 @@
         <a class="page-link" href="{{'/ecosystem' | relative_url }}">Ecosystem</a>
         <a class="page-link" href="{{'/api' | relative_url }}">Docs & Tutorials</a>
         <a class="page-link" href="{{'/trusted_by' | relative_url }}">Trusted By</a>
-        <a class="page-link" href="https://github.com/apache/incubator-mxnet">GitHub</a>
+        <a class="page-link" href="https://github.com/apache/mxnet">GitHub</a>
         <div class="dropdown" style="min-width:100px">
           <span class="dropdown-header">Apache
             <svg class="dropdown-caret" viewBox="0 0 32 32" class="icon icon-caret-bottom" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
           </span>
           <div class="dropdown-content" style="min-width:250px">
             <a href="https://www.apache.org/foundation/">Apache Software Foundation</a>
-            <a href="https://incubator.apache.org/">Apache Incubator</a>
             <a href="https://www.apache.org/licenses/">License</a>
             <a href="{{ '/api/faq/security.html' | relative_url }}">Security</a>
             <a href="https://privacy.apache.org/policies/privacy-policy-public.html">Privacy</a>
diff --git a/docs/static_site/src/assets/img/asf_logo.svg b/docs/static_site/src/assets/img/asf_logo.svg
new file mode 100644
index 0000000000..620694c524
--- /dev/null
+++ b/docs/static_site/src/assets/img/asf_logo.svg
@@ -0,0 +1,210 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Generator: Adobe Illustrator 19.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg version="1.1" id="Layer_2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
+	 viewBox="0 0 7127.6 2890" enable-background="new 0 0 7127.6 2890" xml:space="preserve">
+<path fill="#6D6E71" d="M7104.7,847.8c15.3,15.3,22.9,33.7,22.9,55.2c0,21.5-7.6,39.9-22.9,55.4c-15.3,15.4-33.8,23.1-55.6,23.1
+	c-21.8,0-40.2-7.6-55.4-22.9c-15.1-15.3-22.7-33.7-22.7-55.2c0-21.5,7.6-39.9,22.9-55.4c15.3-15.4,33.7-23.1,55.4-23.1
+	C7070.9,824.9,7089.4,832.5,7104.7,847.8z M7098.1,951.9c13.3-13.6,20-29.8,20-48.7s-6.6-35-19.8-48.5
+	c-13.2-13.4-29.4-20.1-48.6-20.1c-19.2,0-35.4,6.7-48.7,20.2c-13.3,13.5-19.9,29.7-19.9,48.7c0,19,6.6,35.2,19.7,48.6
+	c13.1,13.4,29.3,20.1,48.5,20.1S7084.7,965.4,7098.1,951.9z M7087.1,888.1c0,14-6.1,22.8-18.4,26.4l22.5,30.5h-18.2l-20.3-28.3
+	h-18.6v28.3h-14.7v-84.6h31.8c12.8,0,22,2.2,27.6,6.6C7084.4,871.4,7087.1,878.4,7087.1,888.1z M7068.2,900c3-2.4,4.4-6.5,4.4-12
+	c0-5.5-1.5-9.4-4.5-11.6c-3-2.2-8.4-3.2-16-3.2h-18v30.5h17.5C7059.7,903.6,7065.3,902.4,7068.2,900z"/>
+<path fill="#6D6E71" d="M1803.6,499.8v155.4h-20V499.8h-56.8v-19.2h133.9v19.2H1803.6z"/>
+<path fill="#6D6E71" d="M2082.2,655.2v-76.9h-105.2v76.9h-20V480.5h20v78.9h105.2v-78.9h20v174.7H2082.2z"/>
+<path fill="#6D6E71" d="M2241.4,499.8v57.4h88.1v19.2h-88.1v59.8h101.8v19h-121.8V480.5H2340v19.2H2241.4z"/>
+<path fill="#D22128" d="M1574.5,1852.4l417.3-997.6h80.1l417.3,997.6h-105.4l-129.3-311.9h-448.2l-127.9,311.9H1574.5z M2032.6,970
+	l-205.1,493.2h404.7L2032.6,970z"/>
+<path fill="#D22128" d="M2596.9,1852.4V854.8H3010c171.4,0,295.1,158.8,295.1,313.3c0,163-115.2,316.1-286.6,316.1h-324.6v368.1
+	H2596.9z M2693.9,1397.1h318.9c118,0,193.9-108.2,193.9-229c0-125.1-92.7-226.2-202.3-226.2h-310.5V1397.1z"/>
+<path fill="#D22128" d="M3250.5,1852.4l417.3-997.6h80.1l417.3,997.6h-105.4l-129.3-311.9h-448.2l-127.9,311.9H3250.5z M3708.6,970
+	l-205.1,493.2h404.7L3708.6,970z"/>
+<path fill="#D22128" d="M4637.3,849.1c177,0,306.3,89.9,368.1,217.8l-78.7,47.8c-63.2-132.1-186.9-177-295.1-177
+	c-238.9,0-369.5,213.6-369.5,414.5c0,220.6,161.6,420.1,373.7,420.1c112.4,0,244.5-56.2,307.7-185.5l81.5,42.1
+	c-64.6,148.9-241.7,231.8-394.8,231.8c-274,0-466.5-261.3-466.5-514.2C4163.8,1106.3,4336.6,849.1,4637.3,849.1z"/>
+<path fill="#D22128" d="M5949.1,854.8v997.6h-98.4v-466.5h-591.5v466.5h-96.9V854.8h96.9v444h591.5v-444H5949.1z"/>
+<path fill="#D22128" d="M6844.6,1765.2v87.1h-670.2V854.8H6832v87.1h-560.6v359.7h489v82.9h-489v380.8H6844.6z"/>
+<path fill="#6D6E71" d="M1667.6,2063.6c11.8,3.5,22.2,8.3,31,14.2l-10.3,22.6c-9-6-18.6-10.4-28.9-13.4c-10.2-2.9-20-4.4-29.2-4.4
+	c-13.6,0-24.5,2.4-32.6,7.3c-8.1,4.9-12.2,11.8-12.2,20.7c0,7.6,2.3,14,6.8,19c4.5,5,10.2,8.9,17,11.7c6.8,2.8,16.1,6,28,9.6
+	c14.4,4.6,26,8.9,34.7,12.9c8.8,4,16.3,9.9,22.5,17.8c6.2,7.8,9.3,18.2,9.3,31c0,11.7-3.2,21.8-9.5,30.6
+	c-6.3,8.7-15.3,15.5-26.8,20.3c-11.6,4.8-24.9,7.2-40,7.2c-15.1,0-29.7-2.9-43.9-8.7c-14.2-5.8-26.4-13.6-36.6-23.4l10.7-21.6
+	c9.6,9.4,20.7,16.7,33.3,21.9c12.6,5.2,24.8,7.8,36.8,7.8c15.3,0,27.3-3,36.1-8.9c8.8-5.9,13.2-13.9,13.2-23.9
+	c0-7.8-2.3-14.3-6.9-19.4c-4.6-5.1-10.3-9-17.1-11.9c-6.8-2.8-16.1-6-28-9.6c-14.2-4.2-25.7-8.3-34.6-12.2
+	c-8.9-3.9-16.4-9.7-22.5-17.5c-6.1-7.7-9.2-17.9-9.2-30.6c0-10.9,3-20.4,9-28.6c6-8.2,14.6-14.6,25.6-19.1
+	c11.1-4.5,23.8-6.8,38.2-6.8C1643.8,2058.3,1655.7,2060.1,1667.6,2063.6z"/>
+<path fill="#6D6E71" d="M1980.1,2072.8c16.8,9.4,30.2,22.3,40,38.4c9.8,16.2,14.8,33.9,14.8,53.3c0,19.5-4.9,37.4-14.8,53.6
+	c-9.8,16.3-23.2,29.1-40,38.6c-16.8,9.5-35.3,14.3-55.2,14.3c-20.3,0-38.8-4.7-55.7-14.3c-16.8-9.5-30.2-22.4-40-38.6
+	c-9.8-16.3-14.8-34.1-14.8-53.6c0-19.5,4.9-37.3,14.8-53.5c9.8-16.2,23.2-29,40-38.3c16.8-9.4,35.4-14,55.7-14
+	C1944.8,2058.6,1963.2,2063.3,1980.1,2072.8z M1881.9,2092.7c-13.1,7.4-23.6,17.5-31.4,30.1c-7.8,12.6-11.8,26.5-11.8,41.7
+	c0,15.3,3.9,29.3,11.8,42c7.8,12.7,18.3,22.8,31.4,30.2c13.1,7.4,27.4,11.1,42.9,11.1c15.5,0,29.7-3.7,42.7-11.1
+	c13-7.4,23.3-17.4,31.1-30.2c7.7-12.7,11.6-26.7,11.6-42s-3.9-29.2-11.6-41.8c-7.7-12.6-18.1-22.6-31.1-30
+	c-13-7.4-27.2-11.2-42.6-11.2C1909.4,2081.5,1895.1,2085.2,1881.9,2092.7z"/>
+<path fill="#6D6E71" d="M2186.5,2082.4v74h98.4v23.2h-98.4v90.2h-24.1v-210.6h133.8v23.2H2186.5z"/>
+<path fill="#6D6E71" d="M2491.6,2082.4v187.4h-24.1v-187.4h-68.4v-23.2h161.4v23.2H2491.6z"/>
+<path fill="#6D6E71" d="M2871.8,2269.8l-56.8-177.4l-57.6,177.4h-24.5l-70.5-210.6h25.9l57.9,182.7l57.1-182.4l24.1-0.3l57.7,182.7
+	l57.1-182.7h25l-70.6,210.6H2871.8z"/>
+<path fill="#6D6E71" d="M3087.3,2216.6l-23.5,53.2h-25.6l94.4-210.6h25l94.1,210.6h-26.1l-23.5-53.2H3087.3z M3144.5,2086.6
+	l-46.9,106.8h94.4L3144.5,2086.6z"/>
+<path fill="#6D6E71" d="M3461.1,2202.7c-6,0.4-10.7,0.6-14.1,0.6h-56v66.5H3367v-210.6h80c26.2,0,46.6,6.2,61.2,18.5
+	c14.5,12.3,21.8,29.8,21.8,52.3c0,17.2-4.1,31.7-12.2,43.3c-8.1,11.6-19.8,20-35,25l49.2,71.5h-27.3L3461.1,2202.7z M3491.3,2167.6
+	c10.3-8.4,15.5-20.8,15.5-37c0-15.9-5.2-27.9-15.5-36c-10.3-8.1-25.1-12.2-44.3-12.2h-56v97.8h56
+	C3466.2,2180.2,3481,2176,3491.3,2167.6z"/>
+<path fill="#6D6E71" d="M3688.3,2082.4v69.2h106.2v23.2h-106.2v72.1h122.8v22.9h-146.9v-210.6h142.9v23.2H3688.3z"/>
+<path fill="#6D6E71" d="M4147,2082.4v74h98.4v23.2H4147v90.2h-24.1v-210.6h133.8v23.2H4147z"/>
+<path fill="#6D6E71" d="M4523.3,2072.8c16.8,9.4,30.2,22.3,40,38.4c9.8,16.2,14.8,33.9,14.8,53.3c0,19.5-4.9,37.4-14.8,53.6
+	c-9.8,16.3-23.2,29.1-40,38.6c-16.8,9.5-35.3,14.3-55.2,14.3c-20.3,0-38.8-4.7-55.7-14.3c-16.8-9.5-30.2-22.4-40-38.6
+	c-9.8-16.3-14.8-34.1-14.8-53.6c0-19.5,4.9-37.3,14.8-53.5c9.8-16.2,23.2-29,40-38.3c16.8-9.4,35.4-14,55.7-14
+	C4488.1,2058.6,4506.5,2063.3,4523.3,2072.8z M4425.2,2092.7c-13.1,7.4-23.6,17.5-31.4,30.1c-7.8,12.6-11.8,26.5-11.8,41.7
+	c0,15.3,3.9,29.3,11.8,42c7.8,12.7,18.3,22.8,31.4,30.2c13.1,7.4,27.4,11.1,42.9,11.1c15.5,0,29.7-3.7,42.7-11.1
+	c13-7.4,23.3-17.4,31.1-30.2c7.7-12.7,11.6-26.7,11.6-42s-3.9-29.2-11.6-41.8c-7.7-12.6-18.1-22.6-31.1-30
+	c-13-7.4-27.2-11.2-42.6-11.2C4452.6,2081.5,4438.3,2085.2,4425.2,2092.7z"/>
+<path fill="#6D6E71" d="M4854.7,2247.7c-15.7,15.5-37.3,23.3-64.8,23.3c-27.7,0-49.4-7.8-65.1-23.3c-15.7-15.5-23.6-37-23.6-64.6
+	v-124h24.1v124c0,20.3,5.8,36.1,17.3,47.5c11.6,11.4,27.3,17.1,47.3,17.1c20.1,0,35.8-5.7,47.1-17c11.4-11.3,17-27.2,17-47.7v-124
+	h24.1v124C4878.2,2210.7,4870.4,2232.2,4854.7,2247.7z"/>
+<path fill="#6D6E71" d="M5169.5,2269.8l-126.3-169.1v169.1h-24.1v-210.6h25l126.3,169.3v-169.3h23.8v210.6H5169.5z"/>
+<path fill="#6D6E71" d="M5478.4,2073.1c16.4,9.3,29.4,21.9,38.9,37.9c9.6,16,14.3,33.9,14.3,53.5s-4.8,37.6-14.3,53.6
+	c-9.5,16.1-22.6,28.7-39.3,37.9c-16.6,9.2-35.2,13.8-55.5,13.8h-84.3v-210.6h85.2C5443.7,2059.2,5462,2063.8,5478.4,2073.1z
+	 M5362.3,2246.9h61.4c15.5,0,29.6-3.5,42.3-10.6c12.7-7.1,22.8-16.9,30.2-29.5c7.4-12.5,11.1-26.5,11.1-42
+	c0-15.5-3.8-29.4-11.3-41.9c-7.5-12.5-17.7-22.3-30.6-29.6c-12.8-7.2-27-10.9-42.6-10.9h-60.5V2246.9z"/>
+<path fill="#6D6E71" d="M5668.6,2216.6l-23.5,53.2h-25.6l94.4-210.6h25l94.1,210.6H5807l-23.5-53.2H5668.6z M5725.8,2086.6
+	l-46.9,106.8h94.4L5725.8,2086.6z"/>
+<path fill="#6D6E71" d="M5991,2082.4v187.4H5967v-187.4h-68.4v-23.2h161.4v23.2H5991z"/>
+<path fill="#6D6E71" d="M6175.9,2269.8v-210.6h24.1v210.6H6175.9z"/>
+<path fill="#6D6E71" d="M6493.7,2072.8c16.8,9.4,30.2,22.3,40,38.4c9.8,16.2,14.8,33.9,14.8,53.3c0,19.5-4.9,37.4-14.8,53.6
+	c-9.8,16.3-23.2,29.1-40,38.6c-16.8,9.5-35.3,14.3-55.2,14.3c-20.3,0-38.8-4.7-55.7-14.3c-16.8-9.5-30.2-22.4-40-38.6
+	c-9.8-16.3-14.8-34.1-14.8-53.6c0-19.5,4.9-37.3,14.8-53.5c9.8-16.2,23.2-29,40-38.3c16.8-9.4,35.4-14,55.7-14
+	C6458.5,2058.6,6476.9,2063.3,6493.7,2072.8z M6395.6,2092.7c-13.1,7.4-23.6,17.5-31.4,30.1c-7.8,12.6-11.8,26.5-11.8,41.7
+	c0,15.3,3.9,29.3,11.8,42c7.8,12.7,18.3,22.8,31.4,30.2c13.1,7.4,27.4,11.1,42.9,11.1c15.5,0,29.7-3.7,42.7-11.1
+	c13-7.4,23.3-17.4,31.1-30.2c7.7-12.7,11.6-26.7,11.6-42s-3.9-29.2-11.6-41.8c-7.7-12.6-18.1-22.6-31.1-30
+	c-13-7.4-27.2-11.2-42.6-11.2C6423,2081.5,6408.8,2085.2,6395.6,2092.7z"/>
+<path fill="#6D6E71" d="M6826.5,2269.8l-126.3-169.1v169.1h-24.1v-210.6h25l126.3,169.3v-169.3h23.8v210.6H6826.5z"/>
+<linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="-4516.6152" y1="-2338.7222" x2="-4108.4111" y2="-1861.3982" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0" style="stop-color:#F69923"/>
+	<stop  offset="0.3123" style="stop-color:#F79A23"/>
+	<stop  offset="0.8383" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_1_)" d="M1230.1,13.7c-45.3,26.8-120.6,102.5-210.5,212.3l82.6,155.9c58-82.9,116.9-157.5,176.3-221.2
+	c4.6-5.1,7-7.5,7-7.5c-2.3,2.5-4.6,5-7,7.5c-19.2,21.2-77.5,89.2-165.5,224.4c84.7-4.2,214.9-21.6,321.1-39.7
+	c31.6-177-31-258-31-258S1323.4-41.4,1230.1,13.7z"/>
+<path fill="none" d="M1090.2,903.1c0.6-0.1,1.2-0.2,1.8-0.3l-11.9,1.3c-0.7,0.3-1.4,0.7-2.1,1
+	C1082.1,904.4,1086.2,903.7,1090.2,903.1z"/>
+<path fill="none" d="M1005.9,1182.3c-6.7,1.5-13.7,2.7-20.7,3.7C992.3,1185,999.2,1183.8,1005.9,1182.3z"/>
+<path fill="none" d="M432.9,1808.8c0.9-2.3,1.8-4.7,2.6-7c18.2-48,36.2-94.7,54-140.1c20-51,39.8-100.4,59.3-148.3
+	c20.6-50.4,40.9-99.2,60.9-146.3c21-49.4,41.7-97,62-142.8c16.5-37.3,32.8-73.4,48.9-108.3c5.4-11.7,10.7-23.2,16-34.6
+	c10.5-22.7,21-44.8,31.3-66.5c9.5-20,19-39.6,28.3-58.8c3.1-6.4,6.2-12.8,9.3-19.1c0.5-1,1-2,1.5-3.1l-10.2,1.1l-8-15.9
+	c-0.8,1.6-1.6,3.1-2.4,4.6c-14.5,28.8-28.9,57.9-43.1,87.2c-8.2,16.9-16.4,34-24.6,51c-22.6,47.4-44.8,95.2-66.6,143.3
+	c-22.1,48.6-43.7,97.5-64.9,146.5c-20.8,48.1-41.3,96.2-61.2,144.2c-20,48-39.5,95.7-58.5,143.2c-19.9,49.5-39.2,98.7-58,147.2
+	c-4.2,10.9-8.5,21.9-12.7,32.8c-15,39.2-29.7,77.8-44,116l12.7,25.1l11.4-1.2c0.4-1.1,0.8-2.3,1.3-3.4
+	C396.7,1905.4,414.9,1856.4,432.9,1808.8z"/>
+<path fill="none" d="M980,1186.8L980,1186.8c0.1,0,0.1,0,0.1-0.1C980.1,1186.8,980.1,1186.8,980,1186.8z"/>
+<path fill="#BE202E" d="M952.6,1323c-10.6,1.9-21.4,3.8-32.5,5.7c-0.1,0-0.1,0.1-0.2,0.1c5.6-0.8,11.2-1.7,16.6-2.6
+	C942,1325.2,947.3,1324.1,952.6,1323z"/>
+<path opacity="0.35" fill="#BE202E" d="M952.6,1323c-10.6,1.9-21.4,3.8-32.5,5.7c-0.1,0-0.1,0.1-0.2,0.1c5.6-0.8,11.2-1.7,16.6-2.6
+	C942,1325.2,947.3,1324.1,952.6,1323z"/>
+<path fill="#BE202E" d="M980.3,1186.7C980.2,1186.7,980.2,1186.7,980.3,1186.7c-0.1,0.1-0.2,0.1-0.2,0.1c1.8-0.2,3.5-0.5,5.2-0.8
+	c7-1,13.9-2.2,20.7-3.7C997.5,1183.8,989,1185.2,980.3,1186.7L980.3,1186.7L980.3,1186.7z"/>
+<path opacity="0.35" fill="#BE202E" d="M980.3,1186.7C980.2,1186.7,980.2,1186.7,980.3,1186.7c-0.1,0.1-0.2,0.1-0.2,0.1
+	c1.8-0.2,3.5-0.5,5.2-0.8c7-1,13.9-2.2,20.7-3.7C997.5,1183.8,989,1185.2,980.3,1186.7L980.3,1186.7L980.3,1186.7z"/>
+<linearGradient id="SVGID_2_" gradientUnits="userSpaceOnUse" x1="-7537.7339" y1="-2391.4075" x2="-4625.4141" y2="-2391.4075" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0.3233" style="stop-color:#9E2064"/>
+	<stop  offset="0.6302" style="stop-color:#C92037"/>
+	<stop  offset="0.7514" style="stop-color:#CD2335"/>
+	<stop  offset="1" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_2_)" d="M858.6,784.7c25.1-46.9,50.5-92.8,76.2-137.4c26.7-46.4,53.7-91.3,80.9-134.7
+	c1.6-2.6,3.2-5.2,4.8-7.7c27-42.7,54.2-83.7,81.6-122.9L1019.5,226c-6.2,7.6-12.5,15.3-18.8,23.2c-23.8,29.7-48.6,61.6-73.9,95.5
+	c-28.6,38.2-58,78.9-87.8,121.7c-27.6,39.5-55.5,80.9-83.5,123.7c-23.8,36.5-47.7,74-71.4,112.5c-0.9,1.4-1.8,2.9-2.6,4.3
+	l107.5,212.3C811.8,873.6,835.1,828.7,858.6,784.7z"/>
+<linearGradient id="SVGID_3_" gradientUnits="userSpaceOnUse" x1="-7186.1777" y1="-2099.3059" x2="-5450.7183" y2="-2099.3059" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0" style="stop-color:#282662"/>
+	<stop  offset="9.548390e-02" style="stop-color:#662E8D"/>
+	<stop  offset="0.7882" style="stop-color:#9F2064"/>
+	<stop  offset="0.9487" style="stop-color:#CD2032"/>
+</linearGradient>
+<path fill="url(#SVGID_3_)" d="M369,1981c-14.2,39.1-28.5,78.9-42.9,119.6c-0.2,0.6-0.4,1.2-0.6,1.8c-2,5.7-4.1,11.5-6.1,17.2
+	c-9.7,27.4-18,52.1-37.3,108.2c31.7,14.5,57.1,52.5,81.1,95.6c-2.6-44.7-21-86.6-56.2-119.1c156.1,7,290.6-32.4,360.1-146.6
+	c6.2-10.2,11.9-20.9,17-32.2c-31.6,40.1-70.8,57.1-144.5,53c-0.2,0.1-0.3,0.1-0.5,0.2c0.2-0.1,0.3-0.1,0.5-0.2
+	c108.6-48.6,163.1-95.3,211.2-172.6c11.4-18.3,22.5-38.4,33.8-60.6c-94.9,97.5-205,125.3-320.9,104.2l-86.9,9.5
+	C374.4,1966.3,371.7,1973.6,369,1981z"/>
+<linearGradient id="SVGID_4_" gradientUnits="userSpaceOnUse" x1="-7374.1626" y1="-2418.5454" x2="-4461.8428" y2="-2418.5454" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0.3233" style="stop-color:#9E2064"/>
+	<stop  offset="0.6302" style="stop-color:#C92037"/>
+	<stop  offset="0.7514" style="stop-color:#CD2335"/>
+	<stop  offset="1" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_4_)" d="M409.6,1786.3c18.8-48.5,38.1-97.7,58-147.2c19-47.4,38.5-95.2,58.5-143.2
+	c20-48,40.4-96.1,61.2-144.2c21.2-49,42.9-97.8,64.9-146.5c21.8-48.1,44-95.9,66.6-143.3c8.1-17.1,16.3-34.1,24.6-51
+	c14.2-29.3,28.6-58.4,43.1-87.2c0.8-1.6,1.6-3.1,2.4-4.6L681.4,706.8c-1.8,2.9-3.5,5.8-5.3,8.6c-25.1,40.9-50,82.7-74.4,125.4
+	c-24.7,43.1-49,87.1-72.7,131.7c-20,37.6-39.6,75.6-58.6,113.9c-3.8,7.8-7.6,15.5-11.3,23.2c-23.4,48.2-44.6,94.8-63.7,139.5
+	c-21.7,50.7-40.7,99.2-57.5,145.1c-11,30.2-21,59.4-30.1,87.4c-7.5,24-14.7,47.9-21.5,71.8c-16,56.3-29.9,112.4-41.2,168.3
+	L353,1935.1c14.3-38.1,28.9-76.8,44-116C401.1,1808.2,405.4,1797.3,409.6,1786.3z"/>
+<linearGradient id="SVGID_5_" gradientUnits="userSpaceOnUse" x1="-7161.7642" y1="-2379.1431" x2="-5631.2524" y2="-2379.1431" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0" style="stop-color:#282662"/>
+	<stop  offset="9.548390e-02" style="stop-color:#662E8D"/>
+	<stop  offset="0.7882" style="stop-color:#9F2064"/>
+	<stop  offset="0.9487" style="stop-color:#CD2032"/>
+</linearGradient>
+<path fill="url(#SVGID_5_)" d="M243.5,1729.4c-13.6,68.2-23.2,136.2-28,203.8c-0.2,2.4-0.4,4.7-0.5,7.1
+	c-33.7-54-124-106.8-123.8-106.2c64.6,93.7,113.7,186.7,120.9,278c-34.6,7.1-82-3.2-136.8-23.3c57.1,52.5,100,67,116.7,70.9
+	c-52.5,3.3-107.1,39.3-162.1,80.8c80.5-32.8,145.5-45.8,192.1-35.3C148.1,2414.2,74.1,2645,0,2890c22.7-6.7,36.2-21.9,43.9-42.6
+	c13.2-44.4,100.8-335.6,238-718.2c3.9-10.9,7.8-21.8,11.8-32.9c1.1-3,2.2-6.1,3.3-9.2c14.5-40.1,29.5-81.1,45.1-122.9
+	c3.5-9.5,7.1-19,10.7-28.6c0.1-0.2,0.1-0.4,0.2-0.6l-107.9-213.2C244.6,1724.4,244,1726.9,243.5,1729.4z"/>
+<linearGradient id="SVGID_6_" gradientUnits="userSpaceOnUse" x1="-7374.1626" y1="-2117.1309" x2="-4461.8428" y2="-2117.1309" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0.3233" style="stop-color:#9E2064"/>
+	<stop  offset="0.6302" style="stop-color:#C92037"/>
+	<stop  offset="0.7514" style="stop-color:#CD2335"/>
+	<stop  offset="1" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_6_)" d="M805.6,937c-3.1,6.3-6.2,12.7-9.3,19.1c-9.3,19.2-18.8,38.8-28.3,58.8
+	c-10.3,21.7-20.7,43.9-31.3,66.5c-5.3,11.4-10.6,22.9-16,34.6c-16.1,35-32.4,71.1-48.9,108.3c-20.3,45.8-41,93.4-62,142.8
+	c-20,47.1-40.3,95.9-60.9,146.3c-19.5,47.9-39.3,97.3-59.3,148.3c-17.8,45.4-35.9,92.1-54,140.1c-0.9,2.3-1.8,4.7-2.6,7
+	c-18,47.6-36.2,96.6-54.6,146.8c-0.4,1.1-0.8,2.3-1.3,3.4l86.9-9.5c-1.7-0.3-3.5-0.5-5.2-0.9c103.9-13,242.1-90.6,331.4-186.5
+	c41.1-44.2,78.5-96.3,113-157.3c25.7-45.4,49.8-95.8,72.8-151.5c20.1-48.7,39.4-101.4,58-158.6c-23.9,12.6-51.2,21.8-81.4,28.2
+	c-5.3,1.1-10.7,2.2-16.1,3.1c-5.5,1-11,1.8-16.6,2.6l0,0l0,0c0.1,0,0.1-0.1,0.2-0.1c96.9-37.3,158-109.2,202.4-197.4
+	c-25.5,17.4-66.9,40.1-116.6,51.1c-6.7,1.5-13.7,2.7-20.7,3.7c-1.7,0.3-3.5,0.6-5.2,0.8l0,0l0,0c0.1,0,0.1,0,0.1-0.1
+	c0,0,0.1,0,0.1,0l0,0c33.6-14.1,62-29.8,86.6-48.4c5.3-4,10.4-8.1,15.3-12.3c7.5-6.5,14.7-13.3,21.5-20.5c4.4-4.6,8.6-9.3,12.7-14.2
+	c9.6-11.5,18.7-23.9,27.1-37.3c2.6-4.1,5.1-8.3,7.6-12.6c3.2-6.2,6.3-12.3,9.3-18.3c13.5-27.2,24.4-51.5,33-72.8
+	c4.3-10.6,8.1-20.5,11.3-29.7c1.3-3.7,2.5-7.2,3.7-10.6c3.4-10.2,6.2-19.3,8.4-27.3c3.3-12,5.3-21.5,6.4-28.4l0,0l0,0
+	c-3.3,2.6-7.1,5.2-11.3,7.7c-29.3,17.5-79.5,33.4-119.9,40.8l79.8-8.8l-79.8,8.8c-0.6,0.1-1.2,0.2-1.8,0.3c-4,0.7-8.1,1.3-12.2,2
+	c0.7-0.3,1.4-0.7,2.1-1l-273,29.9C806.6,935,806.1,936,805.6,937z"/>
+<linearGradient id="SVGID_7_" gradientUnits="userSpaceOnUse" x1="-7554.8232" y1="-2132.0981" x2="-4642.5034" y2="-2132.0981" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0.3233" style="stop-color:#9E2064"/>
+	<stop  offset="0.6302" style="stop-color:#C92037"/>
+	<stop  offset="0.7514" style="stop-color:#CD2335"/>
+	<stop  offset="1" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_7_)" d="M1112.9,385.1c-24.3,37.3-50.8,79.6-79.4,127.5c-1.5,2.5-3,5.1-4.5,7.6
+	c-24.6,41.5-50.8,87.1-78.3,137c-23.8,43.1-48.5,89.3-74.3,139c-22.4,43.3-45.6,89.2-69.4,137.8l273-29.9
+	c79.5-36.6,115.1-69.7,149.6-117.6c9.2-13.2,18.4-27,27.5-41.3c28-43.8,55.6-92,80.1-139.9c23.7-46.3,44.7-92.2,60.7-133.5
+	c10.2-26.3,18.4-50.8,24.1-72.3c5-19,8.9-36.9,11.9-54.1C1327.9,363.5,1197.6,380.9,1112.9,385.1z"/>
+<path fill="#BE202E" d="M936.5,1326.1c-5.5,1-11,1.8-16.6,2.6l0,0C925.5,1328,931,1327.1,936.5,1326.1z"/>
+<path opacity="0.35" fill="#BE202E" d="M936.5,1326.1c-5.5,1-11,1.8-16.6,2.6l0,0C925.5,1328,931,1327.1,936.5,1326.1z"/>
+<linearGradient id="SVGID_8_" gradientUnits="userSpaceOnUse" x1="-7374.1626" y1="-2027.484" x2="-4461.8433" y2="-2027.484" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0.3233" style="stop-color:#9E2064"/>
+	<stop  offset="0.6302" style="stop-color:#C92037"/>
+	<stop  offset="0.7514" style="stop-color:#CD2335"/>
+	<stop  offset="1" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_8_)" d="M936.5,1326.1c-5.5,1-11,1.8-16.6,2.6l0,0C925.5,1328,931,1327.1,936.5,1326.1z"/>
+<path fill="#BE202E" d="M980,1186.8c1.8-0.2,3.5-0.5,5.2-0.8C983.5,1186.3,981.8,1186.6,980,1186.8L980,1186.8z"/>
+<path opacity="0.35" fill="#BE202E" d="M980,1186.8c1.8-0.2,3.5-0.5,5.2-0.8C983.5,1186.3,981.8,1186.6,980,1186.8L980,1186.8z"/>
+<linearGradient id="SVGID_9_" gradientUnits="userSpaceOnUse" x1="-7374.1626" y1="-2037.7417" x2="-4461.8433" y2="-2037.7417" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0.3233" style="stop-color:#9E2064"/>
+	<stop  offset="0.6302" style="stop-color:#C92037"/>
+	<stop  offset="0.7514" style="stop-color:#CD2335"/>
+	<stop  offset="1" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_9_)" d="M980,1186.8c1.8-0.2,3.5-0.5,5.2-0.8C983.5,1186.3,981.8,1186.6,980,1186.8L980,1186.8z"/>
+<path fill="#BE202E" d="M980.2,1186.7C980.2,1186.7,980.2,1186.7,980.2,1186.7L980.2,1186.7L980.2,1186.7L980.2,1186.7
+	C980.2,1186.7,980.2,1186.7,980.2,1186.7z"/>
+<path opacity="0.35" fill="#BE202E" d="M980.2,1186.7C980.2,1186.7,980.2,1186.7,980.2,1186.7L980.2,1186.7L980.2,1186.7
+	L980.2,1186.7C980.2,1186.7,980.2,1186.7,980.2,1186.7z"/>
+<linearGradient id="SVGID_10_" gradientUnits="userSpaceOnUse" x1="-5738.0635" y1="-2039.799" x2="-5094.3457" y2="-2039.799" gradientTransform="matrix(0.4226 -0.9063 0.9063 0.4226 5117.8774 -2859.9343)">
+	<stop  offset="0.3233" style="stop-color:#9E2064"/>
+	<stop  offset="0.6302" style="stop-color:#C92037"/>
+	<stop  offset="0.7514" style="stop-color:#CD2335"/>
+	<stop  offset="1" style="stop-color:#E97826"/>
+</linearGradient>
+<path fill="url(#SVGID_10_)" d="M980.2,1186.7C980.2,1186.7,980.2,1186.7,980.2,1186.7L980.2,1186.7L980.2,1186.7L980.2,1186.7
+	C980.2,1186.7,980.2,1186.7,980.2,1186.7z"/>
+</svg>
diff --git a/docs/static_site/src/index.html b/docs/static_site/src/index.html
index 4edbad7016..509be5d0ab 100644
--- a/docs/static_site/src/index.html
+++ b/docs/static_site/src/index.html
@@ -53,7 +53,7 @@ community:
 - title: GitHub
   text: Report bugs, request features, discuss issues, and more.
   icon: /assets/img/octocat.png
-  link: https://github.com/apache/incubator-mxnet
+  link: https://github.com/apache/mxnet
 - title: Discuss Forum
   text: Browse and join discussions on deep learning with MXNet and Gluon.
   icon: /assets/img/mxnet_m.png
diff --git a/docs/static_site/src/pages/api/api.html b/docs/static_site/src/pages/api/api.html
index 2218d93e9b..1ea26e84f6 100644
--- a/docs/static_site/src/pages/api/api.html
+++ b/docs/static_site/src/pages/api/api.html
@@ -52,7 +52,7 @@ docs:
 - title: Julia
   guide_link: /api/julia
   api_link: /api/julia/docs/api
-  tutorial_link: https://mxnet.incubator.apache.org/api/julia/docs/api/#tutorials
+  tutorial_link: https://mxnet.apache.org/api/julia/docs/api/#tutorials
   description:
   icon: /assets/img/julia_logo.svg
   tag: julia
diff --git a/docs/static_site/src/pages/api/architecture/exception_handling.md b/docs/static_site/src/pages/api/architecture/exception_handling.md
index 61674d678b..5633498947 100644
--- a/docs/static_site/src/pages/api/architecture/exception_handling.md
+++ b/docs/static_site/src/pages/api/architecture/exception_handling.md
@@ -42,7 +42,7 @@ to handle exceptions for the second case.
 ## Prerequisites
 
 To complete this tutorial, we need:
-- MXNet [7b24137](https://github.com/apache/incubator-mxnet/commit/7b24137ed45df605defa4ce72ec91554f6e445f0). See Instructions in [Setup and Installation](https://mxnet.io/get_started).
+- MXNet [7b24137](https://github.com/apache/mxnet/commit/7b24137ed45df605defa4ce72ec91554f6e445f0). See instructions in [Setup and Installation](https://mxnet.io/get_started).
 
 ## Exception Handling for Iterators
 
diff --git a/docs/static_site/src/pages/api/architecture/note_engine.md b/docs/static_site/src/pages/api/architecture/note_engine.md
index 63400b47e2..68b3689415 100644
--- a/docs/static_site/src/pages/api/architecture/note_engine.md
+++ b/docs/static_site/src/pages/api/architecture/note_engine.md
@@ -379,7 +379,7 @@ Allowing mutation mitigates these issues.
 
 
 ## Source Code of the Generic Dependency Engine
-[MXNet](https://github.com/apache/incubator-mxnet) provides an implementation
+[MXNet](https://github.com/apache/mxnet) provides an implementation
 of the generic dependency engine described in this page.
 We welcome your contributions.
 
diff --git a/docs/static_site/src/pages/api/architecture/program_model.md b/docs/static_site/src/pages/api/architecture/program_model.md
index 25090cb487..c0747adca7 100644
--- a/docs/static_site/src/pages/api/architecture/program_model.md
+++ b/docs/static_site/src/pages/api/architecture/program_model.md
@@ -619,7 +619,7 @@ to create more interesting and intelligent deep learning libraries.
 
 This document is part of our effort to provide [open-source system design notes](overview)
 for deep learning libraries. If you're interested in contributing to Apache MXNet or its
-documentation, [fork us on GitHub](http://github.com/apache/incubator-mxnet).
+documentation, [fork us on GitHub](http://github.com/apache/mxnet).
 
 ## Next Steps
 
diff --git a/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md b/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
index d0b38a0156..18bca8c46a 100644
--- a/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
+++ b/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
@@ -127,20 +127,20 @@ The multi threaded inference example (`multi_threaded_inference.cc`) involves th
 
 ### Step 1: Parse arguments and load input image into ndarray
 
-[https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L299-L341](multi_threaded_inference.cc#L299-L341)
+[https://github.com/apache/mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L299-L341](multi_threaded_inference.cc#L299-L341)
 
 The above code parses the arguments and loads the image file into an ndarray with a specific shape. A few things are set by default and are not configurable; for example, `static_alloc` and `static_shape` are set to true by default.
 
 
 ### Step 2: Prepare input data and load parameters, copying data to a specific context
 
-[https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L147-L205](multi_threaded_inference.cc#L147-L205)
+[https://github.com/apache/mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L147-L205](multi_threaded_inference.cc#L147-L205)
 
 The above code loads the parameters and copies the input data and parameters to the specified context.
 
 ### Step 3: Preparing arguments to pass to the CachedOp and calling C API to create cached op
 
-[https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L207-L233](multi_threaded_inference.cc#L207-233)
+[https://github.com/apache/mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L207-L233](multi_threaded_inference.cc#L207-233)
 
 The above code prepares `flag_key_cstrs` and `flag_val_cstrs` to be passed to the cached op.
 The C API call is made with `MXCreateCachedOpEX`. This will lead to the creation of a thread safe cached
@@ -150,7 +150,7 @@ true. When this is set to false, it will invoke CachedOp instead of CachedOpThre
 
 ### Step 4: Prepare lambda function which will run in spawned threads
 
-[https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L248-L262](multi_threaded_inference.cc#L248-262)
+[https://github.com/apache/mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L248-L262](multi_threaded_inference.cc#L248-262)
 
 The above code creates a lambda function that takes the thread number as its argument.
 If `random_sleep` is set, the thread sleeps for a random duration between 0 and 5 seconds.
@@ -159,14 +159,14 @@ When this is set to false, it will invoke CachedOp instead of CachedOpThreadSafe
 
 ### Step 5: Spawn multiple threads and wait for all threads to complete
 
-[https://github.com/anirudh2290/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L264-L276](multi_threaded_inference.cc#L264-L276)
+[https://github.com/apache/mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L264-L276](multi_threaded_inference.cc#L264-L276)
 
 Spawns multiple threads, then joins them and waits for all ops to complete.
 An alternative is to wait on the output ndarray inside each thread and remove the WaitAll after the join.
 
 ### Step 6: Post process data to obtain inference results and cleanup
 
-[https://github.com/apache/incubator-/mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L286-L293](multi_threaded_inference.cc#L286-293)
+[https://github.com/apache/mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L286-L293](multi_threaded_inference.cc#L286-293)
 
 The above code outputs results for different threads and cleans up the thread safe cached op.
 
@@ -196,4 +196,4 @@ the CPP frontend to run multi-threaded inference as of today.
 ## Future Work
 
 Future work includes increasing model coverage and addressing most of the limitations mentioned under Current Limitations, except for the training use case.
-For more updates, please subscribe to discussion activity on RFC: https://github.com/apache/incubator-mxnet/issues/16431.
+For more updates, please subscribe to discussion activity on RFC: https://github.com/apache/mxnet/issues/16431.
diff --git a/docs/static_site/src/pages/api/cpp/docs/tutorials/mxnet_cpp_inference_tutorial.md b/docs/static_site/src/pages/api/cpp/docs/tutorials/mxnet_cpp_inference_tutorial.md
index dcc96d4547..9eb8bf01cb 100644
--- a/docs/static_site/src/pages/api/cpp/docs/tutorials/mxnet_cpp_inference_tutorial.md
+++ b/docs/static_site/src/pages/api/cpp/docs/tutorials/mxnet_cpp_inference_tutorial.md
@@ -29,7 +29,7 @@ tag: cpp
 ## Overview
 MXNet provides various useful tools and interfaces for deploying your model for inference. For example, you can use [MXNet Model Server](https://github.com/awslabs/mxnet-model-server) to start a service and host your trained model easily.
 Besides that, you can also use MXNet's different language APIs to integrate your model with your existing service. We provide [Python](/api/python/docs/api/), [Java](/api/java/docs/api/#package), [Scala](/api/scala/docs/api), and [C++](/api/cpp/docs/api/) APIs.
-We will focus on the MXNet C++ API. We have slightly modified the code in [C++ Inference Example](https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example/inference) for our use case.
+We will focus on the MXNet C++ API. We have slightly modified the code in [C++ Inference Example](https://github.com/apache/mxnet/tree/master/cpp-package/example/inference) for our use case.
 
 ## Prerequisites
 
@@ -53,7 +53,7 @@ After you complete [the previous tutorial](/api/python/docs/tutorials/getting-st
 
 
 Now we need to write the C++ code to load them and run prediction on a test image.
-The full code is available in the [C++ Inference Example](https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example/inference), we will walk you through it and point out the necessary changes to make for our use case.
+The full code is available in the [C++ Inference Example](https://github.com/apache/mxnet/tree/master/cpp-package/example/inference); we will walk you through it and point out the necessary changes to make for our use case.
 
 
 
@@ -106,7 +106,7 @@ class Predictor {
 
 ### Load the model, synset file, and normalization values
 
-In the Predictor constructor, you need to provide paths to saved json and param files. After that, add the following methods `LoadModel` and `LoadParameters` to load the network and its parameters. This part is the same as [the example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp).
+In the Predictor constructor, you need to provide paths to saved json and param files. After that, add the following methods `LoadModel` and `LoadParameters` to load the network and its parameters. This part is the same as [the example](https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp).
 
 Next, we need to load the synset file and the normalization values. We have made the following change since our synset file contains flower names and we use both mean and standard deviation for image normalization.
 
@@ -190,7 +190,7 @@ NDArray Predictor::LoadInputImage(const std::string& image_file) {
 
 ### Predict the image
 
-Finally, let's run the inference. It's basically using MXNet executor to do a forward pass. To run predictions on multiple images, you can load the images in a list of NDArrays and run prediction in batches. Note that the Predictor class may not be thread safe. Calling it in multi-threaded environments was not tested. To utilize multi-threaded prediction, you need to use the C predict API. Please follow the [C predict example](https://github.com/apache/incubator-mxnet/tree/master/example [...]
+Finally, let's run the inference. It's basically using MXNet executor to do a forward pass. To run predictions on multiple images, you can load the images in a list of NDArrays and run prediction in batches. Note that the Predictor class may not be thread safe. Calling it in multi-threaded environments was not tested. To utilize multi-threaded prediction, you need to use the C predict API. Please follow the [C predict example](https://github.com/apache/mxnet/tree/master/example/image-cla [...]
 
 An additional step is to normalize the image NDArray values to `(0, 1)` and apply the mean and standard deviation we just loaded.
 
@@ -249,14 +249,14 @@ void Predictor::PredictImage(const std::string& image_file) {
 
 ### Compile and run the inference code
 
-You can find the [full code for the inference example](https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example/inference) in the `cpp-package` folder of the project
-, and to compile it use this [Makefile](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/Makefile).
+You can find the [full code for the inference example](https://github.com/apache/mxnet/tree/master/cpp-package/example/inference) in the `cpp-package` folder of the project
+, and to compile it use this [Makefile](https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/Makefile).
 
 Make a copy of the example code, rename it to `flower_inference`, and apply the changes we mentioned above. Now you will be able to compile and run inference: run `make all`, and once this is complete, run inference with the following parameters. Remember to set your `LD_LIBRARY_PATH` to point to the MXNet library if you have not already done so.
 
 ```bash
 make all
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH=:path/to/incubator-mxnet/lib
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:path/to/mxnet/lib
 ./flower_inference --symbol flower-recognition-symbol.json --params flower-recognition-0040.params --synset synset.txt --mean mean_std_224.nd --image ./data/test/lotus/image_01832.jpg
 ```
 
@@ -280,13 +280,13 @@ Then it will predict your image:
 ## What's next
 
 Now you can explore more ways to run inference and deploy your models:
-1. [Java Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
-2. [Scala Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer)
+1. [Java Inference examples](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
+2. [Scala Inference examples](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer)
 3. [ONNX model inference examples](/api/python/docs/tutorials/packages/onnx/inference_on_onnx_model.html)
 4. [MXNet Model Server Examples](https://github.com/awslabs/mxnet-model-server/tree/master/examples)
 
 ## References
 
 1. [Gluon end to end tutorial](/api/python/docs/tutorials/getting-started/gluon_from_experiment_to_deployment.html)
-2. [Gluon C++ inference example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/)
-3. [Gluon C++ package](https://github.com/apache/incubator-mxnet/tree/master/cpp-package)
+2. [Gluon C++ inference example](https://github.com/apache/mxnet/blob/master/cpp-package/example/inference/)
+3. [Gluon C++ package](https://github.com/apache/mxnet/tree/master/cpp-package)
diff --git a/docs/static_site/src/pages/api/cpp/index.md b/docs/static_site/src/pages/api/cpp/index.md
index 3aff76b853..447dca0d22 100644
--- a/docs/static_site/src/pages/api/cpp/index.md
+++ b/docs/static_site/src/pages/api/cpp/index.md
@@ -37,13 +37,13 @@ The cpp-package directory contains the implementation of C++ API. As mentioned a
 1.  Building the MXNet C++ package requires building MXNet from source.
 2.  Clone the MXNet GitHub repository **recursively** to ensure the code in submodules is available for building MXNet.
 	```
-	git clone --recursive https://github.com/apache/incubator-mxnet mxnet
+	git clone --recursive https://github.com/apache/mxnet
 	```
 
 3.  Install the [prerequisites](<https://mxnet.apache.org/get_started/build_from_source#prerequisites>), desired [BLAS libraries](<https://mxnet.apache.org/get_started/build_from_source#blas-library>) and optional [OpenCV, CUDA, and cuDNN](<https://mxnet.apache.org/get_started/build_from_source#optional>) for building MXNet from source.
-4.  There is a configuration file for make, [make/config.mk](<https://github.com/apache/incubator-mxnet/blob/master/make/config.mk>) that contains all the compilation options. You can edit this file and set the appropriate options prior to running the **make** command.
+4.  There is a configuration file for make, [make/config.mk](<https://github.com/apache/mxnet/blob/master/make/config.mk>) that contains all the compilation options. You can edit this file and set the appropriate options prior to running the **make** command.
 5.  Please refer to [platform specific build instructions](<https://mxnet.apache.org/get_started/build_from_source#build-instructions-by-operating-system>) and available [build configurations](https://mxnet.apache.org/get_started/build_from_source#build-configurations) for more details.
-5.  For enabling the build of C++ Package, set the **USE\_CPP\_PACKAGE = 1** in [make/config.mk](<https://github.com/apache/incubator-mxnet/blob/master/make/config.mk>). Optionally, the compilation flag can also be specified on **make** command line as follows.
+6.  To enable building the C++ package, set **USE\_CPP\_PACKAGE = 1** in [make/config.mk](<https://github.com/apache/mxnet/blob/master/make/config.mk>). Optionally, the compilation flag can also be specified on the **make** command line as follows.
 	```
 	make -j USE_CPP_PACKAGE=1
 	```
@@ -53,7 +53,7 @@ The cpp-package directory contains the implementation of C++ API. As mentioned a
 In order to consume the C++ API, please follow the steps below.
 
 1. Ensure that the MXNet shared library is built from source with **USE\_CPP\_PACKAGE = 1**.
-2. Include the [MxNetCpp.h](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/include/mxnet-cpp/MxNetCpp.h>) in the program that is going to consume MXNet C++ API.
+2. Include the [MxNetCpp.h](<https://github.com/apache/mxnet/blob/master/cpp-package/include/mxnet-cpp/MxNetCpp.h>) in the program that is going to consume MXNet C++ API.
 	```c++
 	#include <mxnet-cpp/MxNetCpp.h>
 	```
diff --git a/docs/static_site/src/pages/api/developer_guide/1_github_contribution_and_PR_verification_tips.md b/docs/static_site/src/pages/api/developer_guide/1_github_contribution_and_PR_verification_tips.md
index 93cc916f7b..0ec2bf1e5a 100644
--- a/docs/static_site/src/pages/api/developer_guide/1_github_contribution_and_PR_verification_tips.md
+++ b/docs/static_site/src/pages/api/developer_guide/1_github_contribution_and_PR_verification_tips.md
@@ -29,12 +29,12 @@ Use this page for general git workflow tips.
 
 It is recommended that you fork the MXNet repo and then set the original repo as an upstream remote.
 
-Fork [https://github.com/apache/incubator-mxnet](https://github.com/apache/incubator-mxnet) then:
+Fork [https://github.com/apache/mxnet](https://github.com/apache/mxnet) then:
 
 ```
-git clone --recursive https://github.com/your_username/incubator-mxnet
+git clone --recursive https://github.com/your_username/mxnet
 cd mxnet
-git remote add upstream https://github.com/apache/incubator-mxnet
+git remote add upstream https://github.com/apache/mxnet
 ```
 
 Once `upstream` has been added, create a branch for your contribution.
diff --git a/docs/static_site/src/pages/api/developer_guide/exception_handing_and_custom_error_types.md b/docs/static_site/src/pages/api/developer_guide/exception_handing_and_custom_error_types.md
index 21542b81ae..dbc661e97f 100644
--- a/docs/static_site/src/pages/api/developer_guide/exception_handing_and_custom_error_types.md
+++ b/docs/static_site/src/pages/api/developer_guide/exception_handing_and_custom_error_types.md
@@ -88,7 +88,7 @@ Note that as of writing this document, the following Python error types are supp
 * `IndexError`
 * `NotImplementedError`
 
-Check [this](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/error.py) resource for more details
+Check [this](https://github.com/apache/mxnet/blob/master/python/mxnet/error.py) resource for more details
 about the Python error types that MXNet supports.
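
As a minimal illustrative sketch (not part of this commit), a backend error can be caught as one of the supported Python types listed above; here an out-of-bounds NDArray index surfaces as a plain `IndexError`:

```python
import mxnet as mx

a = mx.nd.zeros((2, 2))
try:
    b = a[5]  # out-of-bounds index: the error surfaces as a standard Python IndexError
except IndexError as err:
    print("caught:", err)
```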
 
 ## How to register a custom error type
diff --git a/docs/static_site/src/pages/api/faq/add_op_in_backend.md b/docs/static_site/src/pages/api/faq/add_op_in_backend.md
index 7595467575..8333ea9503 100644
--- a/docs/static_site/src/pages/api/faq/add_op_in_backend.md
+++ b/docs/static_site/src/pages/api/faq/add_op_in_backend.md
@@ -713,9 +713,9 @@ using nnvm. Congratulations! You now know how to add operators.
 We welcome your contributions to MXNet.
 
 **Note**: Source code in the tutorial can be found in
-[quadratic_op-inl.h](https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/quadratic_op-inl.h),
-[quadratic_op.cc](https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/quadratic_op.cc),
-[quadratic_op.cu](https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/quadratic_op.cu),
+[quadratic_op-inl.h](https://github.com/apache/mxnet/blob/master/src/operator/contrib/quadratic_op-inl.h),
+[quadratic_op.cc](https://github.com/apache/mxnet/blob/master/src/operator/contrib/quadratic_op.cc),
+[quadratic_op.cu](https://github.com/apache/mxnet/blob/master/src/operator/contrib/quadratic_op.cu),
 and
-[test_operator.py](https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_operator.py#L6514).
+[test_operator.py](https://github.com/apache/mxnet/blob/master/tests/python/unittest/test_operator.py#L6514).
 
diff --git a/docs/static_site/src/pages/api/faq/cloud.md b/docs/static_site/src/pages/api/faq/cloud.md
index 2a8c01f3e3..a32727f1ce 100644
--- a/docs/static_site/src/pages/api/faq/cloud.md
+++ b/docs/static_site/src/pages/api/faq/cloud.md
@@ -68,7 +68,7 @@ unzip mnist.zip && s3cmd put t*-ubyte s3://dmlc/mnist/
 ### Use Pre-installed EC2 GPU Instance
 The [Deep Learning AMI](https://aws.amazon.com/marketplace/pp/B01M0AXXQB?qid=1475211685369&sr=0-1&ref_=srh_res_product_title) is an Amazon Linux image
 supported and maintained by Amazon Web Services for use on Amazon Elastic Compute Cloud (Amazon EC2).
-It contains [MXNet-v0.9.3 tag](https://github.com/apache/incubator-mxnet) and the necessary components to get going with deep learning,
+It contains [MXNet-v0.9.3 tag](https://github.com/apache/mxnet) and the necessary components to get going with deep learning,
 including Nvidia drivers, CUDA, cuDNN, Anaconda, Python2 and Python3.   
 The AMI IDs are the following:
 
diff --git a/docs/static_site/src/pages/api/faq/distributed_training.md b/docs/static_site/src/pages/api/faq/distributed_training.md
index 622ace60f7..434e58dbf0 100644
--- a/docs/static_site/src/pages/api/faq/distributed_training.md
+++ b/docs/static_site/src/pages/api/faq/distributed_training.md
@@ -95,7 +95,7 @@ Some iterators in MXNet that support this feature are [mxnet.io.MNISTIterator](/
 If you are using a different iterator, you can look at how the above iterators implement this.
 We can use the kvstore object to get the number of workers (`kv.num_workers`) and the rank of the current worker (`kv.rank`).
 These can be passed as arguments to the iterator.
-You can look at [example/gluon/image_classification.py](https://github.com/apache/incubator-mxnet/blob/master/example/gluon/image_classification.py)
+You can look at [example/gluon/image_classification.py](https://github.com/apache/mxnet/blob/master/example/gluon/image_classification.py)
 to see an example usage.
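
As a hedged sketch of this pattern (the iterator and file names are illustrative, not taken from the example script), a data iterator can be sharded across workers via `num_parts` and `part_index`:

```python
import mxnet as mx

kv = mx.kvstore.create('dist_sync')  # assumes a distributed job, e.g. launched via launch.py
train_iter = mx.io.MNISTIter(
    image='train-images-idx3-ubyte',
    label='train-labels-idx1-ubyte',
    batch_size=128,
    num_parts=kv.num_workers,  # total number of shards
    part_index=kv.rank)        # this worker reads only its own shard
```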
 
 ### Updating weights
@@ -182,15 +182,15 @@ sudo mkdir efs && sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,
 Tip: You might find it helpful to store large datasets on S3 for easy access from all machines in the cluster. Refer to [Using data from S3 for training]({{'/api/faq/s3_integration'|relative_url}}) for more information.
 
 ### Using Launch.py
-MXNet provides a script [tools/launch.py](https://github.com/apache/incubator-mxnet/blob/master/tools/launch.py) to make it easy to launch distributed training on a cluster with `ssh`, `mpi`, `sge` or `yarn`.
+MXNet provides a script [tools/launch.py](https://github.com/apache/mxnet/blob/master/tools/launch.py) to make it easy to launch distributed training on a cluster with `ssh`, `mpi`, `sge` or `yarn`.
 You can fetch this script by cloning the mxnet repository.
 
 ```
-git clone --recursive https://github.com/apache/incubator-mxnet
+git clone --recursive https://github.com/apache/mxnet
 ```
 
 #### Example
-Let us consider training a VGG11 model on the CIFAR10 dataset using [example/gluon/image_classification.py](https://github.com/apache/incubator-mxnet/blob/master/tools/launch.py).
+Let us consider training a VGG11 model on the CIFAR10 dataset using [example/gluon/image_classification.py](https://github.com/apache/mxnet/blob/master/example/gluon/image_classification.py).
 ```
 cd example/gluon/
 ```
diff --git a/docs/static_site/src/pages/api/faq/env_var.md b/docs/static_site/src/pages/api/faq/env_var.md
index 1f5debd9d0..9692157d20 100644
--- a/docs/static_site/src/pages/api/faq/env_var.md
+++ b/docs/static_site/src/pages/api/faq/env_var.md
@@ -342,7 +342,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
 * MXNET_SUBGRAPH_BACKEND
   - Values: String ```(default="MKLDNN")``` if MKLDNN is available, otherwise ```(default="")```
   - This variable controls the subgraph partitioning in MXNet.
-  - This variable is used to perform MKL-DNN FP32 operator fusion and quantization. Please refer to the [MKL-DNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for how this variable is used and the list of fusion passes.
+  - This variable is used to perform MKL-DNN FP32 operator fusion and quantization. Please refer to the [MKL-DNN operator list](https://github.com/apache/mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for how this variable is used and the list of fusion passes.
   - Set ```MXNET_SUBGRAPH_BACKEND=NONE``` to disable subgraph backend.
 
 * MXNET_SAFE_ACCUMULATION
@@ -394,9 +394,9 @@ Settings for controlling OMP tuning
    -            0=disable all
    -            1=enable all
    -            float32, float16, float32=list of types to enable, and disable those not listed
-   - refer : https://github.com/apache/incubator-mxnet/blob/master/src/operator/operator_tune-inl.h#L444
+   - refer : https://github.com/apache/mxnet/blob/master/src/operator/operator_tune-inl.h#L444
 
 - Set ```MXNET_USE_NUM_CORES_OPERATOR_TUNING``` to define the number of cores to be used by the operator tuning code.
   - This reduces operator tuning overhead when there are multiple instances of mxnet running on the system and we know that
     each instance will use only a portion of the cores available on the system.
-  - refer: https://github.com/apache/incubator-mxnet/pull/13602
+  - refer: https://github.com/apache/mxnet/pull/13602
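
As a small illustrative sketch (not part of this commit), these variables are read when MXNet initializes, so they must be set in the environment, or in Python before the import:

```python
import os

# must be set before `import mxnet`, since the variables are read at initialization
os.environ['MXNET_SUBGRAPH_BACKEND'] = 'NONE'            # disable the subgraph backend
os.environ['MXNET_USE_NUM_CORES_OPERATOR_TUNING'] = '4'  # assume 4 cores per mxnet instance

import mxnet as mx
print(mx.nd.ones((2, 2)) * 2)
```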
diff --git a/docs/static_site/src/pages/api/faq/float16.md b/docs/static_site/src/pages/api/faq/float16.md
index 8a6d413449..f637cd0826 100644
--- a/docs/static_site/src/pages/api/faq/float16.md
+++ b/docs/static_site/src/pages/api/faq/float16.md
@@ -67,7 +67,7 @@ If you are using images and DataLoader, you can also use a [Cast transform](/api
 optimizer = mx.optimizer.create('sgd', multi_precision=True, lr=0.01)
 ```
 
-You can play around with mixed precision using the image classification [example](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/train_imagenet.py). We suggest using the Caltech101 dataset option in that example and using a ResNet50V1 network so you can quickly see the performance improvement and how the accuracy is unaffected. Here's the starter command to run this example.
+You can play around with mixed precision using the image classification [example](https://github.com/apache/mxnet/blob/master/example/image-classification/train_imagenet.py). We suggest using the Caltech101 dataset option in that example and using a ResNet50V1 network so you can quickly see the performance improvement and how the accuracy is unaffected. Here's the starter command to run this example.
 
 ```bash
 python image_classification.py --model resnet50_v1 --dataset caltech101 --gpus 0 --num-worker 30 --dtype float16
@@ -116,7 +116,7 @@ Training a network in float16 with the Symbolic API involves the following steps
 optimizer = mx.optimizer.create('sgd', multi_precision=True, lr=0.01)
 ```
 
-For a full example, please refer to [resnet.py](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/symbols/resnet.py) file on GitHub. A small, relevant excerpt from that file is presented below.
+For a full example, please refer to the [resnet.py](https://github.com/apache/mxnet/blob/master/example/image-classification/symbols/resnet.py) file on GitHub. A small, relevant excerpt from that file is presented below.
 
 ```python
 data = mx.sym.Variable(name="data")
@@ -133,7 +133,7 @@ if dtype == 'float16':
 output = mx.sym.SoftmaxOutput(data=net_out, name='softmax')
 ```
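
Since the excerpt above is truncated by the diff context, here is a self-contained sketch of the same cast pattern (the fully-connected layer and its size are illustrative, not taken from resnet.py):

```python
import mxnet as mx

dtype = 'float16'
data = mx.sym.Variable(name="data")
if dtype == 'float16':
    data = mx.sym.Cast(data=data, dtype='float16')       # cast the input to float16
net_out = mx.sym.FullyConnected(data=data, num_hidden=10, name='fc1')
if dtype == 'float16':
    net_out = mx.sym.Cast(data=net_out, dtype='float32') # cast back to float32 before the loss
output = mx.sym.SoftmaxOutput(data=net_out, name='softmax')
```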
 
-If you would like to train ResNet50 model on ImageNet using float16 precision, you can find the full script [here](https://github.com/apache/incubator-mxnet/blob/master/docs/static_site/src/pages/api/faq/float16.md)
+If you would like to train ResNet50 model on ImageNet using float16 precision, you can find the full script [here](https://github.com/apache/mxnet/blob/master/docs/static_site/src/pages/api/faq/float16.md)
 
 If you don't have the ImageNet dataset at your disposal, you can still run the script above using synthetic float16 data with the following command:
 
@@ -141,13 +141,13 @@ If you don't have ImageNet dataset at your disposal, you can still run the scrip
 python train_imagenet.py --network resnet-v1 --num-layers 50 --benchmark 1 --gpus 0 --batch-size 256 --dtype float16
 ```
 
-There's a similar example for float16 fine tuning [here](https://github.com/apache/incubator-mxnet/tree/master/example/image-classification/fine-tune.py) of selected models: Inception v3, Inception v4, ResNetV1, ResNet50, ResNext or VGG. The command below shows how to use that script to fine-tune a Resnet50 model trained on Imagenet for the Caltech 256 dataset using float16.
+There's a similar example [here](https://github.com/apache/mxnet/tree/master/example/image-classification/fine-tune.py) for float16 fine-tuning of selected models: Inception v3, Inception v4, ResNetV1, ResNet50, ResNext or VGG. The command below shows how to use that script to fine-tune a Resnet50 model trained on Imagenet for the Caltech 256 dataset using float16.
 
 ```bash
 python fine-tune.py --network resnet --num-layers 50 --pretrained-model imagenet1k-resnet-50 --data-train ~/.mxnet/dataset/caltech-256/caltech256-train.rec --data-val ~/data/caltech-256/caltech256-val.rec --num-examples 15420 --num-classes 256 --gpus 0 --batch-size 64 --dtype float16
 ```
 
-If you don't have the `Caltech256` dataset, you can download it using the script below, and convert it into .rec file format using [im2rec utility file](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py)
+If you don't have the `Caltech256` dataset, you can download it using the script below, and convert it into the .rec file format using the [im2rec utility](https://github.com/apache/mxnet/blob/master/tools/im2rec.py).
 
 ```python
 import os
diff --git a/docs/static_site/src/pages/api/faq/gradient_compression.md b/docs/static_site/src/pages/api/faq/gradient_compression.md
index e2b47c646a..6219fc554f 100644
--- a/docs/static_site/src/pages/api/faq/gradient_compression.md
+++ b/docs/static_site/src/pages/api/faq/gradient_compression.md
@@ -102,7 +102,7 @@ Gradient compression is a run-time configuration parameter to be enabled during
 ```python
 trainer = gluon.Trainer(..., compression_params={'type':'2bit', 'threshold':0.5})
 ```
-A reference `gluon` implementation with a gradient compression option can be found in the [train.py script from a word-level language modeling RNN example](https://github.com/apache/incubator-mxnet/blob/master/example/gluon/word_language_model/train.py).
+A reference `gluon` implementation with a gradient compression option can be found in the [train.py script from a word-level language modeling RNN example](https://github.com/apache/mxnet/blob/master/example/gluon/word_language_model/train.py).
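
As a self-contained sketch of the Gluon call above (the toy network is illustrative, and compression only takes effect with a distributed kvstore):

```python
import mxnet as mx
from mxnet import gluon

net = gluon.nn.Dense(10)  # a toy network, just to have parameters to train
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01},
                        compression_params={'type': '2bit', 'threshold': 0.5})
```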
 
 **Module API**:
 
@@ -110,7 +110,7 @@ A reference `gluon` implementation with a gradient compression option can be fou
 mod = mx.mod.Module(..., compression_params={'type':'2bit', 'threshold':0.5})
 ```
 
-A `module` example is provided with [this guide for setting up MXNet with distributed training](/api/faq/distributed_training). It comes with the option of turning on gradient compression as an argument to the [train_mnist.py script](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/train_mnist.py).
+A `module` example is provided with [this guide for setting up MXNet with distributed training](/api/faq/distributed_training). It comes with the option of turning on gradient compression as an argument to the [train_mnist.py script](https://github.com/apache/mxnet/blob/master/example/image-classification/train_mnist.py).
 
 ### Configuration Details
 
diff --git a/docs/static_site/src/pages/api/faq/large_tensor_support.md b/docs/static_site/src/pages/api/faq/large_tensor_support.md
index ab251a78fb..748e4a6b95 100644
--- a/docs/static_site/src/pages/api/faq/large_tensor_support.md
+++ b/docs/static_site/src/pages/api/faq/large_tensor_support.md
@@ -148,7 +148,7 @@ Not supported:
 
 
 ## Other known Issues:
-Randint operator is flaky: https://github.com/apache/incubator-mxnet/issues/16172
+Randint operator is flaky: https://github.com/apache/mxnet/issues/16172
 dgemm operations using BLAS libraries currently don’t support int64.
 linspace() is not supported.
 
@@ -162,7 +162,7 @@ texec.reshape(allow_up_sizing=True, **new_shape)
 
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
-  File "/home/ubuntu/incubator-mxnet/python/mxnet/executor.py", line 449, in reshape
+  File "/home/ubuntu/mxnet/python/mxnet/executor.py", line 449, in reshape
     py_array('i', provided_arg_shape_data)),
 OverflowError: signed integer is greater than maximum
 ```
@@ -179,7 +179,7 @@ texec.reshape(allow_up_sizing=True, **new_shape)
 
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
-  File "/home/ubuntu/incubator-mxnet/python/mxnet/executor.py", line 449, in reshape
+  File "/home/ubuntu/mxnet/python/mxnet/executor.py", line 449, in reshape
     py_array('i', provided_arg_shape_data)),
 OverflowError: signed integer is greater than maximum
 ```
diff --git a/docs/static_site/src/pages/api/faq/perf.md b/docs/static_site/src/pages/api/faq/perf.md
index 28c70a8b64..0c29d233d9 100644
--- a/docs/static_site/src/pages/api/faq/perf.md
+++ b/docs/static_site/src/pages/api/faq/perf.md
@@ -268,7 +268,7 @@ To reduce the communication cost, we can consider:
 - Exploring different `--kv-store` options.
 - Increasing the batch size to improve the computation to communication ratio.
 
-Finally, MXNet is integrated with other distributed training frameworks, including [horovod](https://github.com/apache/incubator-mxnet/tree/master/example/distributed_training-horovod) and [BytePS](https://github.com/bytedance/byteps#use-byteps-in-your-code).
+Finally, MXNet is integrated with other distributed training frameworks, including [horovod](https://github.com/apache/mxnet/tree/master/example/distributed_training-horovod) and [BytePS](https://github.com/bytedance/byteps#use-byteps-in-your-code).
 
 ## Input Data
 
diff --git a/docs/static_site/src/pages/api/java/docs/tutorials/mxnet_java_on_intellij.md b/docs/static_site/src/pages/api/java/docs/tutorials/mxnet_java_on_intellij.md
index 866b696a9c..c242453bc4 100644
--- a/docs/static_site/src/pages/api/java/docs/tutorials/mxnet_java_on_intellij.md
+++ b/docs/static_site/src/pages/api/java/docs/tutorials/mxnet_java_on_intellij.md
@@ -131,8 +131,8 @@ Click "Import Changes" in this prompt.
 **Step 5.** Build the project:
 - To build the project, from the menu choose Build, and then choose Build Project.
 
-**Step 6.** Navigate to the App.java class in the project and paste the code in `main` method from HelloWorld.java from [Java Demo project](https://github.com/apache/incubator-mxnet/tree/master/scala-package/mxnet-demo/java-demo/src/main/java/mxnet/HelloWorld.java) on MXNet repository, overwriting the original hello world code.
-You can also grab the entire [Java Demo project](https://github.com/apache/incubator-mxnet/tree/master/scala-package/mxnet-demo/java-demo) and run it by following the instructions on the [README](https://github.com/apache/incubator-mxnet/blob/master/scala-package/mxnet-demo/java-demo/README.md).
+**Step 6.** Navigate to the App.java class in the project and paste the code in `main` method from HelloWorld.java from [Java Demo project](https://github.com/apache/mxnet/tree/master/scala-package/mxnet-demo/java-demo/src/main/java/mxnet/HelloWorld.java) on MXNet repository, overwriting the original hello world code.
+You can also grab the entire [Java Demo project](https://github.com/apache/mxnet/tree/master/scala-package/mxnet-demo/java-demo) and run it by following the instructions on the [README](https://github.com/apache/mxnet/blob/master/scala-package/mxnet-demo/java-demo/README.md).
 
 **Step 7.** Now run the App.java.
 
@@ -184,5 +184,5 @@ java -cp "target/javaMXNet-1.0-SNAPSHOT.jar:target/dependency/*" mxnet.App
 For more information about MXNet Java resources, see the following:
 
 * [Java Inference API]({{'/api/java'|relative_url}})
-* [Java Inference Examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
+* [Java Inference Examples](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
 * [MXNet Tutorials Index]({{'/api'|relative_url}})
diff --git a/docs/static_site/src/pages/api/java/docs/tutorials/ssd_inference.md b/docs/static_site/src/pages/api/java/docs/tutorials/ssd_inference.md
index 0767e50eda..557f5a28b3 100644
--- a/docs/static_site/src/pages/api/java/docs/tutorials/ssd_inference.md
+++ b/docs/static_site/src/pages/api/java/docs/tutorials/ssd_inference.md
@@ -26,7 +26,7 @@ tag: java
 
 This tutorial shows how to use MXNet Java Inference APIs to run inference on a pre-trained Single Shot Detector (SSD) Model.
 
-The SSD model is trained on the Pascal VOC 2012 dataset. The network is a SSD model built on Resnet50 as the base network to extract image features. The model is trained to detect the following entities (classes): ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']. For more details about the model, you can refer to the [MXNet SSD example](https: [...]
+The SSD model is trained on the Pascal VOC 2012 dataset. The network is an SSD model built on Resnet50 as the base network to extract image features. The model is trained to detect the following entities (classes): ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']. For more details about the model, you can refer to the [MXNet SSD example](https: [...]
 
 ## Prerequisites
 
@@ -52,7 +52,7 @@ wget https://cloud.githubusercontent.com/assets/3307514/20012567/cbb60336-a27d-1
 wget https://cloud.githubusercontent.com/assets/3307514/20012563/cbb41382-a27d-11e6-92a9-18dab4fd1ad3.jpg -O person.jpg
 ```
 
-Alternately, you can get the entire SSD Model artifacts + images in one single script from the MXNet Repository by running [get_ssd_data.sh script](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/scripts/infer/objectdetector/get_ssd_data.sh)
+Alternatively, you can get all of the SSD model artifacts and images in one step from the MXNet repository by running the [get_ssd_data.sh script](https://github.com/apache/mxnet/blob/master/scala-package/examples/scripts/infer/objectdetector/get_ssd_data.sh).
 
 ## Time to code!
 1\. Following the [MXNet Java Setup on IntelliJ IDEA](mxnet_java_on_intellij) tutorial, in the same project `JavaMXNet`, create a new empty class called `ObjectDetectionTutorial.java`.
@@ -206,5 +206,5 @@ The results returned by the inference call translate into the regions in the ima
 For more information about MXNet Java resources, see the following:
 
 * [Java Inference API]({{'/api/java'|relative_url}})
-* [Java Inference Examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
+* [Java Inference Examples](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
 * [MXNet Tutorials Index]({{'/api'|relative_url}})
diff --git a/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md b/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md
index b5d6b8fd32..5086178213 100644
--- a/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md
+++ b/docs/static_site/src/pages/api/r/docs/tutorials/symbol.md
@@ -128,7 +128,7 @@ In the example, *net* is used as a function to apply to an existing symbol
 
 ## Training a Neural Net
 
-The [model API](https://github.com/apache/incubator-mxnet/blob/master/R-package/R/model.R) is a thin wrapper around the symbolic executors to support neural net training.
+The [model API](https://github.com/apache/mxnet/blob/master/R-package/R/model.R) is a thin wrapper around the symbolic executors to support neural net training.
 
 We encourage you to read [Symbolic Configuration and Execution in Pictures for the Python package](/api/python/symbol_in_pictures/symbol_in_pictures.md) for a detailed explanation of concepts in pictures.
 
diff --git a/docs/static_site/src/pages/api/scala/docs/tutorials/char_lstm.md b/docs/static_site/src/pages/api/scala/docs/tutorials/char_lstm.md
index b4b106d5fc..801ba72912 100644
--- a/docs/static_site/src/pages/api/scala/docs/tutorials/char_lstm.md
+++ b/docs/static_site/src/pages/api/scala/docs/tutorials/char_lstm.md
@@ -40,10 +40,10 @@ There are three ways to use this tutorial:
 
 2) Reuse the code by making changes to relevant parameters and running it from the command line.
 
-3) [Run the source code directly](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn) by running the [provided scripts](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/scripts/rnn).
+3) [Run the source code directly](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn) by running the [provided scripts](https://github.com/apache/mxnet/tree/master/scala-package/examples/scripts/rnn).
 
 To run the scripts:
-- Build and train the model with the [run_train_charrnn.sh script](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/scripts/rnn/run_train_charrnn.sh). Edit the script as follows:
+- Build and train the model with the [run_train_charrnn.sh script](https://github.com/apache/mxnet/tree/master/scala-package/examples/scripts/rnn/run_train_charrnn.sh). Edit the script as follows:
 
 Edit the CLASS_PATH variable in the script to include your operating system-specific folder (e.g., linux-x86_64-cpu/linux-x86_64-gpu/osx-x86_64-cpu) in the path. Run the script with the following command:
 
@@ -198,7 +198,7 @@ Now, create a multi-layer LSTM model.
 To create the model:
 
 1) Load the helper files (`Lstm.scala`, `BucketIo.scala` and `RnnModel.scala`).
-`Lstm.scala` contains the definition of the LSTM cell. `BucketIo.scala` creates a sentence iterator. `RnnModel.scala` is used for model inference. The helper files are available on the [MXNet site](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn).
+`Lstm.scala` contains the definition of the LSTM cell. `BucketIo.scala` creates a sentence iterator. `RnnModel.scala` is used for model inference. The helper files are available on the [MXNet site](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn).
 To load them, at the Scala command prompt type:
 
 ```scala
diff --git a/docs/static_site/src/pages/api/scala/docs/tutorials/infer.md b/docs/static_site/src/pages/api/scala/docs/tutorials/infer.md
index 6e1e11935b..2f5df2c75b 100644
--- a/docs/static_site/src/pages/api/scala/docs/tutorials/infer.md
+++ b/docs/static_site/src/pages/api/scala/docs/tutorials/infer.md
@@ -33,7 +33,7 @@ To use the Infer API you must first install the MXNet Scala package. Instruction
 * [Installing the MXNet Scala for Linux]({{'get_started/ubuntu_setup.html#install-the-mxnet-package-for-scala'|relative_url}})
 
 ## Inference
-The Scala Infer API includes both single image and batch modes. Here is an example of running inference on a single image by using the `ImageClassifier` class. A complete [image classification example](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala) using ResNet-152 is provided in the [Scala package's example folder](https://github.com/apache/incubator-mxnet/tree/maste [...]
+The Scala Infer API includes both single image and batch modes. Here is an example of running inference on a single image by using the `ImageClassifier` class. A complete [image classification example](https://github.com/apache/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala) using ResNet-152 is provided in the [Scala package's example folder](https://github.com/apache/mxnet/tree/master/scala-package/exam [...]
 
 ```scala
 def runInferenceOnSingleImage(modelPathPrefix: String, inputImagePath: String,
@@ -61,5 +61,5 @@ IndexedSeq[IndexedSeq[(String, Float)]] = {
 
 ## Related Resources
 * [Infer API Scaladocs]({{'/api/scala/docs/api/#org.apache.mxnet.infer.package'|relative_url}})
-* [Single Shot Detector Inference Example](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector)
-* [Image Classification Example](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier)
+* [Single Shot Detector Inference Example](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector)
+* [Image Classification Example](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier)
diff --git a/docs/static_site/src/pages/api/scala/docs/tutorials/io.md b/docs/static_site/src/pages/api/scala/docs/tutorials/io.md
index 6f661fdcc2..9df1466cf1 100644
--- a/docs/static_site/src/pages/api/scala/docs/tutorials/io.md
+++ b/docs/static_site/src/pages/api/scala/docs/tutorials/io.md
@@ -107,7 +107,7 @@ First, explicitly specify the kind of data (MNIST, ImageRecord, etc.) to fetch.
 ## How to Get Data
 
 
-We provide [scripts](https://github.com/apache/incubator-mxnet/tree/master/scala-package/core/scripts) to download MNIST data and CIFAR10 ImageRecord data. If you want to create your own dataset, we recommend using the Image RecordIO data format.
+We provide [scripts](https://github.com/apache/mxnet/tree/master/scala-package/core/scripts) to download MNIST data and CIFAR10 ImageRecord data. If you want to create your own dataset, we recommend using the Image RecordIO data format.
 
 ## Create a Dataset Using RecordIO
 
@@ -117,7 +117,7 @@ RecordIO implements a file format for a sequence of records. We recommend storin
 * Packing data together allows continuous reading on the disk.
 * RecordIO has a simple way to partition, which simplifies distributed settings. We provide an example later.
 
-We provide the [im2rec tool](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.cc) so you can create an Image RecordIO dataset by yourself. The following walkthrough shows you how.
+We provide the [im2rec tool](https://github.com/apache/mxnet/blob/master/tools/im2rec.cc) so you can create an Image RecordIO dataset by yourself. The following walkthrough shows you how.
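
Besides `im2rec`, RecordIO files can be written and read directly; a small sketch using the Python package's `mx.recordio` module (the file name is arbitrary):

```python
import mxnet as mx

# write two records, then read them back in order
writer = mx.recordio.MXRecordIO('sample.rec', 'w')
for payload in (b'first record', b'second record'):
    writer.write(payload)
writer.close()

reader = mx.recordio.MXRecordIO('sample.rec', 'r')
print(reader.read())  # b'first record'
reader.close()
```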
 
 ### Prerequisites
 Download the data. You don't need to resize the images manually. You can use `im2rec` to resize them automatically. For details, see "Extension: Using Multiple Labels for a Single Image," later in this topic.
@@ -185,4 +185,4 @@ val dataiter = IO.ImageRecordIter(Map(
 
 ## Next Steps
 * [NDArray API](ndarray) for vector/matrix/tensor operations
-* [KVStore API](kvstore) for multi-GPU and multi-host distributed training
\ No newline at end of file
+* [KVStore API](kvstore) for multi-GPU and multi-host distributed training
diff --git a/docs/static_site/src/pages/api/scala/docs/tutorials/mxnet_scala_on_intellij.md b/docs/static_site/src/pages/api/scala/docs/tutorials/mxnet_scala_on_intellij.md
index f67ceaf33c..55eec6a2c9 100644
--- a/docs/static_site/src/pages/api/scala/docs/tutorials/mxnet_scala_on_intellij.md
+++ b/docs/static_site/src/pages/api/scala/docs/tutorials/mxnet_scala_on_intellij.md
@@ -71,7 +71,7 @@ brew install opencv
 **Step 1.**: Download the MXNet source.
 
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
+git clone --recursive https://github.com/apache/mxnet.git mxnet
 cd mxnet
 ```
 
@@ -410,14 +410,14 @@ If you chose to "Build from Source" when following the [install instructions]({{
       <groupId>org.apache.mxnet</groupId>
       <artifactId>mxnet-core_${scala.version}-${platform}-sources</artifactId>
       <scope>system</scope>
-      <systemPath>/PathToMXNetSource/incubator-mxnet/scala-package/assembly/osx-x86_64-cpu/target/mxnet-full_${scala.version}-osx-x86_64-cpu-1.9.1-SNAPSHOT-sources.jar</systemPath>
+      <systemPath>/PathToMXNetSource/mxnet/scala-package/assembly/osx-x86_64-cpu/target/mxnet-full_${scala.version}-osx-x86_64-cpu-1.9.1-SNAPSHOT-sources.jar</systemPath>
     </dependency>
 
     <dependency>
       <groupId>org.apache.mxnet</groupId>
       <artifactId>mxnet-full_${scala.version}-${platform}</artifactId>
       <scope>system</scope>
-      <systemPath>/PathToMXNetSource/incubator-mxnet/scala-package/assembly/osx-x86_64-cpu/target/mxnet-full_${scala.version}-osx-x86_64-cpu-1.9.1-SNAPSHOT.jar</systemPath>
+      <systemPath>/PathToMXNetSource/mxnet/scala-package/assembly/osx-x86_64-cpu/target/mxnet-full_${scala.version}-osx-x86_64-cpu-1.9.1-SNAPSHOT.jar</systemPath>
     </dependency>
 ```
 
@@ -451,5 +451,5 @@ The build generates a new jar file in the `target` folder called `scalaInference
 For more information about MXNet Scala resources, see the following:
 
 * [Scala API]({{'/api/scala'|relative_url}})
-* [Scala Examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/)
+* [Scala Examples](https://github.com/apache/mxnet/tree/master/scala-package/examples/)
 * [MXNet Tutorials Index]({{'/api'|relative_url}})
diff --git a/docs/static_site/src/pages/api/scala/index.md b/docs/static_site/src/pages/api/scala/index.md
index b825f68f77..cc83d297a0 100644
--- a/docs/static_site/src/pages/api/scala/index.md
+++ b/docs/static_site/src/pages/api/scala/index.md
@@ -55,5 +55,5 @@ You can perform tensor or matrix computation in pure Scala:
 
 ## Related Resources
 
-* [Neural Style in Scala on MXNet](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/neuralstyle/NeuralStyle.scala)
-* [More Scala Examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples)
+* [Neural Style in Scala on MXNet](https://github.com/apache/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/neuralstyle/NeuralStyle.scala)
+* [More Scala Examples](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples)
diff --git a/docs/static_site/src/pages/community/contribute.md b/docs/static_site/src/pages/community/contribute.md
index 613f388fca..e7141ff358 100644
--- a/docs/static_site/src/pages/community/contribute.md
+++ b/docs/static_site/src/pages/community/contribute.md
@@ -25,7 +25,7 @@ permalink: /community/contribute
 
 # Contributing to Apache MXNet
 
-Apache MXNet (incubating) is a community led, open source deep learning project. We welcome new members and look forward to your contributions. Here you will find how to get started and links to detailed information on Apache MXNet best practices and processes.
+Apache MXNet is a community-led, open source deep learning project. We welcome new members and look forward to your contributions. Here you will find how to get started and links to detailed information on Apache MXNet best practices and processes.
 
 
 ## Getting Started
@@ -126,10 +126,10 @@ The process for setting up MXNet for development depends on several factors, and
 
 ## Your First Contribution
 
-**Step 1**: Visit the project on GitHub and review the [calls for contribution](https://github.com/apache/incubator-mxnet/labels/Call%20for%20Contribution). Click the GitHub button:
-<a class="github-button" href="https://github.com/apache/incubator-mxnet/labels/Call%20for%20Contribution" data-size="large" data-show-count="true" aria-label="Issue apache/incubator-mxnet on GitHub">Call for Contribution</a>
+**Step 1**: Visit the project on GitHub and review the [calls for contribution](https://github.com/apache/mxnet/labels/Call%20for%20Contribution). Click the GitHub button:
+<a class="github-button" href="https://github.com/apache/mxnet/labels/Call%20for%20Contribution" data-size="large" data-show-count="true" aria-label="Issue apache/mxnet on GitHub">Call for Contribution</a>
 
-**Step 2**: Tackle a smaller issue or improve documentation to get familiar with the process. As part of your pull request, add your name to [CONTRIBUTORS.md](https://github.com/apache/incubator-mxnet/blob/master/CONTRIBUTORS.md).
+**Step 2**: Tackle a smaller issue or improve documentation to get familiar with the process. As part of your pull request, add your name to [CONTRIBUTORS.md](https://github.com/apache/mxnet/blob/master/CONTRIBUTORS.md).
 
 **Step 3**: Follow the [formal pull request (PR) process](#formal-pull-request-process) to submit your PR for review.
 
@@ -140,14 +140,14 @@ The process for setting up MXNet for development depends on several factors, and
 
 Please let us know if you experienced a problem with MXNet. Please provide detailed information about the problem you encountered and, if possible, add a description that helps to reproduce the problem. You have two alternatives for filing a bug report:
 <p><a href="http://issues.apache.org/jira/browse/MXNet"><i class="fas fa-bug"></i> JIRA</a></p>
-<p><a href="https://github.com/apache/incubator-mxnet/issues"><i class="fab fa-github"></i> GitHub</a></p>
+<p><a href="https://github.com/apache/mxnet/issues"><i class="fab fa-github"></i> GitHub</a></p>
 
 
 ## Minor Fixes
 
 If you have found an issue and would like to contribute a bug fix or documentation update, follow these guidelines:
 
-* If it is trivial, just create a [pull request](https://github.com/apache/incubator-mxnet/pulls).
+* If it is trivial, just create a [pull request](https://github.com/apache/mxnet/pulls).
 * If it is non-trivial, you should follow the [formal pull request process](#formal-pull-request-process) described in the next section.
 
 
@@ -157,7 +157,7 @@ Any new features or improvements that are non-trivial should follow the complete
 
 1. [Review the contribution standards](https://cwiki.apache.org/confluence/display/MXNET/Development+Process) for your type of submission.
 1. [Create a JIRA issue](https://issues.apache.org/jira/secure/CreateIssue!default.jspa).
-1. [Create the PR on GitHub](https://github.com/apache/incubator-mxnet/pulls) and add the JIRA issue ID to the PR's title.
+1. [Create the PR on GitHub](https://github.com/apache/mxnet/pulls) and add the JIRA issue ID to the PR's title.
 
 Further details on this process can be found on the [Wiki](https://cwiki.apache.org/confluence/display/MXNET/Development).
 
@@ -176,7 +176,7 @@ Detailed information is also required, if you plan to contribute the improvement
 
 Apache MXNet is evolving fast. To see what's next and what the community is currently working on, check out the Roadmap issues on GitHub and the JIRA Boards:
 
-<a class="github-button" href="https://github.com/apache/incubator-mxnet/labels/Roadmap" data-size="large" data-show-count="true" aria-label="Issue apache/incubator-mxnet on GitHub">Roadmap</a>
+<a class="github-button" href="https://github.com/apache/mxnet/labels/Roadmap" data-size="large" data-show-count="true" aria-label="Issue apache/mxnet on GitHub">Roadmap</a>
 <br/>
 [JIRA boards](https://issues.apache.org/jira/secure/RapidBoard.jspa) <i class="fas fa-lock"></i>
 
@@ -194,4 +194,4 @@ Apache MXNet is evolving fast. To see what's next and what the community is curr
 ## Contributors
 Apache MXNet is developed and used by a group of active community members. Contribute to improving it!
 
-<i class="fab fa-github"></i> [Contributors and Committers](https://github.com/apache/incubator-mxnet/blob/master/CONTRIBUTORS.md)
+<i class="fab fa-github"></i> [Contributors and Committers](https://github.com/apache/mxnet/blob/master/CONTRIBUTORS.md)
diff --git a/docs/static_site/src/pages/ecosystem.html b/docs/static_site/src/pages/ecosystem.html
index 2fc4b9a15b..914c65a607 100644
--- a/docs/static_site/src/pages/ecosystem.html
+++ b/docs/static_site/src/pages/ecosystem.html
@@ -53,7 +53,7 @@ ecosystem_other:
   text: Model Server for Apache MXNet (MMS) is a flexible and easy to use tool for serving deep learning models exported from Apache MXNet or the Open Neural Network Exchange (ONNX).
   link: https://github.com/awslabs/mxnet-model-server
 - title: Sockeye
-  text: Sockeye is a sequence-to-sequence framework for Neural Machine Translation based on Apache MXNet Incubating. It implements state-of-the-art encoder-decoder architectures.
+  text: Sockeye is a sequence-to-sequence framework for Neural Machine Translation based on Apache MXNet. It implements state-of-the-art encoder-decoder architectures.
   link: https://awslabs.github.io/sockeye/
 - title: TensorLy
   text: TensorLy is a high level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple.
diff --git a/docs/static_site/src/pages/get_started/build_from_source.md b/docs/static_site/src/pages/get_started/build_from_source.md
index e6340f52f2..0e871b8346 100644
--- a/docs/static_site/src/pages/get_started/build_from_source.md
+++ b/docs/static_site/src/pages/get_started/build_from_source.md
@@ -44,12 +44,12 @@ page](/api) for an overview of all supported languages and their APIs.
 
 ## Obtaining the source code
 
-To obtain the source code of the latest Apache MXNet (incubating) release,
+To obtain the source code of the latest Apache MXNet release,
 please access the [Download page](/get_started/download) and download the
 `.tar.gz` source archive corresponding to the release you wish to build.
 
 Developers can also obtain the unreleased development code from the git
-repository via `git clone --recursive https://github.com/apache/incubator-mxnet mxnet`
+repository via `git clone --recursive https://github.com/apache/mxnet`
 
 Building an MXNet 1.x release from source requires a C++11-compliant compiler.
 
@@ -65,7 +65,7 @@ dependencies of MXNet.
 
 ### Debian Linux derivatives (Debian, Ubuntu, ...)
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet mxnet
+git clone --recursive https://github.com/apache/mxnet
 cd mxnet
 sudo apt-get update
 sudo apt-get install -y build-essential git ninja-build ccache libopenblas-dev libopencv-dev cmake
@@ -197,11 +197,11 @@ the Python package manager `pip` with `python3 -m pip install --user --upgrade
 `~/.local/bin/cmake` or directly as `cmake`.
 
 Please see the [`cmake configuration
-files`](https://github.com/apache/incubator-mxnet/tree/v1.x/config) files for
+files`](https://github.com/apache/mxnet/tree/v1.x/config) for
 instructions on how to configure and build MXNet with cmake.
 
 Up to the MXNet 1.6 release, please follow the instructions in the
-[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/v1.x/make/config.mk)
+[`make/config.mk`](https://github.com/apache/mxnet/blob/v1.x/make/config.mk)
 file on how to configure and compile MXNet. This method is supported on all 1.x
 releases.
 
@@ -267,7 +267,7 @@ Please also see the [MXNet C++ API](/api/cpp) page.
 ### Install the MXNet Package for Clojure
 
 Refer to the [Clojure setup
-guide](https://github.com/apache/incubator-mxnet/tree/master/contrib/clojure-package).
+guide](https://github.com/apache/mxnet/tree/master/contrib/clojure-package).
 
 Please also see the [MXNet Clojure API](/api/clojure) page.
 
@@ -279,8 +279,8 @@ To use the Julia binding you need to set the `MXNET_HOME` and `LD_LIBRARY_PATH`
 environment variables. For example,
 
 ```bash
-export MXNET_HOME=$HOME/incubator-mxnet
-export LD_LIBRARY_PATH=$HOME/incubator-mxnet/build:$LD_LIBRARY_PATH
+export MXNET_HOME=$HOME/mxnet
+export LD_LIBRARY_PATH=$HOME/mxnet/build:$LD_LIBRARY_PATH
 ```
 
 Then install MXNet with Julia:
diff --git a/docs/static_site/src/pages/get_started/download.md b/docs/static_site/src/pages/get_started/download.md
index e2712a781b..f4854f20b2 100644
--- a/docs/static_site/src/pages/get_started/download.md
+++ b/docs/static_site/src/pages/get_started/download.md
@@ -31,7 +31,7 @@ Policy](http://www.apache.org/legal/release-policy.html).
 
 If you would like to actively participate in the Apache MXNet development, you are
 encouraged to contribute to our development version on
-[GitHub](https://github.com/apache/incubator-mxnet).
+[GitHub](https://github.com/apache/mxnet).
 
 | Version | Source                                                                                                      | PGP                                                                                                             | SHA                                                                                                                |
 |---------|-------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|
diff --git a/docs/static_site/src/pages/get_started/index.html b/docs/static_site/src/pages/get_started/index.html
index 29eb0b05fc..f7011a2b84 100644
--- a/docs/static_site/src/pages/get_started/index.html
+++ b/docs/static_site/src/pages/get_started/index.html
@@ -25,7 +25,7 @@ permalink: /get_started/index.html
 </div>
 <div class="get-started-from-source">
 <div class="wrapper">
-    <h2>Build and install Apache MXNet (incubating) from source</h2>
+    <h2>Build and install Apache MXNet from source</h2>
     <p>
         To build and install Apache MXNet from the official Apache Software Foundation
         signed source code please follow our <a href="/get_started/build_from_source">Building From Source</a> guide.
diff --git a/docs/static_site/src/pages/get_started/java_setup.md b/docs/static_site/src/pages/get_started/java_setup.md
index f994cdb342..4d147a98b4 100644
--- a/docs/static_site/src/pages/get_started/java_setup.md
+++ b/docs/static_site/src/pages/get_started/java_setup.md
@@ -55,7 +55,7 @@ sudo apt-get install openjdk-8-jdk maven
 
 **Step 2.** Run the demo MXNet-Java project.
 
-Go to the [MXNet-Java demo project's README](https://github.com/apache/incubator-mxnet/tree/master/scala-package/mxnet-demo/java-demo) and follow the directions to test the MXNet-Java package installation.
+Go to the [MXNet-Java demo project's README](https://github.com/apache/mxnet/tree/master/scala-package/mxnet-demo/java-demo) and follow the directions to test the MXNet-Java package installation.
 
 #### Maven Repository
 
@@ -108,7 +108,7 @@ The previously mentioned setup with Maven is recommended. Otherwise, the followi
 |---|---|---|
 |macOS | [Shared Library for macOS](osx_setup.html#build-the-shared-library) | [Scala Package for macOS](osx_setup.html#install-the-mxnet-package-for-scala) |
 | Ubuntu | [Shared Library for Ubuntu](ubuntu_setup.html#installing-mxnet-on-ubuntu) | [Scala Package for Ubuntu](ubuntu_setup.html#install-the-mxnet-package-for-scala) |
-| Windows | <a class="github-button" href="https://github.com/apache/incubator-mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/incubator-mxnet on GitHub"> | <a class="github-button" href="https://github.com/apache/incubator-mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/incubator-mxnet on GitHub">Call for Contribution</a> |
+| Windows | <a class="github-button" href="https://github.com/apache/mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/mxnet on GitHub"> | <a class="github-button" href="https://github.com/apache/mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/mxnet on GitHub">Call for Contribution</a> |
 
 
 #### Build Java from an Existing MXNet Installation
@@ -124,7 +124,7 @@ This will install both the Java Inference API and the required MXNet-Scala packa
 
 Javadocs are generated as part of the docs build pipeline. You can find them published in the [Java API]({{'/api/java'|relative_url}}) section of the website or by going to the [scaladocs output]({{'/api/scala/docs/api/#org.apache.mxnet.package'|relative_url}}) directly.
 
-To build the docs yourself, follow the [developer build docs instructions](https://github.com/apache/incubator-mxnet/tree/master/docs/README.md).
+To build the docs yourself, follow the [developer build docs instructions](https://github.com/apache/mxnet/tree/master/docs/README.md).
 
 <hr>
 
diff --git a/docs/static_site/src/pages/get_started/jetson_setup.md b/docs/static_site/src/pages/get_started/jetson_setup.md
index 75ca197a63..a8cd947d68 100644
--- a/docs/static_site/src/pages/get_started/jetson_setup.md
+++ b/docs/static_site/src/pages/get_started/jetson_setup.md
@@ -70,12 +70,12 @@ These steps are optional, but some of the following instructions expect MXNet so
 Clone the MXNet source code repository using the following `git` command in your home directory:
 
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
+git clone --recursive https://github.com/apache/mxnet.git mxnet
 ```
 
 You can also checkout a particular branch of MXNet. For example, to install MXNet v1.6:
 ```bash
-git clone --recursive -b v1.6.x https://github.com/apache/incubator-mxnet.git mxnet
+git clone --recursive -b v1.6.x https://github.com/apache/mxnet.git mxnet
 ```
 
 Setup your environment variables for MXNet and CUDA in your `.profile` file in your home directory.
diff --git a/docs/static_site/src/pages/get_started/osx_setup.md b/docs/static_site/src/pages/get_started/osx_setup.md
index c29550c9cd..107207cf40 100644
--- a/docs/static_site/src/pages/get_started/osx_setup.md
+++ b/docs/static_site/src/pages/get_started/osx_setup.md
@@ -86,7 +86,7 @@ the configuration file described below.
 Clone the repository:
 
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
+git clone --recursive https://github.com/apache/mxnet.git mxnet
 cd mxnet
 cp config/darwin.cmake config.cmake
 ```
@@ -162,7 +162,7 @@ Refer to the [C++ Package setup guide](c_plus_plus).
 
 ### Install the MXNet Package for Clojure
 
-Refer to the [Clojure setup guide](https://github.com/apache/incubator-mxnet/tree/master/contrib/clojure-package).
+Refer to the [Clojure setup guide](https://github.com/apache/mxnet/tree/master/contrib/clojure-package).
 <hr>
 
 
diff --git a/docs/static_site/src/pages/get_started/scala_setup.md b/docs/static_site/src/pages/get_started/scala_setup.md
index bf5072363d..765a82563d 100644
--- a/docs/static_site/src/pages/get_started/scala_setup.md
+++ b/docs/static_site/src/pages/get_started/scala_setup.md
@@ -53,8 +53,8 @@ brew install maven
 These scripts will install Maven and its dependencies.
 
 ```bash
-wget https://raw.githubusercontent.com/apache/incubator-mxnet/master/ci/docker/install/ubuntu_core.sh
-wget https://raw.githubusercontent.com/apache/incubator-mxnet/master/ci/docker/install/ubuntu_scala.sh
+wget https://raw.githubusercontent.com/apache/mxnet/master/ci/docker/install/ubuntu_core.sh
+wget https://raw.githubusercontent.com/apache/mxnet/master/ci/docker/install/ubuntu_scala.sh
 chmod +x ubuntu_core.sh
 chmod +x ubuntu_scala.sh
 sudo ./ubuntu_core.sh
@@ -63,7 +63,7 @@ sudo ./ubuntu_scala.sh
 
 **Step 2.** Run the demo MXNet-Scala project.
 
-Go to the [MXNet-Scala demo project's README](https://github.com/apache/incubator-mxnet/tree/master/scala-package/mxnet-demo) and follow the directions to test the MXNet-Scala package installation.
+Go to the [MXNet-Scala demo project's README](https://github.com/apache/mxnet/tree/master/scala-package/mxnet-demo) and follow the directions to test the MXNet-Scala package installation.
 
 #### Maven Repository
 
@@ -111,7 +111,7 @@ The previously mentioned setup with Maven is recommended. Otherwise, the followi
 |---|---|---|
 |macOS | [Shared Library for macOS](osx_setup.html#build-the-shared-library) | [Scala Package for macOS](osx_setup.html#install-the-mxnet-package-for-scala) |
 | Ubuntu | [Shared Library for Ubuntu](ubuntu_setup.html#installing-mxnet-on-ubuntu) | [Scala Package for Ubuntu](ubuntu_setup.html#install-the-mxnet-package-for-scala) |
-| Windows | <a class="github-button" href="https://github.com/apache/incubator-mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/incubator-mxnet on GitHub"> | <a class="github-button" href="https://github.com/apache/incubator-mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/incubator-mxnet on GitHub">Call for Contribution</a> |
+| Windows | <a class="github-button" href="https://github.com/apache/mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/mxnet on GitHub"> | <a class="github-button" href="https://github.com/apache/mxnet/issues/10549" data-size="large" data-show-count="true" aria-label="Issue apache/mxnet on GitHub">Call for Contribution</a> |
 
 
 #### Build Scala from an Existing MXNet Installation
@@ -154,7 +154,7 @@ If you receive a "NumberFormatException" when running the interpreter, run `expo
 
 Scaladocs are generated as part of the docs build pipeline. You can find them published in the [Scala API]({{'/api/scala'|relative_url}}) section of the website or by going to the [scaladocs output]({{'/api/scala/docs/api/#org.apache.mxnet.package'|relative_url}}) directly.
 
-To build the docs yourself, follow the [developer build docs instructions](https://github.com/apache/incubator-mxnet/tree/master/docs/README.md).
+To build the docs yourself, follow the [developer build docs instructions](https://github.com/apache/mxnet/tree/master/docs/README.md).
 
 <hr>
 
diff --git a/docs/static_site/src/pages/get_started/ubuntu_setup.md b/docs/static_site/src/pages/get_started/ubuntu_setup.md
index 0c104fc033..1880173b2e 100644
--- a/docs/static_site/src/pages/get_started/ubuntu_setup.md
+++ b/docs/static_site/src/pages/get_started/ubuntu_setup.md
@@ -107,7 +107,7 @@ python3-pip`. After installing cmake with `pip3`, it is usually available at
 Clone the repository:
 
 ```bash
-git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
+git clone --recursive https://github.com/apache/mxnet.git mxnet
 cd mxnet
 cp config/linux.cmake config.cmake  # or config/linux_gpu.cmake for build with CUDA
 ```
@@ -192,7 +192,7 @@ Refer to the [C++ Package setup guide](c_plus_plus).
 
 ### Install the MXNet Package for Clojure
 
-Refer to the [Clojure setup guide](https://github.com/apache/incubator-mxnet/tree/master/contrib/clojure-package).
+Refer to the [Clojure setup guide](https://github.com/apache/mxnet/tree/master/contrib/clojure-package).
 <hr>
 
 
@@ -262,14 +262,14 @@ To use the Julia binding with an existing `libmxnet` installation, set the `MXNE
 MXNet source root. For example:
 
 ```bash
-export MXNET_HOME=$HOME/incubator-mxnet
+export MXNET_HOME=$HOME/mxnet
 ```
 
 Now set the `LD_LIBRARY_PATH` environment variable to where `libmxnet.so` is found. If you can't find it, you might
 have skipped the MXNet build step. Go back and [build MXNet](#build-the-shared-library) first. For example:
 
 ```bash
-export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+export LD_LIBRARY_PATH=$HOME/mxnet/lib:$LD_LIBRARY_PATH
 ```
 
 Verify the location of `libjemalloc.so` and set the `LD_PRELOAD` environment variable.
@@ -283,8 +283,8 @@ or `.bash_profile`.
 ```
 export PATH=$HOME/bin:$HOME/.local/bin:$HOME/julia/julia-1.0.3/bin:$PATH
 export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
-export MXNET_HOME=$HOME/incubator-mxnet
-export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+export MXNET_HOME=$HOME/mxnet
+export LD_LIBRARY_PATH=$HOME/mxnet/lib:$LD_LIBRARY_PATH
 export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
 ```
 
diff --git a/docs/static_site/src/pages/get_started/validate_mxnet.md b/docs/static_site/src/pages/get_started/validate_mxnet.md
index 392682acc6..86806463f3 100644
--- a/docs/static_site/src/pages/get_started/validate_mxnet.md
+++ b/docs/static_site/src/pages/get_started/validate_mxnet.md
@@ -154,4 +154,4 @@ You should see the following output:
 
 ### Scala
 
-Run the <a href="https://github.com/apache/incubator-mxnet/tree/master/scala-package/mxnet-demo">MXNet-Scala demo project</a> to validate your Maven package installation.
+Run the <a href="https://github.com/apache/mxnet/tree/master/scala-package/mxnet-demo">MXNet-Scala demo project</a> to validate your Maven package installation.
diff --git a/docs/static_site/src/pages/get_started/windows_setup.md b/docs/static_site/src/pages/get_started/windows_setup.md
index 693f0e23bf..9b1c8e7e9f 100644
--- a/docs/static_site/src/pages/get_started/windows_setup.md
+++ b/docs/static_site/src/pages/get_started/windows_setup.md
@@ -48,7 +48,7 @@ The following describes how to install with pip for computers with CPUs, Intel C
 * Python 2.7 or 3.6
 * pip
 
-<sup id="fn1">1. There are [known issues](https://github.com/apache/incubator-mxnet/issues?utf8=%E2%9C%93&q=is%3Aissue+windows7+label%3AWindows+) with Windows 7. <a href="#ref1" title="Return to source text.">↩</a></sup>
+<sup id="fn1">1. There are [known issues](https://github.com/apache/mxnet/issues?utf8=%E2%9C%93&q=is%3Aissue+windows7+label%3AWindows+) with Windows 7. <a href="#ref1" title="Return to source text.">↩</a></sup>
 
 ### Recommended System Requirements
 
@@ -100,7 +100,7 @@ When using supported NVIDIA GPU hardware, inference and training can be vastly f
 
 The following steps will set up MXNet with CUDA. cuDNN can be enabled only when building from source.
 1. Install [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/) or [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/).
-1. Download and install [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal). CUDA versions 9.2 or 9.0 are recommended. Some [issues with CUDA 9.1](https://github.com/apache/incubator-mxnet/labels/CUDA) have been identified in the past.
+1. Download and install [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal). CUDA versions 9.2 or 9.0 are recommended. Some [issues with CUDA 9.1](https://github.com/apache/mxnet/labels/CUDA) have been identified in the past.
 1. Download and install [NVIDIA_CUDA_DNN](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows)
 1. Install MXNet with CUDA support using pip:
 
@@ -147,7 +147,7 @@ You also have the option to install MXNet with MKL or MKL-DNN. In this case it i
 
 **Option 1: Build with Microsoft Visual Studio 2017 (VS2017)**
 
-To build and install MXNet yourself using [VS2017](https://www.visualstudio.com/downloads/), you need the following dependencies. You may try a newer version of a particular dependency, but please open a pull request or [issue](https://github.com/apache/incubator-mxnet/issues/new) to update this guide if a newer version is validated.
+To build and install MXNet yourself using [VS2017](https://www.visualstudio.com/downloads/), you need the following dependencies. You may try a newer version of a particular dependency, but please open a pull request or [issue](https://github.com/apache/mxnet/issues/new) to update this guide if a newer version is validated.
 
 1. Install or update VS2017.
     - If [VS2017](https://www.visualstudio.com/downloads/) is not already installed, download and install it. You can download and install the free community edition.
@@ -177,17 +177,17 @@ After you have installed all of the required dependencies, build the MXNet sourc
 2. Download the MXNet source code from GitHub using the following command:
 ```
 cd C:\
-git clone https://github.com/apache/incubator-mxnet.git --recursive
+git clone https://github.com/apache/mxnet.git --recursive
 ```
-3. Verify that the `DCUDNN_INCLUDE` and `DCUDNN_LIBRARY` environment variables are pointing to the `include` folder and `cudnn.lib` file of your CUDA installed location, and `C:\incubator-mxnet` is the location of the source code you just cloned in the previous step.
+3. Verify that the `DCUDNN_INCLUDE` and `DCUDNN_LIBRARY` environment variables point to the `include` folder and `cudnn.lib` file of your CUDA installation, and that `C:\mxnet` is the location of the source code you cloned in the previous step.
 4. Create a build directory and change into it, for example:
 ```
-mkdir C:\incubator-mxnet\build
-cd C:\incubator-mxnet\build
+mkdir C:\mxnet\build
+cd C:\mxnet\build
 ```
 5. Compile the MXNet source code with `cmake` using the following command:
 ```
-cmake -G "Visual Studio 15 2017 Win64" -T cuda=9.2,host=x64 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_LIST=Common -DCUDA_TOOLSET=9.2 -DCUDNN_INCLUDE=C:\cuda\include -DCUDNN_LIBRARY=C:\cuda\lib\x64\cudnn.lib "C:\incubator-mxnet"
+cmake -G "Visual Studio 15 2017 Win64" -T cuda=9.2,host=x64 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_LIST=Common -DCUDA_TOOLSET=9.2 -DCUDNN_INCLUDE=C:\cuda\include -DCUDNN_LIBRARY=C:\cuda\lib\x64\cudnn.lib "C:\mxnet"
 ```
 * Make sure you have set the environment variables correctly (OpenBLAS_HOME, OpenCV_DIR) and changed the Visual Studio 2017 toolset version to v14.11 before entering the above command.
 6. After CMake completes successfully, compile the MXNet source code using the following command:
@@ -198,7 +198,7 @@ msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount
 
 **Option 2: Build with Visual Studio 2015**
 
-To build and install MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), you need the following dependencies. You may try a newer version of a particular dependency, but please open a pull request or [issue](https://github.com/apache/incubator-mxnet/issues/new) to update this guide if a newer version is validated.
+To build and install MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), you need the following dependencies. You may try a newer version of a particular dependency, but please open a pull request or [issue](https://github.com/apache/mxnet/issues/new) to update this guide if a newer version is validated.
 
 1. If [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) is not already installed, download and install it; the free community edition is sufficient. At least Update 3 of Microsoft Visual Studio 2015 is required to build MXNet from source. Upgrade via its ```Tools -> Extensions and Updates... | Product Updates``` menu.
 2. Download and install [CMake](https://cmake.org/) if it is not already installed.
@@ -213,7 +213,7 @@ To build and install MXNet yourself using [Microsoft Visual Studio 2015](https:/
 
 After you have installed all of the required dependencies, build the MXNet source code:
 
-1. Download the MXNet source code from [GitHub](https://github.com/apache/incubator-mxnet) (make sure you also download third parties submodules e.g. ```git clone --recurse-submodules```).
+1. Download the MXNet source code from [GitHub](https://github.com/apache/mxnet) (make sure you also download the third-party submodules, e.g. ```git clone --recurse-submodules```).
 2. Use [CMake](https://cmake.org/) to create a Visual Studio solution in ```./build```.
 3. In Visual Studio, open the solution file,```.sln```, and compile it.
 These commands produce a library called ```mxnet.dll``` in the ```./build/Release/``` or ```./build/Debug``` folder.
@@ -293,7 +293,7 @@ For CPU-only package:
 
 1. Clone the MXNet github repo.
 ```sh
-git clone --recursive https://github.com/apache/incubator-mxnet
+git clone --recursive https://github.com/apache/mxnet
 ```
 The `--recursive` flag clones all the submodules used by MXNet. You will be editing the ```"/mxnet/R-package"``` folder.
 
@@ -331,7 +331,7 @@ These dlls can be found in `prebuildbase_win10_x64_vc14/3rdparty`, `mxnet_x64_vc
 
 7. Also make sure that Rtools is installed and the executable is added to your ```PATH``` in the environment variables.
 
-8. Temporary patch - im2rec currently results in crashes during the build. Remove the im2rec.h and im2rec.cc files in R-package/src/ from cloned repository and comment out the two im2rec lines in [R-package/src/mxnet.cc](https://github.com/apache/incubator-mxnet/blob/master/R-package/src/mxnet.cc) as shown below.
+8. Temporary patch - im2rec currently results in crashes during the build. Remove the im2rec.h and im2rec.cc files in R-package/src/ from the cloned repository and comment out the two im2rec lines in [R-package/src/mxnet.cc](https://github.com/apache/mxnet/blob/master/R-package/src/mxnet.cc) as shown below.
 ```
 #include "./kvstore.h"
 #include "./export.h"
@@ -403,7 +403,7 @@ Change cu92 to cu80, cu90 or cu91 based on your CUDA toolkit version. Currently,
 After you have installed above software, continue with the following steps to build MXNet-R:
 1. Clone the MXNet github repo.
 ```sh
-git clone --recursive https://github.com/apache/incubator-mxnet
+git clone --recursive https://github.com/apache/mxnet
 ```
 The `--recursive` flag clones all the submodules used by MXNet. You will be editing the ```"/mxnet/R-package"``` folder.
 2. Download prebuilt GPU-enabled MXNet libraries for Windows from https://github.com/yajiedesign/mxnet/releases. You will need `mxnet_x64_vc14_gpu_cuX.7z` and `prebuildbase_win10_x64_vc14.7z`, where X stands for your CUDA toolkit version.
@@ -434,7 +434,7 @@ These dlls can be found in `prebuildbase_win10_x64_vc14/3rdparty`, `mxnet_x64_vc
 ```
 6. Make sure that the R executable is added to your ```PATH``` in the environment variables. Running the ```where R``` command at the command prompt should return the location.
 7. Also make sure that Rtools is installed and the executable is added to your ```PATH``` in the environment variables.
-8. Temporary patch - im2rec currently results in crashes during the build. Remove the im2rec.h and im2rec.cc files in R-package/src/ from cloned repository and comment out the two im2rec lines in [R-package/src/mxnet.cc](https://github.com/apache/incubator-mxnet/blob/master/R-package/src/mxnet.cc) as shown below.
+8. Temporary patch - im2rec currently results in crashes during the build. Remove the im2rec.h and im2rec.cc files in R-package/src/ from the cloned repository and comment out the two im2rec lines in [R-package/src/mxnet.cc](https://github.com/apache/mxnet/blob/master/R-package/src/mxnet.cc) as shown below.
 ```bash
 #include "./kvstore.h"
 #include "./export.h"
diff --git a/example/README.md b/example/README.md
index b68afd5e3e..bdb9b2b57f 100644
--- a/example/README.md
+++ b/example/README.md
@@ -58,7 +58,7 @@ Do not forget to update the `docs/tutorials/index.md` for your tutorial to show
 
 #### Tutorial formatting
 
-The site expects the format to be markdown, so export your notebook as a .md via the Jupyter web interface menu (File > Download As > Markdown). Then, to enable the download notebook button in the web site's UI ([example](https://mxnet.apache.org/tutorials/python/linear-regression.html)), add the following as the last line of the file ([example](https://github.com/apache/incubator-mxnet/blame/master/docs/tutorials/python/linear-regression.md#L194)):
+The site expects the format to be markdown, so export your notebook as a .md via the Jupyter web interface menu (File > Download As > Markdown). Then, to enable the download notebook button in the web site's UI ([example](https://mxnet.apache.org/tutorials/python/linear-regression.html)), add the following as the last line of the file ([example](https://github.com/apache/mxnet/blame/master/docs/tutorials/python/linear-regression.md#L194)):
 
 ```
 <!-- INSERT SOURCE DOWNLOAD BUTTONS -->
@@ -85,7 +85,7 @@ If your tutorial depends on specific packages, simply add them to this provision
 ### <a name="language-binding-examples"></a>Languages Binding Examples
 ------------------
 * [MXNet C++ API](https://mxnet.apache.org/api/c++/index.html)
-   - [C++ examples](https://github.com/apache/incubator-mxnet/tree/master/example/image-classification/predict-cpp) - Example code for using C++ interface, including NDArray, symbolic layer and models.
+   - [C++ examples](https://github.com/apache/mxnet/tree/master/example/image-classification/predict-cpp) - Example code for using the C++ interface, including NDArray, symbolic layers, and models.
 * [MXNet Python API](https://mxnet.apache.org/api/python/index.html)
 * [MXNet Java API](https://mxnet.apache.org/api/java/index.html)
 * [MXNet Scala API](https://mxnet.apache.org/api/scala/index.html)
diff --git a/example/cnn_chinese_text_classification/README.md b/example/cnn_chinese_text_classification/README.md
index e28a0ec9ac..c780d0acae 100644
--- a/example/cnn_chinese_text_classification/README.md
+++ b/example/cnn_chinese_text_classification/README.md
@@ -17,7 +17,7 @@
 
 Implementing  CNN + Highway Network for Chinese Text Classification in MXNet
 ============
-Sentiment classification forked from [incubator-mxnet/cnn_text_classification/](https://github.com/apache/incubator-mxnet/tree/master/example/cnn_text_classification), i've implemented the [Highway Networks](https://arxiv.org/pdf/1505.00387.pdf) architecture.The final train model is CNN + Highway Network structure, and this version can achieve a best dev accuracy of 94.75% with the Chinese corpus.
+Sentiment classification forked from [mxnet/cnn_text_classification/](https://github.com/apache/mxnet/tree/master/example/cnn_text_classification); I've implemented the [Highway Networks](https://arxiv.org/pdf/1505.00387.pdf) architecture. The final model is a CNN + Highway Network structure, and this version achieves a best dev accuracy of 94.75% on the Chinese corpus.
 
 It is a slightly simplified implementation of Kim's [Convolutional Neural Networks for Sentence Classification](http://arxiv.org/abs/1408.5882) paper in MXNet.
 
diff --git a/example/ctc/README.md b/example/ctc/README.md
index dcdd6c5502..803b5e4e88 100644
--- a/example/ctc/README.md
+++ b/example/ctc/README.md
@@ -55,7 +55,7 @@ $ sudo make install
 ```
 
 #### Building MXNet from source with warp-ctc integration
-In order to build MXNet from source, you need to follow [instructions here](https://mxnet.apache.org/install/index.html). After choosing your system configuration, Python environment, and "Build from Source" options, before running `make` in step 4, you need to enable warp-ctc integration by uncommenting the following lines in `make/config.mk` in `incubator-mxnet` directory:
+In order to build MXNet from source, you need to follow the [instructions here](https://mxnet.apache.org/install/index.html). After choosing your system configuration, Python environment, and "Build from Source" options, before running `make` in step 4, you need to enable warp-ctc integration by uncommenting the following lines in `make/config.mk` in the `mxnet` directory:
 ```
 WARPCTC_PATH = $(HOME)/warp-ctc
 MXNET_PLUGINS += plugin/warpctc/warpctc.mk
diff --git a/example/ctc/lstm_ocr_train.py b/example/ctc/lstm_ocr_train.py
index 49d9531920..7790a4de37 100644
--- a/example/ctc/lstm_ocr_train.py
+++ b/example/ctc/lstm_ocr_train.py
@@ -68,7 +68,7 @@ def main():
         font_paths=get_fonts(args.font_path), h=hp.seq_length, w=30,
         num_digit_min=3, num_digit_max=4, num_processes=args.num_proc, max_queue_size=hp.batch_size * 2)
     try:
-        # Must call start() before any call to mxnet module (https://github.com/apache/incubator-mxnet/issues/9213)
+        # Must call start() before any call to mxnet module (https://github.com/apache/mxnet/issues/9213)
         mp_captcha.start()
 
         if args.gpu:
diff --git a/example/distributed_training/README.md b/example/distributed_training/README.md
index af25b9efd1..0610781b72 100644
--- a/example/distributed_training/README.md
+++ b/example/distributed_training/README.md
@@ -183,7 +183,7 @@ for batch in train_data:
 
 ## Final Step: Launching the distributed training
 
-Note that there are several processes that needs to be launched on multiple machines to do distributed training. One worker and one parameter server needs to be launched on each host. Scheduler needs to be launched on one of the hosts. While this can be done manually, MXNet provides the [`launch.py`](https://github.com/apache/incubator-mxnet/blob/master/tools/launch.py) tool to make this easy.
+Note that several processes need to be launched on multiple machines to do distributed training: one worker and one parameter server on each host, and a scheduler on one of the hosts. While this can be done manually, MXNet provides the [`launch.py`](https://github.com/apache/mxnet/blob/master/tools/launch.py) tool to make this easy.
 
 For example, the following command launches distributed training on two machines:
 
diff --git a/example/gluon/audio/urban_sounds/requirements.txt b/example/gluon/audio/urban_sounds/requirements.txt
index d885e0beec..d70d0558d3 100644
--- a/example/gluon/audio/urban_sounds/requirements.txt
+++ b/example/gluon/audio/urban_sounds/requirements.txt
@@ -1,2 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
 librosa>=0.6.2 # librosa is a library that is used to load the audio(wav) files and provides capabilities of feature extraction.
-argparse # used for parsing arguments
\ No newline at end of file
+argparse # used for parsing arguments
diff --git a/example/gluon/lipnet/requirements.txt b/example/gluon/lipnet/requirements.txt
index f1fcda31d9..93ce04af51 100644
--- a/example/gluon/lipnet/requirements.txt
+++ b/example/gluon/lipnet/requirements.txt
@@ -1,3 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
 dlib==19.15.0
 Pillow==4.1.0
 scipy==0.19.0
diff --git a/example/gluon/sn_gan/data.py b/example/gluon/sn_gan/data.py
index 782f74ffca..a83e0c898b 100644
--- a/example/gluon/sn_gan/data.py
+++ b/example/gluon/sn_gan/data.py
@@ -17,7 +17,7 @@
 
 # This example is inspired by https://github.com/jason71995/Keras-GAN-Library,
 # https://github.com/kazizzad/DCGAN-Gluon-MxNet/blob/master/MxnetDCGAN.ipynb
-# https://github.com/apache/incubator-mxnet/blob/master/example/gluon/dc_gan/dcgan.py
+# https://github.com/apache/mxnet/blob/master/example/gluon/dc_gan/dcgan.py
 
 import numpy as np
 
diff --git a/example/gluon/sn_gan/model.py b/example/gluon/sn_gan/model.py
index cfd7f93e8d..9a3c1ab68e 100644
--- a/example/gluon/sn_gan/model.py
+++ b/example/gluon/sn_gan/model.py
@@ -17,7 +17,7 @@
 
 # This example is inspired by https://github.com/jason71995/Keras-GAN-Library,
 # https://github.com/kazizzad/DCGAN-Gluon-MxNet/blob/master/MxnetDCGAN.ipynb
-# https://github.com/apache/incubator-mxnet/blob/master/example/gluon/dc_gan/dcgan.py
+# https://github.com/apache/mxnet/blob/master/example/gluon/dc_gan/dcgan.py
 
 import mxnet as mx
 from mxnet import nd
diff --git a/example/gluon/sn_gan/train.py b/example/gluon/sn_gan/train.py
index 46e44791ce..1ecd6b2683 100644
--- a/example/gluon/sn_gan/train.py
+++ b/example/gluon/sn_gan/train.py
@@ -17,7 +17,7 @@
 
 # This example is inspired by https://github.com/jason71995/Keras-GAN-Library,
 # https://github.com/kazizzad/DCGAN-Gluon-MxNet/blob/master/MxnetDCGAN.ipynb
-# https://github.com/apache/incubator-mxnet/blob/master/example/gluon/dc_gan/dcgan.py
+# https://github.com/apache/mxnet/blob/master/example/gluon/dc_gan/dcgan.py
 
 
 import os
diff --git a/example/gluon/sn_gan/utils.py b/example/gluon/sn_gan/utils.py
index 1a77a6e90e..6c00735733 100644
--- a/example/gluon/sn_gan/utils.py
+++ b/example/gluon/sn_gan/utils.py
@@ -17,7 +17,7 @@
 
 # This example is inspired by https://github.com/jason71995/Keras-GAN-Library,
 # https://github.com/kazizzad/DCGAN-Gluon-MxNet/blob/master/MxnetDCGAN.ipynb
-# https://github.com/apache/incubator-mxnet/blob/master/example/gluon/dc_gan/dcgan.py
+# https://github.com/apache/mxnet/blob/master/example/gluon/dc_gan/dcgan.py
 
 import math
 
diff --git a/example/image-classification/predict-cpp/CMakeLists.txt b/example/image-classification/predict-cpp/CMakeLists.txt
index 416f61e7c9..d619b3b7ab 100644
--- a/example/image-classification/predict-cpp/CMakeLists.txt
+++ b/example/image-classification/predict-cpp/CMakeLists.txt
@@ -1,3 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
 # Check OpenCV
 if(NOT USE_OPENCV OR NOT OpenCV_FOUND OR OpenCV_VERSION_MAJOR LESS 3)
   message(WARNING "\
diff --git a/example/model-parallel/README.md b/example/model-parallel/README.md
index 537562070a..537ce3cb83 100644
--- a/example/model-parallel/README.md
+++ b/example/model-parallel/README.md
@@ -17,4 +17,4 @@
 
 # Run parts of a model on different devices
 
-This folder contains the example [matrix_factorization](https://github.com/apache/incubator-mxnet/tree/master/example/model-parallel/matrix_factorization) that demonstrates the basic usage of `group2ctxs`. 
+This folder contains the example [matrix_factorization](https://github.com/apache/mxnet/tree/master/example/model-parallel/matrix_factorization) that demonstrates the basic usage of `group2ctxs`. 
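
For readers new to `group2ctxs`, here is a minimal sketch of the mechanism under placeholder devices and a toy symbol; the linked matrix_factorization example is the complete, tested version. Operators are tagged with a context group via `mx.AttrScope`, and the module maps each group name to a list of devices:

```python
# Sketch: assign parts of a symbol to named context groups, then map
# each group to devices via group2ctxs (placeholder devices shown).
import mxnet as mx

with mx.AttrScope(ctx_group='dev1'):
    data = mx.sym.Variable('data')
    hidden = mx.sym.FullyConnected(data, num_hidden=64)
with mx.AttrScope(ctx_group='dev2'):
    net = mx.sym.FullyConnected(hidden, num_hidden=10)

mod = mx.mod.Module(net, context=mx.cpu(),
                    group2ctxs={'dev1': [mx.cpu(0)], 'dev2': [mx.cpu(1)]})
```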
diff --git a/example/module/README.md b/example/module/README.md
index 24c917d5c8..36bbc92548 100644
--- a/example/module/README.md
+++ b/example/module/README.md
@@ -19,8 +19,8 @@
 
 This folder contains usage examples for the MXNet module.
 
-[mnist_mlp.py](https://github.com/apache/incubator-mxnet/blob/master/example/module/mnist_mlp.py): Trains a simple multilayer perceptron on the MNIST dataset
+[mnist_mlp.py](https://github.com/apache/mxnet/blob/master/example/module/mnist_mlp.py): Trains a simple multilayer perceptron on the MNIST dataset
 
-[python_loss](https://github.com/apache/incubator-mxnet/blob/master/example/module/python_loss.py): Usage example for PythonLossModule
+[python_loss](https://github.com/apache/mxnet/blob/master/example/module/python_loss.py): Usage example for PythonLossModule
 
-[sequential_module](https://github.com/apache/incubator-mxnet/blob/master/example/module/sequential_module.py): Usage example for SequentialModule
+[sequential_module](https://github.com/apache/mxnet/blob/master/example/module/sequential_module.py): Usage example for SequentialModule
diff --git a/example/multi_threaded_inference/README.md b/example/multi_threaded_inference/README.md
index 627cdb2293..2396348dbf 100644
--- a/example/multi_threaded_inference/README.md
+++ b/example/multi_threaded_inference/README.md
@@ -16,4 +16,4 @@
 <!--- under the License. -->
 
 
-Please refer to : https://github.com/apache/incubator-mxnet/blob/master/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md for detailed tutorial.
+Please refer to : https://github.com/apache/mxnet/blob/master/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md for detailed tutorial.
diff --git a/example/multi_threaded_inference/multi_threaded_inference.cc b/example/multi_threaded_inference/multi_threaded_inference.cc
index ebbe9b5d63..b40963f62d 100644
--- a/example/multi_threaded_inference/multi_threaded_inference.cc
+++ b/example/multi_threaded_inference/multi_threaded_inference.cc
@@ -39,7 +39,7 @@ const float DEFAULT_MEAN = 117.0;
 
 
 // Code to load image, PrintOutput results, helper functions for the same obtained from:
-// https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/predict-cpp/
+// https://github.com/apache/mxnet/blob/master/example/image-classification/predict-cpp/
 
 static std::string trim(const std::string &input) {
   auto not_space = [](int ch) { return !std::isspace(ch); };
diff --git a/example/named_entity_recognition/README.md b/example/named_entity_recognition/README.md
index eaa358d4f1..2a35aba001 100644
--- a/example/named_entity_recognition/README.md
+++ b/example/named_entity_recognition/README.md
@@ -39,5 +39,5 @@ To run inference using trained model:
 1. Recreate the bucketing module using the `sym_gen` defined in `ner.py`
 2. Load saved parameters using `module.set_params()`
 
-Refer to the `test` function in the [Bucketing Module example](https://github.com/apache/incubator-mxnet/blob/master/example/rnn/bucketing/cudnn_rnn_bucketing.py)
-and this [issue](https://github.com/apache/incubator-mxnet/issues/5008) on Bucketing Module Prediction
\ No newline at end of file
+Refer to the `test` function in the [Bucketing Module example](https://github.com/apache/mxnet/blob/master/example/rnn/bucketing/cudnn_rnn_bucketing.py)
+and this [issue](https://github.com/apache/mxnet/issues/5008) on Bucketing Module Prediction
\ No newline at end of file
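
A rough sketch of those two steps; every name, shape, and the checkpoint prefix/epoch below is a placeholder, and the real `sym_gen` is the one defined in `ner.py`:

```python
# Sketch of the two inference steps above: rebuild a BucketingModule from
# a sym_gen, then restore saved parameters. Placeholder values throughout.
import mxnet as mx

def sym_gen(seq_len):
    # stand-in for the sym_gen defined in ner.py
    data = mx.sym.Variable('data')
    label = mx.sym.Variable('softmax_label')
    pred = mx.sym.FullyConnected(data, num_hidden=seq_len)
    return mx.sym.SoftmaxOutput(pred, label), ('data',), ('softmax_label',)

model = mx.mod.BucketingModule(sym_gen=sym_gen, default_bucket_key=60,
                               context=mx.cpu())
model.bind(data_shapes=[('data', (32, 60))], label_shapes=None,
           for_training=False)
_, arg_params, aux_params = mx.model.load_checkpoint('ner', 99)  # saved model
model.set_params(arg_params, aux_params)
```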
diff --git a/example/onnx/README.md b/example/onnx/README.md
index 8761609dd1..40c7cb5a0e 100644
--- a/example/onnx/README.md
+++ b/example/onnx/README.md
@@ -19,7 +19,7 @@
 
 This folder contains examples that use the mx2onnx module to export MXNet models to the ONNX format.
 
-Please refer to [this link](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#onnx-export-support-for-mxnet)
+Please refer to [this link](https://github.com/apache/mxnet/tree/v1.x/python/mxnet/onnx#onnx-export-support-for-mxnet)
 for more details.
 
 - cv_model_inference.py.
diff --git a/example/onnx/cv_model_inference.py b/example/onnx/cv_model_inference.py
index f28ab5b908..a434bd2b5e 100644
--- a/example/onnx/cv_model_inference.py
+++ b/example/onnx/cv_model_inference.py
@@ -76,7 +76,7 @@ mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_types, onnx_file)
 
 # download and process the input image
 img_dir = './images'
-img_url = 'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/car.jpg'
+img_url = 'https://github.com/apache/mxnet-ci/raw/master/test-data/images/car.jpg'
 fname = os.path.join(img_dir, os.path.basename(urlparse(img_url).path))
 mx.test_utils.download(img_url, fname=fname)
 img_data = preprocess_image(fname)
diff --git a/example/quantization/README.md b/example/quantization/README.md
index edb1db8a0f..a5c25a0404 100644
--- a/example/quantization/README.md
+++ b/example/quantization/README.md
@@ -139,8 +139,8 @@ The following models have been tested on Linux systems. Accuracy is collected on
 |[Inception V3](#7)|[Gluon-CV](https://gluon-cv.mxnet.io/model_zoo/classification.html)|[Validation Dataset](http://data.mxnet.io/data/val_256_q90.rec)|77.76%/93.83% |78.05%/93.91% |
 |[ResNet152-V2](#8)|[MXNet ModelZoo](http://data.mxnet.io/models/imagenet/resnet/152-layers/)|[Validation Dataset](http://data.mxnet.io/data/val_256_q90.rec)|76.65%/93.07%|76.25%/92.89%|
 |[Inception-BN](#9)|[MXNet ModelZoo](http://data.mxnet.io/models/imagenet/inception-bn/)|[Validation Dataset](http://data.mxnet.io/data/val_256_q90.rec)|72.28%/90.63%|72.02%/90.53%|
-| [SSD-VGG16](#10) | [example/ssd](https://github.com/apache/incubator-mxnet/tree/master/example/ssd)  | VOC2007/2012  | 0.8366 mAP  | 0.8357 mAP  |
-| [SSD-VGG16](#10) | [example/ssd](https://github.com/apache/incubator-mxnet/tree/master/example/ssd)  | COCO2014  | 0.2552 mAP  | 0.253 mAP  |
+| [SSD-VGG16](#10) | [example/ssd](https://github.com/apache/mxnet/tree/master/example/ssd)  | VOC2007/2012  | 0.8366 mAP  | 0.8357 mAP  |
+| [SSD-VGG16](#10) | [example/ssd](https://github.com/apache/mxnet/tree/master/example/ssd)  | COCO2014  | 0.2552 mAP  | 0.253 mAP  |
 
 <h3 id='3'>ResNetV1</h3>
 
@@ -300,7 +300,7 @@ bash ./launch_inference_mkldnn.sh -s ./model/imagenet1k-inception-bn-quantized-5
 
 <h3 id='10'>SSD-VGG16</h3>
 
-SSD model is located in [example/ssd](https://github.com/apache/incubator-mxnet/tree/master/example/ssd), follow [the insturctions](https://github.com/apache/incubator-mxnet/tree/master/example/ssd#quantize-model) to run quantized SSD model.
+The SSD model is located in [example/ssd](https://github.com/apache/mxnet/tree/master/example/ssd); follow [the instructions](https://github.com/apache/mxnet/tree/master/example/ssd#quantize-model) to run the quantized SSD model.
 
 <h3 id='11'>Custom Model</h3>
 
diff --git a/example/sparse/README.md b/example/sparse/README.md
index 27721e5ea3..787104409a 100644
--- a/example/sparse/README.md
+++ b/example/sparse/README.md
@@ -17,8 +17,8 @@
 
 # Examples using Sparse Symbol API
 This folder contains examples that demonstrate the usage of [Sparse Symbol API](https://mxnet.apache.org/api/python/symbol/sparse.html)
-- [Factorization Machine](https://github.com/apache/incubator-mxnet/tree/master/example/sparse/factorization_machine) uses sparse weights
-- [Linear Classification Using Sparse Matrix Multiplication](https://github.com/apache/incubator-mxnet/tree/master/example/sparse/linear_classification) shows how to use a sparse data loader, sparse dot operator and sparse gradient updaters
-- [Matrix Factorization w/ Sparse Embedding](https://github.com/apache/incubator-mxnet/tree/master/example/sparse/matrix_factorization) uses sparse weights
-- [Wide and Deep Learning](https://github.com/apache/incubator-mxnet/tree/master/example/sparse/wide_deep) shows how to run sparse wide and deep classification
+- [Factorization Machine](https://github.com/apache/mxnet/tree/master/example/sparse/factorization_machine) uses sparse weights
+- [Linear Classification Using Sparse Matrix Multiplication](https://github.com/apache/mxnet/tree/master/example/sparse/linear_classification) shows how to use a sparse data loader, sparse dot operator and sparse gradient updaters
+- [Matrix Factorization w/ Sparse Embedding](https://github.com/apache/mxnet/tree/master/example/sparse/matrix_factorization) uses sparse weights
+- [Wide and Deep Learning](https://github.com/apache/mxnet/tree/master/example/sparse/wide_deep) shows how to run sparse wide and deep classification
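
As a small taste of the storage types these examples build on, a hedged NDArray-level sketch (the examples themselves use the Symbol API inside full training loops):

```python
# Sketch: the two sparse storage types used across these examples,
# CSR for data and row_sparse for weights/gradients.
import mxnet as mx

# 2x3 CSR matrix with non-zeros at (0, 0) and (1, 2)
csr = mx.nd.sparse.csr_matrix(([1.0, 2.0], [0, 2], [0, 1, 2]), shape=(2, 3))
dense = mx.nd.ones((3, 4))
print(mx.nd.sparse.dot(csr, dense).asnumpy())   # sparse dot operator

rsp = mx.nd.ones((4, 2)).tostype('row_sparse')  # row_sparse weight array
print(rsp.stype)
```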
 
diff --git a/example/speech_recognition/README.md b/example/speech_recognition/README.md
index d2125f6578..f138d43044 100644
--- a/example/speech_recognition/README.md
+++ b/example/speech_recognition/README.md
@@ -46,7 +46,7 @@ With rich functionalities and convenience explained above, you can build your ow
 <code>pip install soundfile</code>
 </pre>
 - Warp CTC: Follow [this instruction](https://github.com/baidu-research/warp-ctc) to compile Baidu's Warp CTC. (Note: If you are using V100, make sure to use this [fix](https://github.com/baidu-research/warp-ctc/pull/118))
-- You need to compile MXNet with WarpCTC, follow the instructions [here](https://github.com/apache/incubator-mxnet/tree/master/example/ctc)
+- You need to compile MXNet with WarpCTC; follow the instructions [here](https://github.com/apache/mxnet/tree/master/example/ctc)
 - You might need to set `LD_LIBRARY_PATH` to the right path if MXNet fails to find your `libwarpctc.so`
 - **We strongly recommend that you first test with a small network.**
 
diff --git a/example/ssd/README.md b/example/ssd/README.md
index c78c260c2b..a12a06fb10 100644
--- a/example/ssd/README.md
+++ b/example/ssd/README.md
@@ -101,7 +101,7 @@ insanely slow. Using CUDNN is optional, but highly recommended.
 
 * Run
 ```
-# cd /path/to/incubator-mxnet/example/ssd
+# cd /path/to/mxnet/example/ssd
 # download the test images
 python data/demo/download_demo_images.py
 # run the demo
@@ -143,12 +143,12 @@ The suggested directory structure is to store `VOC2007` and `VOC2012` directorie
 in the same `VOCdevkit` folder.
 * Then link `VOCdevkit` folder to `data/VOCdevkit` by default:
 ```
-ln -s /path/to/VOCdevkit /path/to/incubator-mxnet/example/ssd/data/VOCdevkit
+ln -s /path/to/VOCdevkit /path/to/mxnet/example/ssd/data/VOCdevkit
 ```
 Using a hard link instead of a copy saves a bit of disk space.
 * Create packed binary file for faster training:
 ```
-# cd /path/to/incubator-mxnet/example/ssd
+# cd /path/to/mxnet/example/ssd
 bash tools/prepare_pascal.sh
 # or if you are using windows
 python tools/prepare_dataset.py --dataset pascal --year 2007,2012 --set trainval --target ./data/train.lst
@@ -156,7 +156,7 @@ python tools/prepare_dataset.py --dataset pascal --year 2007 --set test --target
 ```
 * Start training:
 ```
-# cd /path/to/incubator-mxnet/example/ssd
+# cd /path/to/mxnet/example/ssd
 python train.py
 ```
 * By default, this example will use `batch-size=32` and `learning_rate=0.002`.
@@ -182,12 +182,12 @@ unzip annotations_trainval2014.zip
 * We are going to use `train2014,valminusminival2014` set in COCO2014 for training and `minival2014` for evaluation as a common strategy.
 * Then link `COCO2014` folder to `data/coco` by default:
 ```
-ln -s /path/to/COCO2014 /path/to/incubator-mxnet/example/ssd/data/coco
+ln -s /path/to/COCO2014 /path/to/mxnet/example/ssd/data/coco
 ```
 Using a hard link instead of a copy saves a bit of disk space.
 * Create packed binary file for faster training:
 ```
-# cd /path/to/incubator-mxnet/example/ssd
+# cd /path/to/mxnet/example/ssd
 bash tools/prepare_coco.sh
 # or if you are using windows
 python tools/prepare_dataset.py --dataset coco --set train2014,valminusminival2014 --target ./data/train.lst --root ./data/coco
@@ -195,14 +195,14 @@ python tools/prepare_dataset.py --dataset coco --set minival2014 --target ./data
 ```
 * Start training:
 ```
-# cd /path/to/incubator-mxnet/example/ssd
+# cd /path/to/mxnet/example/ssd
 python train.py --label-width=560 --num-class=80 --class-names=./dataset/names/coco_label --pretrained="" --num-example=117265 --batch-size=64
 ```
 
 ### Evaluate trained model
 Make sure you have `val.rec` as the validation dataset. It's the same one used in training. Use:
 ```
-# cd /path/to/incubator-mxnet/example/ssd
+# cd /path/to/mxnet/example/ssd
 python evaluate.py --gpus 0,1 --batch-size 128 --epoch 0
 
 # Evaluate on COCO dataset
@@ -211,9 +211,9 @@ python evaluate.py --gpus 0,1 --batch-size 128 --epoch 0 --num-class=80 --class-
 
 ### Quantize model
 
-To quantize a model on VOC dataset, follow the [Train instructions](https://github.com/apache/incubator-mxnet/tree/master/example/ssd#train-the-model-on-VOC) to train a FP32 `SSD-VGG16_reduced_300x300` model based on Pascal VOC dataset. You can also download our [SSD-VGG16 pre-trained model](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_vgg16_reduced_300-dd479559.zip) and [packed binary data](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dat [...]
+To quantize a model on VOC dataset, follow the [Train instructions](https://github.com/apache/mxnet/tree/master/example/ssd#train-the-model-on-VOC) to train a FP32 `SSD-VGG16_reduced_300x300` model based on Pascal VOC dataset. You can also download our [SSD-VGG16 pre-trained model](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_vgg16_reduced_300-dd479559.zip) and [packed binary data](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/ssd-v [...]
 
-To quantize a model on COCO dataset, follow the [Train instructions](https://github.com/apache/incubator-mxnet/tree/master/example/ssd#train-the-model-on-COCO) to train a FP32 `SSD-VGG16_reduced_300x300` model based on COCO dataset. You can also download our [SSD-VGG16 pre-trained model](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_vgg16_reduced_300-7fedd4ad.zip) and [packed binary data](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset [...]
+To quantize a model on COCO dataset, follow the [Train instructions](https://github.com/apache/mxnet/tree/master/example/ssd#train-the-model-on-COCO) to train a FP32 `SSD-VGG16_reduced_300x300` model based on COCO dataset. You can also download our [SSD-VGG16 pre-trained model](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_vgg16_reduced_300-7fedd4ad.zip) and [packed binary data](http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/ssd_coco- [...]
 
 ```
 data/
@@ -257,16 +257,16 @@ python benchmark_score.py --deploy --prefix=./model/cqssd_
 This simply removes all loss layers and attaches a layer for merging results and non-maximum suppression.
 This is useful when loading the Python symbol is not available.
 ```
-# cd /path/to/incubator-mxnet/example/ssd
+# cd /path/to/mxnet/example/ssd
 python deploy.py --num-class 20
 ```
 
 ### Convert caffe model
-Converter from caffe is available at `/path/to/incubator-mxnet/example/ssd/tools/caffe_converter`
+Converter from caffe is available at `/path/to/mxnet/example/ssd/tools/caffe_converter`
 
 This is specifically modified to handle the custom layers in caffe-ssd. Usage:
 ```
-cd /path/to/incubator-mxnet/example/ssd/tools/caffe_converter
+cd /path/to/mxnet/example/ssd/tools/caffe_converter
 make
 python convert_model.py deploy.prototxt name_of_pretrained_caffe_model.caffemodel ssd_converted
 # you will use this model in deploy mode without loading from python symbol(layer names inconsistent)
diff --git a/julia/NEWS.md b/julia/NEWS.md
index 4d560ae803..1c4e6b51a1 100644
--- a/julia/NEWS.md
+++ b/julia/NEWS.md
@@ -497,10 +497,10 @@ See #396.
 
 * Update `libmxnet` to
     * On Windows: v0.12.0.
-    (See https://github.com/apache/incubator-mxnet/releases/tag/0.12.0)
+    (See https://github.com/apache/mxnet/releases/tag/0.12.0)
 
     * On Linux/macOS: v0.12.1.
-    (See https://github.com/apache/incubator-mxnet/releases/tag/0.12.1)
+    (See https://github.com/apache/mxnet/releases/tag/0.12.1)
 
 * Drop 0.5 support. ([#300][300])
 
diff --git a/julia/README.md b/julia/README.md
index 43b8deb066..fe04e048ff 100644
--- a/julia/README.md
+++ b/julia/README.md
@@ -20,7 +20,7 @@
 [![MXNet](http://pkg.julialang.org/badges/MXNet_0.6.svg)](http://pkg.julialang.org/?pkg=MXNet)
 
 
-MXNet.jl is the [Apache MXNet](https://github.com/apache/incubator-mxnet) [Julia](http://julialang.org/) package. MXNet.jl brings flexible and efficient GPU computing and state-of-art deep learning to Julia. Some highlight of its features include:
+MXNet.jl is the [Apache MXNet](https://github.com/apache/mxnet) [Julia](http://julialang.org/) package. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Some highlights of its features include:
 
 * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes.
 * Flexible symbolic manipulation for composing and constructing state-of-the-art deep learning models.
diff --git a/julia/deps/build.jl b/julia/deps/build.jl
index 5666993090..c565c472d4 100644
--- a/julia/deps/build.jl
+++ b/julia/deps/build.jl
@@ -163,7 +163,7 @@ if !libmxnet_detected
       @build_steps begin
         BinDeps.DirectoryRule(_mxdir, @build_steps begin
           ChangeDirectory(_srcdir)
-          `git clone https://github.com/apache/incubator-mxnet mxnet`
+          `git clone https://github.com/apache/mxnet mxnet`
         end)
         @build_steps begin
           ChangeDirectory(_mxdir)
@@ -192,13 +192,13 @@ if !libmxnet_detected
           if HAS_CUDA
             @build_steps begin
               `sed -i -s 's/USE_CUDA = 0/USE_CUDA = 1/' config.mk`
-              # address https://github.com/apache/incubator-mxnet/pull/7856
+              # address https://github.com/apache/mxnet/pull/7856
               `sed -i -s "s/ADD_LDFLAGS =\(.*\)/ADD_LDFLAGS =\1 -lcublas -lcusolver -lcurand -lcudart/" config.mk`
               if haskey(ENV, "CUDA_HOME")
                 `sed -i -s "s@USE_CUDA_PATH = NONE@USE_CUDA_PATH = $(ENV["CUDA_HOME"])@" config.mk`
               end
               if haskey(ENV, "CUDA_HOME")
-                # address https://github.com/apache/incubator-mxnet/pull/7838
+                # address https://github.com/apache/mxnet/pull/7838
                 flag = "-L$(ENV["CUDA_HOME"])/lib64 -L$(ENV["CUDA_HOME"])/lib"
                 `sed -i -s "s@ADD_LDFLAGS =\(.*\)@ADD_LDFLAGS =\1 $flag@" config.mk`
               end
diff --git a/julia/docs/mkdocs.yml b/julia/docs/mkdocs.yml
index 880fad24d5..4dd31200eb 100644
--- a/julia/docs/mkdocs.yml
+++ b/julia/docs/mkdocs.yml
@@ -16,7 +16,7 @@
 # under the License.
 
 site_name: MXNet.jl
-repo_url:  https://github.com/apache/incubator-mxnet/tree/master/julia#mxnet
+repo_url:  https://github.com/apache/mxnet/tree/master/julia#mxnet
 
 theme: material
 
diff --git a/julia/docs/src/index.md b/julia/docs/src/index.md
index b5dc964e28..cfd104e06d 100644
--- a/julia/docs/src/index.md
+++ b/julia/docs/src/index.md
@@ -29,7 +29,7 @@ include:
   state-of-the-art deep learning models.
 
 For more details, see the documentation below. Please also check out the
-[examples](https://github.com/apache/incubator-mxnet/tree/master/julia/examples) directory.
+[examples](https://github.com/apache/mxnet/tree/master/julia/examples) directory.
 
 ## Tutorials
 
diff --git a/julia/docs/src/tutorial/char-lstm.md b/julia/docs/src/tutorial/char-lstm.md
index c7dc9d6c07..4f030993ad 100644
--- a/julia/docs/src/tutorial/char-lstm.md
+++ b/julia/docs/src/tutorial/char-lstm.md
@@ -31,14 +31,14 @@ networks yet, the example shown here is an implementation of LSTM by
 using the default FeedForward model via explicitly unfolding over time.
 We will be using fixed-length input sequences for training. The code is
 adapted from the [char-rnn example for MXNet's Python
-binding](https://github.com/apache/incubator-mxnet/blob/8004a027ad6a73f8f6eae102de8d249fbdfb9a2d/example/rnn/old/char-rnn.ipynb),
+binding](https://github.com/apache/mxnet/blob/8004a027ad6a73f8f6eae102de8d249fbdfb9a2d/example/rnn/old/char-rnn.ipynb),
 which demonstrates how to use low-level
 [Symbolic API](@ref) to build customized neural
 network models directly.
 
 The most important code snippets of this example are shown and explained
 here. To see and run the complete code, please refer to the
-[examples/char-lstm](https://github.com/apache/incubator-mxnet/blob/master/julia/docs/src/tutorial/char-lstm.md)
+[examples/char-lstm](https://github.com/apache/mxnet/blob/master/julia/docs/src/tutorial/char-lstm.md)
 directory. You will need to install
 [Iterators.jl](https://github.com/JuliaLang/Iterators.jl) and
 [StatsBase.jl](https://github.com/JuliaStats/StatsBase.jl) to run this
@@ -165,7 +165,7 @@ char-lstm. To train the model, we just follow the standard high-level
 API. First, we construct an LSTM symbolic architecture:
 
 Note all the parameters are defined in
-[examples/char-lstm/config.jl](https://github.com/apache/incubator-mxnet/blob/master/julia/examples/char-lstm/config.jl).
+[examples/char-lstm/config.jl](https://github.com/apache/mxnet/blob/master/julia/examples/char-lstm/config.jl).
 Now we load the text file and define the data provider. The data
 `input.txt` we used in this example is [a tiny Shakespeare
 dataset](https://github.com/dmlc/web-data/tree/master/mxnet/tinyshakespeare).
@@ -305,7 +305,7 @@ post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) on more
 examples and links including Linux source codes, Algebraic Geometry
 Theorems, and even cooking recipes. The code for sampling can be found
 in
-[examples/char-lstm/sampler.jl](https://github.com/apache/incubator-mxnet/tree/master/julia/examples/char-lstm/sampler.jl).
+[examples/char-lstm/sampler.jl](https://github.com/apache/mxnet/tree/master/julia/examples/char-lstm/sampler.jl).
 
 Visualizing the LSTM
 --------------------
@@ -318,6 +318,6 @@ illustrations](http://colah.github.io/posts/2015-08-Understanding-LSTMs/),
 but could otherwise be very useful for debugging. As we can see, the
 LSTM unfolded over time is just a (very) deep neural network. The
 complete code for producing this visualization can be found in
-[examples/char-lstm/visualize.jl](https://github.com/apache/incubator-mxnet/blob/master/julia/examples/char-lstm/visualize.jl).
+[examples/char-lstm/visualize.jl](https://github.com/apache/mxnet/blob/master/julia/examples/char-lstm/visualize.jl).
 
 ![image](images/char-lstm-vis.svg)
diff --git a/julia/docs/src/tutorial/mnist.md b/julia/docs/src/tutorial/mnist.md
index 9427523645..9f301381b2 100644
--- a/julia/docs/src/tutorial/mnist.md
+++ b/julia/docs/src/tutorial/mnist.md
@@ -23,7 +23,7 @@ multi-layer perceptron and then a convolutional neural network (the
 LeNet architecture) on the [MNIST handwritten digit
 dataset](http://yann.lecun.com/exdb/mnist/). The code for this tutorial
 can be found in
-[examples/mnist](https://github.com/apache/incubator-mxnet/tree/master/julia/examples/mnist).  There are also two Jupyter notebooks that expand a little more on the [MLP](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistMLP.ipynb) and the [LeNet](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistLenet.ipynb), using the more general `ArrayDataProvider`. 
+[examples/mnist](https://github.com/apache/mxnet/tree/master/julia/examples/mnist).  There are also two Jupyter notebooks that expand a little more on the [MLP](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistMLP.ipynb) and the [LeNet](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistLenet.ipynb), using the more general `ArrayDataProvider`. 
 
 Simple 3-layer MLP
 ------------------
diff --git a/julia/src/autograd.jl b/julia/src/autograd.jl
index 8b5edae577..9e38c62488 100644
--- a/julia/src/autograd.jl
+++ b/julia/src/autograd.jl
@@ -17,7 +17,7 @@
 
 # Autograd for NDArray
 # this is a port of Python's autograd module
-# https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/autograd.py
+# https://github.com/apache/mxnet/blob/master/python/mxnet/autograd.py
 
 ###############################################################################
 #  Private util functions
diff --git a/julia/src/deprecated.jl b/julia/src/deprecated.jl
index 7c49b66b14..0e72615bfe 100644
--- a/julia/src/deprecated.jl
+++ b/julia/src/deprecated.jl
@@ -157,7 +157,7 @@ function broadcast_hypot(x::NDArray, y::NDArray)
   hypot.(x, y)
 end
 
-# Introduced by https://github.com/apache/incubator-mxnet/pull/12845
+# Introduced by https://github.com/apache/mxnet/pull/12845
 import Base: sum, maximum, minimum, prod, cat
 @deprecate sum(x::NDArray, dims) sum(x, dims = dims)
 @deprecate maximum(x::NDArray, dims) maximum(x, dims = dims)
diff --git a/julia/test/unittest/ndarray.jl b/julia/test/unittest/ndarray.jl
index fb59b71edd..3371c7cd22 100644
--- a/julia/test/unittest/ndarray.jl
+++ b/julia/test/unittest/ndarray.jl
@@ -992,7 +992,7 @@ function test_power()
     @test copy(x .^ π) ≈ A .^ π
   end
 
-  # TODO: Float64: wait for https://github.com/apache/incubator-mxnet/pull/8012
+  # TODO: Float64: wait for https://github.com/apache/mxnet/pull/8012
 
   @info("NDArray::broadcast_power")
   let
diff --git a/python/mxnet/contrib/onnx/onnx2mx/_op_translations.py b/python/mxnet/contrib/onnx/onnx2mx/_op_translations.py
index 69bec1dd60..de9b6caace 100644
--- a/python/mxnet/contrib/onnx/onnx2mx/_op_translations.py
+++ b/python/mxnet/contrib/onnx/onnx2mx/_op_translations.py
@@ -525,7 +525,7 @@ def split(attrs, inputs, proto_obj):
         else:
             raise NotImplementedError("Operator {} in MXNet does not support variable splits."
                                       "Tracking the issue to support variable split here: "
-                                      "https://github.com/apache/incubator-mxnet/issues/11594"
+                                      "https://github.com/apache/mxnet/issues/11594"
                                       .format('split'))
 
     new_attrs['num_outputs'] = num_outputs
@@ -830,7 +830,7 @@ def hardmax(attrs, inputs, proto_obj):
     # since reshape doesn't take a tensor for shape,
     # computing with np.prod. This needs to be changed to
     # to use mx.sym.prod() when mx.sym.reshape() is fixed.
-    # (https://github.com/apache/incubator-mxnet/issues/10789)
+    # (https://github.com/apache/mxnet/issues/10789)
     new_shape = (int(np.prod(input_shape[:axis])),
                  int(np.prod(input_shape[axis:])))
     reshape_op = symbol.reshape(inputs[0], new_shape)
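
The two-dimensional collapse computed in the hunk above flattens everything before `axis` into rows and everything from `axis` onward into columns; a tiny sketch with hypothetical values:

```python
# Sketch of the reshape computed above: collapse an N-D shape to 2-D
# around `axis` (hypothetical shape and axis).
import numpy as np

input_shape, axis = (2, 3, 4), 1
new_shape = (int(np.prod(input_shape[:axis])),
             int(np.prod(input_shape[axis:])))
print(new_shape)  # (2, 12)
```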
diff --git a/python/mxnet/error.py b/python/mxnet/error.py
index b4110809c9..3ae65b6592 100644
--- a/python/mxnet/error.py
+++ b/python/mxnet/error.py
@@ -47,7 +47,7 @@ class InternalError(MXNetError):
         # Patch up additional hint message.
         if "MXNet hint:" not in msg:
             msg += ("\nMXNet hint: You hit an internal error. Please open an issue in "
-                    "https://github.com/apache/incubator-mxnet/issues/new/choose"
+                    "https://github.com/apache/mxnet/issues/new/choose"
                     " to report it.")
         super(InternalError, self).__init__(msg)
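
Since `InternalError` derives from `MXNetError`, user code catches it like any other MXNet error; a hedged sketch, constructing the exception directly purely for illustration:

```python
# Sketch: InternalError appends the reporting hint in __init__, and is
# caught like any other MXNetError.
import mxnet as mx

try:
    raise mx.error.InternalError('engine failure (illustrative)')
except mx.MXNetError as err:
    print(type(err).__name__)         # InternalError
    print('MXNet hint:' in str(err))  # True: hint added above
```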
 
diff --git a/python/mxnet/gluon/block.py b/python/mxnet/gluon/block.py
index f8dc1bc416..85c72cdb9e 100644
--- a/python/mxnet/gluon/block.py
+++ b/python/mxnet/gluon/block.py
@@ -1201,7 +1201,7 @@ class HybridBlock(Block):
         assert self._cached_op, "Gluon failed to build the cache. " \
                                 "This should never happen. " \
                                 "Please submit an issue on Github" \
-                                " https://github.com/apache/incubator-mxnet."
+                                " https://github.com/apache/mxnet."
         if self._callback:
             self._cached_op._register_op_hook(self._callback, self._monitor_all)
             if len(self._flags) >= 2 and (self._flags[1] or self._flags[0]):
@@ -1310,7 +1310,7 @@ class HybridBlock(Block):
         assert self._cached_op, "Gluon failed to build the cache. " \
                                 "This should never happen. " \
                                 "Please submit an issue on Github" \
-                                " https://github.com/apache/incubator-mxnet."
+                                " https://github.com/apache/mxnet."
         # do not actually call the cached_op
 
     def _clear_cached_op(self):
diff --git a/python/mxnet/gluon/contrib/data/text.py b/python/mxnet/gluon/contrib/data/text.py
index cc5da7ce1c..e8b78e42b8 100644
--- a/python/mxnet/gluon/contrib/data/text.py
+++ b/python/mxnet/gluon/contrib/data/text.py
@@ -91,7 +91,7 @@ class _WikiText(_LanguageModelDataset):
 
         data, label = self._read_batch(path)
 
-        # https://github.com/apache/incubator-mxnet/issues/18886 breaks this unless array size is
+        # https://github.com/apache/mxnet/issues/18886 breaks this unless array size is
         # multiple of self._seq_len. Truncating the source is consistent with pre #18886 outcome
         seq_len_mult = len(data) // self._seq_len * self._seq_len
         self._data = nd.array(data, dtype=data.dtype)[:seq_len_mult].reshape((-1, self._seq_len))
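
The truncation referenced above simply drops the tail of the flat token stream so it reshapes cleanly; a small sketch with made-up numbers:

```python
# Sketch of the truncation above: keep a multiple of seq_len tokens so
# the flat stream reshapes into (num_sequences, seq_len).
from mxnet import nd

seq_len = 35                 # hypothetical sequence length
data = list(range(100))      # stand-in for the flat token stream
seq_len_mult = len(data) // seq_len * seq_len          # 70
arr = nd.array(data[:seq_len_mult]).reshape((-1, seq_len))
print(arr.shape)             # (2, 35)
```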
diff --git a/python/mxnet/gluon/utils.py b/python/mxnet/gluon/utils.py
index 05ba061bc6..48307bbc44 100644
--- a/python/mxnet/gluon/utils.py
+++ b/python/mxnet/gluon/utils.py
@@ -83,7 +83,7 @@ def split_data(data, num_slice, batch_axis=0, even_split=True):
                 end = div_points[i + 1]
                 slices.append(ndarray.slice_axis(data, axis=batch_axis, begin=st, end=end))
         else:
-            # Fixes issue: https://github.com/apache/incubator-mxnet/issues/19268
+            # Fixes issue: https://github.com/apache/mxnet/issues/19268
             slices = [data[div_points[i]:div_points[i + 1]] if i < num_slice - 1 else data[div_points[i]:size]
                       for i in range(num_slice)]
     return slices
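
A hedged usage sketch of the function being fixed: with `even_split=False`, the indexing path above lets a batch that does not divide evenly still be split:

```python
# Sketch: split a batch of 5 across 2 slices; uneven batches only work
# with even_split=False, which takes the fixed indexing path above.
import mxnet as mx
from mxnet.gluon.utils import split_data

batch = mx.nd.arange(10).reshape((5, 2))
parts = split_data(batch, num_slice=2, batch_axis=0, even_split=False)
print([p.shape for p in parts])   # two nearly equal slices along axis 0
```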
diff --git a/python/mxnet/onnx/mx2onnx/_export_model.py b/python/mxnet/onnx/mx2onnx/_export_model.py
index e0fc71cb94..fb2883bc83 100644
--- a/python/mxnet/onnx/mx2onnx/_export_model.py
+++ b/python/mxnet/onnx/mx2onnx/_export_model.py
@@ -55,7 +55,7 @@ def export_model(sym, params, in_shapes=None, in_types=np.float32,
     """Exports the MXNet model file, passed as a parameter, into ONNX model.
     Accepts both symbol,parameter objects as well as json and params filepaths as input.
     Operator support and coverage -
-    https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#operator-support-matrix
+    https://github.com/apache/mxnet/tree/v1.x/python/mxnet/onnx#operator-support-matrix
 
     Parameters
     ----------
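
A short usage sketch mirroring the call made earlier in this diff (checkpoint file names and the input shape are placeholders):

```python
# Sketch: export a saved symbol/params pair to ONNX via the API above.
# Checkpoint file names and the input shape are placeholders.
import numpy as np
import mxnet as mx

in_shapes = [(1, 3, 224, 224)]        # placeholder input shape
onnx_file = mx.onnx.export_model('model-symbol.json', 'model-0000.params',
                                 in_shapes, np.float32, 'model.onnx')
print(onnx_file)
```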
diff --git a/python/mxnet/onnx/setup.py b/python/mxnet/onnx/setup.py
index d0ef2332d3..a0820bf17b 100644
--- a/python/mxnet/onnx/setup.py
+++ b/python/mxnet/onnx/setup.py
@@ -27,7 +27,7 @@ setup(
     description='Module to convert MXNet models to the ONNX format',
     author='',
     author_email='',
-    url='https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx',
+    url='https://github.com/apache/mxnet/tree/v1.x/python/mxnet/onnx',
     install_requires=[
         'onnx >= 1.7.0',
     ],
diff --git a/python/setup.py b/python/setup.py
index dcd84cef1e..7ec71001f7 100644
--- a/python/setup.py
+++ b/python/setup.py
@@ -122,7 +122,7 @@ setup(name='mxnet',
       description=open(os.path.join(CURRENT_DIR, 'README.md')).read(),
       packages=find_packages(),
       data_files=[('mxnet', [LIB_PATH[0]])],
-      url='https://github.com/apache/incubator-mxnet',
+      url='https://github.com/apache/mxnet',
       ext_modules=config_cython(),
       classifiers=[
           # https://pypi.org/pypi?%3Aaction=list_classifiers
diff --git a/rat-excludes b/rat-excludes
index 3e3b375c11..1498035e57 100644
--- a/rat-excludes
+++ b/rat-excludes
@@ -108,7 +108,6 @@ R-package/*
 
 # Specific files
 # Files that don't support comment
-DISCLAIMER
 MANIFEST
 Changes
 .codecov.yml
diff --git a/scala-package/README.md b/scala-package/README.md
index 2ba170df76..e14d679f48 100644
--- a/scala-package/README.md
+++ b/scala-package/README.md
@@ -179,7 +179,7 @@ mvn deploy -Pstaging
 
 Examples & Usage
 -------
-Assuming you use `mvn install`, you can find the `mxnet-full_scala_version-INTERNAL.jar` e.g. `mxnet-full_2.11-INTERNAL.jar` under the path `incubator-mxnet/scala-package/assembly/target`.
+Assuming you use `mvn install`, you can find the `mxnet-full_scala_version-INTERNAL.jar`, e.g. `mxnet-full_2.11-INTERNAL.jar`, under the path `mxnet/scala-package/assembly/target`.
 
 Adding the following configuration in `pom.xml`
 ```HTML
@@ -211,7 +211,7 @@ Caused by: java.lang.ClassNotFoundException: org.apache.mxnet.NDArray
 Please make sure your $CLASSPATH contains `mxnet-full_scala_version-INTERNAL.jar`.
 
 - To set up the Scala Project using IntelliJ IDE on macOS follow the instructions [here](https://mxnet.apache.org/tutorials/scala/mxnet_scala_on_intellij.html).
-- Several examples on using the Scala APIs are provided in the [Scala Examples Folder](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/)
+- Several examples on using the Scala APIs are provided in the [Scala Examples Folder](https://github.com/apache/mxnet/tree/master/scala-package/examples/)
 
 Scala Training APIs
 -------
@@ -229,21 +229,21 @@ Other available Scala APIs for training can be found [here](https://mxnet.apache
 
 Scala Inference APIs
 -------
-The [Scala Inference APIs](https://mxnet.apache.org/api/scala/infer.html) provide an easy, out of the box solution to load a pre-trained MXNet model and run inference on it. The Inference APIs are present in the [Infer Package](https://github.com/apache/incubator-mxnet/tree/master/scala-package/infer) under the MXNet Scala Package repository, while the documentation for the Infer API is available [here](https://mxnet.apache.org/api/scala/docs/index.html#org.apache.mxnet.infer.package).  
+The [Scala Inference APIs](https://mxnet.apache.org/api/scala/infer.html) provide an easy, out of the box solution to load a pre-trained MXNet model and run inference on it. The Inference APIs are present in the [Infer Package](https://github.com/apache/mxnet/tree/master/scala-package/infer) under the MXNet Scala Package repository, while the documentation for the Infer API is available [here](https://mxnet.apache.org/api/scala/docs/index.html#org.apache.mxnet.infer.package).  
 
 Java Inference APIs
 -------
-The [Java Inference APIs](https://mxnet.apache.org/api/java/index.html) also provide an easy, out of the box solution to load a pre-trained MXNet model and run inference on it. The Inference APIs are present in the [Infer Package](https://github.com/apache/incubator-mxnet/tree/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi) under the MXNet Scala Package repository, while the documentation for the Infer API is available [here](https://mxnet.apache.org/api/java/do [...]
+The [Java Inference APIs](https://mxnet.apache.org/api/java/index.html) also provide an easy, out of the box solution to load a pre-trained MXNet model and run inference on it. The Inference APIs are present in the [Infer Package](https://github.com/apache/mxnet/tree/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi) under the MXNet Scala Package repository, while the documentation for the Infer API is available [here](https://mxnet.apache.org/api/java/docs/index.h [...]
 More APIs will be added to the Java Inference APIs soon.
 
 JVM Memory Management
 -------
 The Scala/Java APIs also provide an automated resource management system, thus making it easy to manage the native memory footprint without any degradation in performance.
-More details about JVM Memory Management are available [here](https://github.com/apache/incubator-mxnet/blob/master/scala-package/memory-management.md).
+More details about JVM Memory Management are available [here](https://github.com/apache/mxnet/blob/master/scala-package/memory-management.md).
 
 License
 -------
-MXNet Scala Package is licensed under [Apache-2](https://github.com/apache/incubator-mxnet/blob/master/scala-package/LICENSE) license.
+The MXNet Scala Package is licensed under the [Apache-2](https://github.com/apache/mxnet/blob/master/scala-package/LICENSE) license.
 
 MXNet uses some third-party software. The following third-party license files are bundled inside the Scala jar file:
 * cub/LICENSE.TXT
diff --git a/scala-package/dev/compile-mxnet-backend.sh b/scala-package/dev/compile-mxnet-backend.sh
index 114bf07664..a14544bee5 100755
--- a/scala-package/dev/compile-mxnet-backend.sh
+++ b/scala-package/dev/compile-mxnet-backend.sh
@@ -32,7 +32,7 @@ MXNETDIR=$2
 
 
 # below routine shamelessly copied from
-# https://github.com/apache/incubator-mxnet/blob/master/setup-utils/install-mxnet-osx-python.sh
+# https://github.com/apache/mxnet/blob/master/setup-utils/install-mxnet-osx-python.sh
 # This routine executes a command,
 # prints error message on the console on non-zero exit codes and
 # returns the exit code to the caller.
diff --git a/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/objectdetector/README.md b/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/objectdetector/README.md
index 21c062938e..f7e56f4a06 100644
--- a/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/objectdetector/README.md
+++ b/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/objectdetector/README.md
@@ -19,7 +19,7 @@
 
 In this example, you will learn how to use the Java Inference API to run inference on a pre-trained Single Shot Multi Object Detection (SSD) MXNet model.
 
-The model is trained on the [Pascal VOC 2012 dataset](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html). The network is a SSD model built on Resnet50 as base network to extract image features. The model is trained to detect the following entities (classes): ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']. For more details about the m [...]
+The model is trained on the [Pascal VOC 2012 dataset](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html). The network is an SSD model built on Resnet50 as the base network to extract image features. The model is trained to detect the following entities (classes): ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']. For more details about the m [...]
 
 
 ## Contents
diff --git a/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/README.md b/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/README.md
index 141d55a636..c561ded2ef 100644
--- a/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/README.md
+++ b/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/README.md
@@ -30,7 +30,7 @@ build and run pre-trained Resnet 18 model.
 ## Prerequisites
 
 1. Build from source with [MXNet](https://mxnet.apache.org/install/index.html)
-2. [IntelliJ IDE (or alternative IDE) project setup](https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/java/mxnet_java_on_intellij.md) with the MXNet Java Package
+2. [IntelliJ IDE (or alternative IDE) project setup](https://github.com/apache/mxnet/blob/master/docs/tutorials/java/mxnet_java_on_intellij.md) with the MXNet Java Package
 3. wget
 
 ## Download Artifacts
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/benchmark/README.md b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/benchmark/README.md
index 67ee1ef65b..db24b192e3 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/benchmark/README.md
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/benchmark/README.md
@@ -17,12 +17,12 @@
 
 # Benchmarking Scala Inference APIs 
 
-This folder contains a base class [ScalaInferenceBenchmark](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/benchmark/) and provides a mechanism for benchmarking [MXNet Inference APIs]((https://github.com/apache/incubator-mxnet/tree/master/scala-package/infer)) in Scala.
+This folder contains a base class [ScalaInferenceBenchmark](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/benchmark/) and provides a mechanism for benchmarking [MXNet Inference APIs](https://github.com/apache/mxnet/tree/master/scala-package/infer) in Scala.
 The benchmarking scripts provided run an experiment for single inference calls and batch inference calls. They collect the time taken to perform an inference operation and emit the P99, P50, and average values for these metrics. One can easily add or modify examples in the ScalaInferenceBenchmark framework to get benchmark numbers for inference calls.
 Currently the ScalaInferenceBenchmark script supports three Scala examples:
-1. [ImageClassification using ResNet-152](https://github.com/apache/incubator-mxnet/blob/master/scala-package/mxnet-demo/src/main/scala/sample/ImageClassificationExample.scala)
-2. [Object Detection Example](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala)
-3. [Text Generation through RNNs](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn/TestCharRnn.scala)
+1. [ImageClassification using ResNet-152](https://github.com/apache/mxnet/blob/master/scala-package/mxnet-demo/src/main/scala/sample/ImageClassificationExample.scala)
+2. [Object Detection Example](https://github.com/apache/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala)
+3. [Text Generation through RNNs](https://github.com/apache/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn/TestCharRnn.scala)
 
 This script can easily be placed in an automated environment to run benchmark regressions on the Scala APIs. It automatically detects whether you are running on a CPU or GPU machine and uses the appropriate device.
 
@@ -39,12 +39,12 @@ This script can be easily placed in an automated environment to run benchmark re
 4. Model files and datasets for the model one will try to benchmark
 
 ## Scripts
-To help you easily run the benchmarks, a starter shell script has been provided for each of three examples mentioned above. The scripts can be found [here](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/scripts/benchmark).
+To help you easily run the benchmarks, a starter shell script has been provided for each of the three examples mentioned above. The scripts can be found [here](https://github.com/apache/mxnet/blob/master/scala-package/examples/scripts/benchmark).
 Each of the scripts takes some parameters as inputs; details can be found either in the bash scripts or in the example classes themselves.
 
 * *ImageClassification Example*
 <br> The following shows an example of running ImageClassifier under the benchmark script. The script takes as parameters the platform type (cpu/gpu), the number of iterations for inference calls, the batch size for batch inference calls, the model path, the input file, and the input directory.
-For more details to run ImageClassificationExample as a standalone file, refer to the [README](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/README.md) for ImageClassifierExample.
+For more details on running ImageClassificationExample as a standalone file, refer to the [README](https://github.com/apache/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/README.md) for ImageClassifierExample.
 You may need to run ```chmod u+x run_image_inference_bm.sh``` before running this script.
     ```bash
     cd <Path-To-MXNET-Repo>/scala-package/examples/scripts/infer/imageclassifier
@@ -65,7 +65,7 @@ You may need to run ```chmod u+x run_image_inference_bm.sh``` before running thi
 
 * *Object Detection Example*
 <br> The following shows an example of running SSDClassifier under the benchmark script. The script takes in the number of iterations for inference calls, the batch size for batch inference calls, the model path, the input file, and the input directory.
-For more details to run SSDClassifierExample as a standalone file, refer to the [README](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/README.md) for SSDClassifierExample.
+For more details on running SSDClassifierExample as a standalone file, refer to the [README](https://github.com/apache/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/README.md) for SSDClassifierExample.
 You may need to run ```chmod u+x run_image_inference_bm.sh``` before running this script.
     ```bash
     cd <Path-To-MXNET-Repo>/scala-package/examples/scripts/infer/objectdetector
@@ -85,7 +85,7 @@ You may need to run ```chmod u+x run_image_inference_bm.sh``` before running thi
     
 * *Text Generation through RNNs*
 <br>The following shows an example of running TestCharRnn under the benchmark script. The script takes in the number of iterations for inference calls, the model path and the input text file. 
-For more details to run TestCharRnn as a standalone file, refer to the [README](https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn/README.md) for TextCharRnn.
+For more details on running TestCharRnn as a standalone file, refer to the [README](https://github.com/apache/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn/README.md) for TextCharRnn.
 You may need to run ```chmod u+x run_text_charrnn_bm.sh``` before running this script.
     ```bash
     wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/RNN/obama.zip
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/cnntextclassification/README.md b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/cnntextclassification/README.md
index fe4fbef18f..6ef0714ff5 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/cnntextclassification/README.md
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/cnntextclassification/README.md
@@ -37,4 +37,4 @@ I used the SLIM version, you can try with the full version to see if the accurac
 https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/CNN/GoogleNews-vectors-negative300-SLIM.bin
 ```
 ### Train the model
-Please configure the [args](https://github.com/apache/incubator-mxnet/blob/scala-package/examples/src/main/scala/org/apache/mxnet/examples/cnntextclassification/CNNTextClassification.scala#L299-L312) required for the model here and then run it.
+Please configure the [args](https://github.com/apache/mxnet/blob/scala-package/examples/src/main/scala/org/apache/mxnet/examples/cnntextclassification/CNNTextClassification.scala#L299-L312) required for the model here and then run it.
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/customop/README.md b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/customop/README.md
index a3952aabfb..8b61564591 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/customop/README.md
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/customop/README.md
@@ -36,4 +36,4 @@ Then you need to define the arguments that you would like to pass in the model:
 --data-path <location of your downloaded file>
 ```
  
-you can find more in [here](https://github.com/apache/incubator-mxnet/blob/scala-package/examples/src/main/scala/org/apache/mxnet/examples/customop/ExampleCustomOp.scala#L218-L221)
\ No newline at end of file
+you can find more [here](https://github.com/apache/mxnet/blob/scala-package/examples/src/main/scala/org/apache/mxnet/examples/customop/ExampleCustomOp.scala#L218-L221)
\ No newline at end of file
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
index 48e55004cf..22ff3b6e63 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
@@ -38,7 +38,7 @@ import scala.collection.mutable.ListBuffer
 /**
   * <p>
   * Example inference showing usage of the Infer package on a resnet-152 model.
-  * @see <a href="https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier" target="_blank">Instructions to run this example</a>
+  * @see <a href="https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier" target="_blank">Instructions to run this example</a>
   */
 // scalastyle:on
 object ImageClassifierExample {
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/README.md b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/README.md
index 6b26e316ed..4e4e504dc5 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/README.md
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/README.md
@@ -17,7 +17,7 @@
 
 # Image Classification
 
-This folder contains an example for image classification with the [MXNet Scala Infer API](https://github.com/apache/incubator-mxnet/tree/master/scala-package/infer).
+This folder contains an example for image classification with the [MXNet Scala Infer API](https://github.com/apache/mxnet/tree/master/scala-package/infer).
 The goal of image classification is to identify the objects contained in images.
 The following example shows recognized object classes with corresponding probabilities using a pre-trained model.
 
@@ -86,16 +86,16 @@ The available arguments are as follows:
 
 ## Pretrained Models
 
-The MXNet project repository provides several [pre-trained models on various datasets](https://github.com/apache/incubator-mxnet/tree/master/example/image-classification#pre-trained-models) and examples on how to train them. You may use the [modelzoo.py](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/common/modelzoo.py) helper script to download these models. Many ImageNet models may be also be downloaded directly from [http://data.mxnet.io/models/imag [...]
+The MXNet project repository provides several [pre-trained models on various datasets](https://github.com/apache/mxnet/tree/master/example/image-classification#pre-trained-models) and examples on how to train them. You may use the [modelzoo.py](https://github.com/apache/mxnet/blob/master/example/image-classification/common/modelzoo.py) helper script to download these models. Many ImageNet models may also be downloaded directly from [http://data.mxnet.io/models/imagenet/](http://data.m [...]
 
 
 ## Infer API Details
 
-This example uses the [ImageClassifier](https://github.com/apache/incubator-mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/ImageClassifier.scala)
-class provided by the [MXNet Scala Infer API](https://github.com/apache/incubator-mxnet/tree/master/scala-package/infer).
+This example uses the [ImageClassifier](https://github.com/apache/mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/ImageClassifier.scala)
+class provided by the [MXNet Scala Infer API](https://github.com/apache/mxnet/tree/master/scala-package/infer).
 It provides methods to load the images, create an NDArray out of a `BufferedImage`, and run prediction using the following Infer APIs (a short sketch follows this list):
-* [Classifier](https://github.com/apache/incubator-mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/Classifier.scala)
-* [Predictor](https://github.com/apache/incubator-mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/Predictor.scala)
+* [Classifier](https://github.com/apache/mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/Classifier.scala)
+* [Predictor](https://github.com/apache/mxnet/blob/master/scala-package/infer/src/main/scala/org/apache/mxnet/infer/Predictor.scala)
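
A rough sketch of how these pieces fit together (the constructor and method signatures are recalled from the Scala Infer API and may differ slightly between versions; the model prefix and image path are placeholder assumptions):

```scala
import org.apache.mxnet.{Context, DType, DataDesc, Shape}
import org.apache.mxnet.infer.ImageClassifier

object ClassifierSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical prefix: expects resnet-152-symbol.json, resnet-152-0000.params
    // and synset.txt next to each other on disk.
    val modelPathPrefix = "/tmp/resnet152/resnet-152"
    // The model takes one 1 x 3 x 224 x 224 float32 image in NCHW layout.
    val inputDesc = IndexedSeq(new DataDesc("data", Shape(1, 3, 224, 224), DType.Float32, "NCHW"))
    val classifier = new ImageClassifier(modelPathPrefix, inputDesc, Array(Context.cpu()))

    // Load a BufferedImage from disk; preprocessing to the input shape is
    // handled by the classifier.
    val img = ImageClassifier.loadImageFromFile("/tmp/images/dog.jpg")
    // Top-5 (class label, probability) pairs for the single image in the batch.
    val top5 = classifier.classifyImage(img, Some(5))
    top5.foreach(_.foreach { case (label, prob) => println(s"$label: $prob") })
  }
}
```

The returned value is a batch of (class label, probability) pairs, matching the output format shown earlier in this README.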
 
 
 ## Next Steps
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/README.md b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/README.md
index 0624489812..a55c68872e 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/README.md
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/README.md
@@ -19,7 +19,7 @@
 
 In this example, you will learn how to use the Scala Inference API to run inference on a pre-trained Single Shot Multi-Object Detection (SSD) MXNet model.
 
-The model is trained on the [Pascal VOC 2012 dataset](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html). The network is a SSD model built on Resnet50 as base network to extract image features. The model is trained to detect the following entities (classes): ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']. For more details about the m [...]
+The model is trained on the [Pascal VOC 2012 dataset](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html). The network is an SSD model built on Resnet50 as the base network to extract image features. The model is trained to detect the following entities (classes): ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']. For more details about the m [...]
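
A minimal sketch of the detection flow (class and method names follow the Scala Infer API; the exact signatures, model prefix, image path, and shapes below are illustrative assumptions, not the authoritative example code):

```scala
import org.apache.mxnet.{Context, DType, DataDesc, Shape}
import org.apache.mxnet.infer.{ImageClassifier, ObjectDetector}

object SSDSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical prefix for resnet50_ssd_model-symbol.json / -0000.params / synset.txt.
    val modelPathPrefix = "/tmp/ssd/resnet50_ssd_model"
    // SSD-512 expects one 1 x 3 x 512 x 512 float32 input in NCHW layout.
    val inputDesc = IndexedSeq(new DataDesc("data", Shape(1, 3, 512, 512), DType.Float32, "NCHW"))
    val detector = new ObjectDetector(modelPathPrefix, inputDesc, Array(Context.cpu()))

    val img = ImageClassifier.loadImageFromFile("/tmp/images/street.jpg")
    // Each detection pairs a class name with its probability and bounding-box
    // coordinates; here we keep the top 3 per image.
    val detections = detector.imageObjectDetect(img, Some(3))
    detections.foreach(_.foreach(println))
  }
}
```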
 
 
 ## Contents
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
index 8c5366d627..85243c7a76 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
@@ -40,7 +40,7 @@ import scala.collection.mutable.ListBuffer
   * <p>
   * Example single shot detector (SSD) using the Infer package
  * on an ssd_resnet50_512 model.
-  * @see <a href="https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector" target="_blank">Instructions to run this example</a>
+  * @see <a href="https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector" target="_blank">Instructions to run this example</a>
   */
 // scalastyle:on
 object SSDClassifierExample {
diff --git a/scala-package/memory-management.md b/scala-package/memory-management.md
index b97bcbcb90..7916014c6d 100644
--- a/scala-package/memory-management.md
+++ b/scala-package/memory-management.md
@@ -24,7 +24,7 @@ Allocating native memory is straight forward and is done during the construction
 MXNet Scala provides a few easy modes of operation which are explained in detail below.
 
 ## Memory Management in Scala 
-### 1.  [ResourceScope.using](https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/org/apache/mxnet/ResourceScope.scala#L106) (Recommended)
+### 1.  [ResourceScope.using](https://github.com/apache/mxnet/blob/master/scala-package/core/src/main/scala/org/apache/mxnet/ResourceScope.scala#L106) (Recommended)
 `ResourceScope.using` provides the familiar Java try-with-resources primitive in Scala and will automatically manage the memory of all the MXNet objects created in the associated code block (`body`). It works by tracking the allocations performed inside the code block and deallocating them when exiting the block.
 Passing MXNet objects out of a using block is as simple as returning an object or an iterable containing multiple MXNet objects. If you have nested using blocks, the returned objects will be moved into the parent scope as well.
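
A minimal sketch of the pattern (the shapes and arithmetic are illustrative; `ResourceScope.using` is the method the link above points to):

```scala
import org.apache.mxnet.{NDArray, ResourceScope, Shape}

// Every NDArray allocated inside the block is tracked by the scope and freed
// on exit; only the returned value survives, moving to the parent scope if
// one exists.
val result: NDArray = ResourceScope.using() {
  val a = NDArray.ones(Shape(1000, 1000))
  val b = NDArray.ones(Shape(1000, 1000))
  val tmp = a + b // temporary: deallocated when the block exits
  tmp + a         // returned: its native memory is retained
}
```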
 
@@ -106,7 +106,7 @@ def showDispose(): Unit = {
 ```
 
 ## Memory Management in Java
-Memory Management in MXNet Java is similar to Scala. We recommend you use [ResourceScope](https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/org/apache/mxnet/ResourceScope.scala#L32) in a `try-with-resources` block or in a `try-finally` block.
+Memory Management in MXNet Java is similar to Scala. We recommend you use [ResourceScope](https://github.com/apache/mxnet/blob/master/scala-package/core/src/main/scala/org/apache/mxnet/ResourceScope.scala#L32) in a `try-with-resources` block or in a `try-finally` block.
 The [try-with-resources](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) statement tracks the resources declared in the try block and automatically closes them upon exiting (supported from Java 7 onwards).
 The ResourceScope discussed above implements AutoCloseable and tracks all MXNet objects created at a thread-local scope level.
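
A sketch of the equivalent pattern, written in Scala for consistency with the examples above (in Java, the scope would instead be declared in a try-with-resources header; the shape is illustrative, and the thread-local registration is assumed from the description above):

```scala
import org.apache.mxnet.{NDArray, ResourceScope, Shape}

// Constructing a scope registers it on a thread-local stack, so objects
// created while it is open are tracked by it.
val scope = new ResourceScope()
try {
  val x = NDArray.ones(Shape(2, 2))
  println(x.shape)
} finally {
  scope.close() // AutoCloseable: frees all native memory the scope tracked
}
```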
 
diff --git a/scala-package/mxnet-demo/java-demo/README.md b/scala-package/mxnet-demo/java-demo/README.md
index 354bb2f8ab..45fae40fc3 100644
--- a/scala-package/mxnet-demo/java-demo/README.md
+++ b/scala-package/mxnet-demo/java-demo/README.md
@@ -29,7 +29,7 @@ You are required to use Maven to build the package with the following commands u
 ```
 mvn package
 ```
-This command will pick the default values specified in the [pom](https://github.com/apache/incubator-mxnet/blob/master/scala-package/mxnet-demo/java-demo/pom.xml) file.
+This command will pick the default values specified in the [pom](https://github.com/apache/mxnet/blob/master/scala-package/mxnet-demo/java-demo/pom.xml) file.
 
 Note: If you are planning to use a GPU, please add `-Dmxnet.profile=linux-x86_64-gpu`
 
diff --git a/scala-package/mxnet-demo/scala-demo/README.md b/scala-package/mxnet-demo/scala-demo/README.md
index 6c63819468..e7c1b5d073 100644
--- a/scala-package/mxnet-demo/scala-demo/README.md
+++ b/scala-package/mxnet-demo/scala-demo/README.md
@@ -56,7 +56,7 @@ We also provide an example to do image classification, which downloads a ImageNe
 ```Bash
 Classes with top 5 probability = Vector((n02110958 pug, pug-dog,0.49161583), (n02108422 bull mastiff,0.40025946), (n02108089 boxer,0.04657662), (n04409515 tennis ball,0.028773671), (n02109047 Great Dane,0.009004086)) 
 ```
-You can review the complete example [here](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier)
+You can review the complete example [here](https://github.com/apache/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier)
 
 You can run it using the command shown below:
 ```Bash
diff --git a/scala-package/pom.xml b/scala-package/pom.xml
index 040e46bbf4..0f013eb7bc 100644
--- a/scala-package/pom.xml
+++ b/scala-package/pom.xml
@@ -29,7 +29,7 @@
   <artifactId>mxnet-parent</artifactId>
   <version>INTERNAL</version>
   <name>MXNet Scala Package - Parent</name>
-  <url>https://github.com/apache/incubator-mxnet/tree/master/scala-package</url>
+  <url>https://github.com/apache/mxnet/tree/master/scala-package</url>
   <description>
     Scala Package for Apache MXNet (Incubating) - flexible and efficient library for deep learning.
   </description>
@@ -46,9 +46,9 @@
   </licenses>
 
   <scm>
-    <connection>scm:git:git@github.com:apache/incubator-mxnet.git</connection>
-    <developerConnection>scm:git:git@github.com:apache/incubator-mxnet.git</developerConnection>
-    <url>https://github.com/apache/incubator-mxnet</url>
+    <connection>scm:git:git@github.com:apache/mxnet.git</connection>
+    <developerConnection>scm:git:git@github.com:apache/mxnet.git</developerConnection>
+    <url>https://github.com/apache/mxnet</url>
     <tag>HEAD</tag>
   </scm>
 
diff --git a/setup-utils/install-mxnet-osx-python.sh b/setup-utils/install-mxnet-osx-python.sh
index 0dde8096b3..11be1a5eb1 100755
--- a/setup-utils/install-mxnet-osx-python.sh
+++ b/setup-utils/install-mxnet-osx-python.sh
@@ -26,7 +26,7 @@
 
 #set -ex
 
-export MXNET_GITPATH="https://github.com/apache/incubator-mxnet"
+export MXNET_GITPATH="https://github.com/apache/mxnet"
 
 
 if [ -z ${MXNET_TAG} ];
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 89dabacf13..52cfbd30eb 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -1151,7 +1151,7 @@ void CachedOpBackward(const OpStatePtr& state_ptr,
   // If it is, we need to copy data back.
   // For example, when the inputs and outputs share the same NDArrays,
   // the outputs will be replaced by inputs.
-  // https://github.com/apache/incubator-mxnet/blob/v1.2.0/src/imperative/cached_op.cc#L385
+  // https://github.com/apache/mxnet/blob/v1.2.0/src/imperative/cached_op.cc#L385
   for (size_t i = 0; i < out_bufs.size(); i++)
     if (!out_bufs[i].IsSame(outputs[i]))
       CopyFromTo(out_bufs[i], outputs[i]);
diff --git a/src/operator/linalg_impl.h b/src/operator/linalg_impl.h
index 47b54f6ac3..947b79de3f 100644
--- a/src/operator/linalg_impl.h
+++ b/src/operator/linalg_impl.h
@@ -188,7 +188,7 @@ void linalg_gemm<cpu, mshadow::half::half_t>(const Tensor<cpu, 2, mshadow::half:
   }
 
 // Use cublasSgemmEx when it is available (CUDA >= 7.5). Resolves precision issues with
-// cublasSgemm. Please see https://github.com/apache/incubator-mxnet/pull/11630
+// cublasSgemm. Please see https://github.com/apache/mxnet/pull/11630
 #if CUDA_VERSION >= 7050
 template <>
 inline void linalg_gemm<gpu, float>(const Tensor<gpu, 2, float>& A,
diff --git a/src/operator/nn/fully_connected-inl.h b/src/operator/nn/fully_connected-inl.h
index 8e610434f3..42ff0e0125 100644
--- a/src/operator/nn/fully_connected-inl.h
+++ b/src/operator/nn/fully_connected-inl.h
@@ -477,7 +477,7 @@ void FullyConnectedGradCompute(const nnvm::NodeAttrs& attrs,
 // w_grad_grad : o_y.T * o_x_grad
 // b_grad_grad: if param.no_bias is false
 //
-// For implementation details see this PR: https://github.com/apache/incubator-mxnet/pull/14779
+// For implementation details see this PR: https://github.com/apache/mxnet/pull/14779
 
 /**
  * Second order gradient for Fully Connected
diff --git a/src/operator/nn/mkldnn/mkldnn_base-inl.h b/src/operator/nn/mkldnn/mkldnn_base-inl.h
index 57cae5b3c3..65d75db7bc 100644
--- a/src/operator/nn/mkldnn/mkldnn_base-inl.h
+++ b/src/operator/nn/mkldnn/mkldnn_base-inl.h
@@ -287,7 +287,7 @@ inline static mkldnn::memory::desc GetMemDesc(const NDArray &arr, int dtype = -1
 
 inline static bool ChooseBRGEMMImpl(const mkldnn::memory::dims& weight_dims, size_t batch_size) {
   // Conditions based on measurement results done on CLX8280
-  // https://github.com/apache/incubator-mxnet/pull/20533
+  // https://github.com/apache/mxnet/pull/20533
   return weight_dims[0] >= 1024 && weight_dims[1] >= 1024 && batch_size >= 16384 &&
          weight_dims[0] % 64 == 0 && weight_dims[1] % 64 == 0;
 }
diff --git a/tests/CMakeLists.txt b/tests/CMakeLists.txt
index a221f058a1..7fbf23f166 100644
--- a/tests/CMakeLists.txt
+++ b/tests/CMakeLists.txt
@@ -1,3 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
 
 # ---[ Google Test
 if(NOT GTEST_ROOT)
diff --git a/tests/cpp/operator/batchnorm_test.cc b/tests/cpp/operator/batchnorm_test.cc
index 53be30afa4..155aefd92f 100644
--- a/tests/cpp/operator/batchnorm_test.cc
+++ b/tests/cpp/operator/batchnorm_test.cc
@@ -981,7 +981,7 @@ static void timingTest(const std::string& label,
 /*! \brief Stress-test random batch size/channels/dimension(s) */
 TEST(BATCH_NORM, DISABLED_TestStochasticTiming_2D) {
   // Test is disabled due to suspected flakiness
-  // https://github.com/apache/incubator-mxnet/issues/14411
+  // https://github.com/apache/mxnet/issues/14411
   MSHADOW_REAL_TYPE_SWITCH_EX(
     mshadow::kFloat32, DType, AccReal,
     {
diff --git a/tests/jenkins/run_test_installation_docs.sh b/tests/jenkins/run_test_installation_docs.sh
index 954b370ae5..092e1c2fbe 100755
--- a/tests/jenkins/run_test_installation_docs.sh
+++ b/tests/jenkins/run_test_installation_docs.sh
@@ -252,23 +252,23 @@ function set_instruction_set() {
 
 # given a $buildfromsource_commands string, filter out any build commands that should not be executed
 # during the build from source tests. An example, the build from source instructions include the commands:
-# $ git clone --recursive https://github.com/apache/incubator-mxnet 
-# $ cd incubator-mxnet 
+# $ git clone --recursive https://github.com/apache/mxnet 
+# $ cd mxnet 
 # if these commands get executed in the jenkins job, we will be testing the build from source instructions
 # against the master branch and not against the version of the repository that Jenkins checks out for testing.
 # This presents a particularly big problem for the version branches and their nightly builds, because
 # we would, in effect, be testing the build from source instructions for one version of MXNet against
 # the master branch.
 # in this function we target the commands cited in the example above.
-# See also gh issue: https://github.com/apache/incubator-mxnet/issues/13800
+# See also gh issue: https://github.com/apache/mxnet/issues/13800
 function filter_build_commands() {
     filtered_build_commands="${1}"
 
     # Remove git commands
     filtered_build_commands=`echo "${filtered_build_commands}" | perl -pe 's/git .*?;//g'`
 
-    # Remove 'cd incubator-mxnet'
-    filtered_build_commands=`echo "${filtered_build_commands}" | perl -pe 's/cd incubator-mxnet;//'`
+    # Remove 'cd mxnet'
+    filtered_build_commands=`echo "${filtered_build_commands}" | perl -pe 's/cd mxnet;//'`
 
     echo "${filtered_build_commands}"
 }
diff --git a/tests/nightly/test_large_array.py b/tests/nightly/test_large_array.py
index b7a27c3f43..ef4839750c 100644
--- a/tests/nightly/test_large_array.py
+++ b/tests/nightly/test_large_array.py
@@ -148,7 +148,7 @@ def test_nn():
         return x
 
     @unittest.skip("log_softmax flaky, tracked at "
-                   "https://github.com/apache/incubator-mxnet/issues/17397")
+                   "https://github.com/apache/mxnet/issues/17397")
     def check_log_softmax():
         ndim = 2
         shape = (SMALL_Y, LARGE_X)
@@ -582,7 +582,7 @@ def test_tensor():
         assert a[-1][0] != 0
 
     @unittest.skip("Randint flaky, tracked at "
-                   "https://github.com/apache/incubator-mxnet/issues/16172")
+                   "https://github.com/apache/mxnet/issues/16172")
     @with_seed()
     def check_ndarray_random_randint():
         a = nd.random.randint(100, 10000, shape=(LARGE_X, SMALL_Y))
@@ -796,7 +796,7 @@ def test_tensor():
         assert res.shape == b.shape
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, "
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_depthtospace():
         def numpy_depth_to_space(x, blocksize):
             b, c, h, w = x.shape[0], x.shape[1], x.shape[2], x.shape[3]
@@ -815,7 +815,7 @@ def test_tensor():
         assert_almost_equal(output.asnumpy(), expected, atol=1e-3, rtol=1e-3)
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, "
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_spacetodepth():
         def numpy_space_to_depth(x, blocksize):
             b, c, h, w = x.shape[0], x.shape[1], x.shape[2], x.shape[3]
@@ -880,7 +880,7 @@ def test_tensor():
         assert (indices_2d.asnumpy() == np.array(original_2d_indices)).all()
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_transpose():
         check_dtypes = [np.float32, np.int64]
         for dtype in check_dtypes:
@@ -891,7 +891,7 @@ def test_tensor():
             assert_almost_equal(t.asnumpy(), ref_out, rtol=1e-10)
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_swapaxes():
         b = create_2d_tensor(rows=LARGE_X, columns=SMALL_Y)
         t = nd.swapaxes(b, dim1=0, dim2=1)
@@ -899,7 +899,7 @@ def test_tensor():
         assert t.shape == (SMALL_Y, LARGE_X)
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_flip():
         b = create_2d_tensor(rows=LARGE_X, columns=SMALL_Y)
         t = nd.flip(b, axis=0)
@@ -1424,7 +1424,7 @@ def test_basic():
         assert idx.shape[0] == SMALL_Y
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_argsort():
         b = create_2d_tensor(rows=LARGE_X, columns=SMALL_Y)
         s = nd.argsort(b, axis=0, is_ascend=False, dtype=np.int64)
@@ -1432,7 +1432,7 @@ def test_basic():
         assert (s[0].asnumpy() == (LARGE_X - 1)).all()
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_sort():
         b = create_2d_tensor(rows=LARGE_X, columns=SMALL_Y)
         s = nd.sort(b, axis=0, is_ascend=False)
@@ -1441,7 +1441,7 @@ def test_basic():
         assert np.sum(s[0].asnumpy() == 0).all()
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_topk():
         b = create_2d_tensor(rows=LARGE_X, columns=SMALL_Y)
         k = nd.topk(b, k=10, axis=0, dtype=np.int64)
diff --git a/tests/nightly/test_large_vector.py b/tests/nightly/test_large_vector.py
index 4c81ddd7af..c1548bfdef 100644
--- a/tests/nightly/test_large_vector.py
+++ b/tests/nightly/test_large_vector.py
@@ -185,7 +185,7 @@ def test_tensor():
         assert a[-1] != 0
 
     @unittest.skip("Randint flaky, tracked at "
-                   "https://github.com/apache/incubator-mxnet/issues/16172")
+                   "https://github.com/apache/mxnet/issues/16172")
     @with_seed()
     def check_ndarray_random_randint():
         # check if randint can generate value greater than 2**32 (large)
@@ -483,14 +483,14 @@ def test_basic():
         assert idx.shape[0] == 1
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_argsort():
         a = create_vector(size=LARGE_X)
         s = nd.argsort(a, axis=0, is_ascend=False, dtype=np.int64)
         assert s[0] == (LARGE_X - 1)
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_sort():
         a = create_vector(size=LARGE_X)
 
@@ -506,7 +506,7 @@ def test_basic():
         check_ascend(a)
 
     @unittest.skip("Memory doesn't free up after stacked execution with other ops, " +
-                   "tracked at https://github.com/apache/incubator-mxnet/issues/17411")
+                   "tracked at https://github.com/apache/mxnet/issues/17411")
     def check_topk():
         a = create_vector(size=LARGE_X)
         ind = nd.topk(a, k=10, axis=0, dtype=np.int64)
diff --git a/tests/python-pytest/onnx/test_onnxruntime_cv.py b/tests/python-pytest/onnx/test_onnxruntime_cv.py
index f0c454e3d2..20fafa9937 100644
--- a/tests/python-pytest/onnx/test_onnxruntime_cv.py
+++ b/tests/python-pytest/onnx/test_onnxruntime_cv.py
@@ -94,16 +94,16 @@ def obj_class_test_images(tmpdir_factory):
     tmpdir = tmpdir_factory.mktemp("obj_class_data")
     from urllib.parse import urlparse
     test_image_urls = [
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/bikers.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/car.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/dancer.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/duck.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/flower.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/runners.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/shark.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/soccer2.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/tree.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/bikers.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/car.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/dancer.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/duck.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/flower.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/runners.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/shark.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/soccer2.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/tree.jpg',
     ]
     paths = []
     for url in test_image_urls:
@@ -227,12 +227,12 @@ def obj_detection_test_images(tmpdir_factory):
     tmpdir = tmpdir_factory.mktemp("obj_det_data")
     from urllib.parse import urlparse
     test_image_urls = [
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/flower.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/runners.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/shark.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/soccer2.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/tree.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/flower.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/runners.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/shark.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/soccer2.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/tree.jpg',
     ]
     paths = []
     for url in test_image_urls:
@@ -365,16 +365,16 @@ def img_segmentation_test_images(tmpdir_factory):
     tmpdir = tmpdir_factory.mktemp("img_seg_data")
     from urllib.parse import urlparse
     test_image_urls = [
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/bikers.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/car.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/dancer.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/duck.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/flower.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/runners.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/shark.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/soccer2.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/tree.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/bikers.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/car.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/dancer.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/duck.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/flower.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/runners.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/shark.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/soccer2.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/tree.jpg',
     ]
     paths = []
     for url in test_image_urls:
@@ -440,11 +440,11 @@ def pose_estimation_test_images(tmpdir_factory):
     tmpdir = tmpdir_factory.mktemp("pose_est_data")
     from urllib.parse import urlparse
     test_image_urls = [
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/bikers.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/dancer.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/runners.jpg',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/images/soccer2.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/bikers.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/dancer.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/fieldhockey.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/runners.jpg',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/images/soccer2.jpg',
     ]
     paths = []
     for url in test_image_urls:
@@ -503,10 +503,10 @@ def act_recognition_test_data(tmpdir_factory):
     tmpdir = tmpdir_factory.mktemp("act_rec_data")
     from urllib.parse import urlparse
     test_image_urls = [
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/actions/biking.rec',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/actions/diving.rec',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/actions/golfing.rec',
-        'https://github.com/apache/incubator-mxnet-ci/raw/master/test-data/actions/sledding.rec',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/actions/biking.rec',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/actions/diving.rec',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/actions/golfing.rec',
+        'https://github.com/apache/mxnet-ci/raw/master/test-data/actions/sledding.rec',
     ]
     paths = []
     for url in test_image_urls:
diff --git a/tests/python/gpu/test_contrib_amp.py b/tests/python/gpu/test_contrib_amp.py
index 527f853496..4d14b12221 100644
--- a/tests/python/gpu/test_contrib_amp.py
+++ b/tests/python/gpu/test_contrib_amp.py
@@ -329,7 +329,7 @@ def test_amp_conversion():
                            batch_end_callback=mx.callback.Speedometer(batch_size, 1))
 
         # AMP conversion with cast_optional_params set to true
-        # Flaky test when cast_optional_params set to True : https://github.com/apache/incubator-mxnet/issues/16030
+        # Flaky test when cast_optional_params set to True : https://github.com/apache/mxnet/issues/16030
         '''
         result_model = amp.convert_bucketing_module(model, cast_optional_params=True)
         result_model.bind(data_val.provide_data, data_val.provide_label, for_training=False)
diff --git a/tests/python/gpu/test_gluon_gpu.py b/tests/python/gpu/test_gluon_gpu.py
index 6b5b4fbbd7..53b286e775 100644
--- a/tests/python/gpu/test_gluon_gpu.py
+++ b/tests/python/gpu/test_gluon_gpu.py
@@ -584,7 +584,7 @@ def _test_bulking(test_bulking_func):
         .format(fully_bulked_time - fastest_half_bulked_time, times_str)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/14970')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/14970')
 def test_bulking_gluon_gpu():
     _test_bulking(_test_bulking_in_process)
 
diff --git a/tests/python/gpu/test_operator_gpu.py b/tests/python/gpu/test_operator_gpu.py
index ceb8c6ee5d..821e669f83 100644
--- a/tests/python/gpu/test_operator_gpu.py
+++ b/tests/python/gpu/test_operator_gpu.py
@@ -637,7 +637,7 @@ def check_consistency_NxM(sym_list, ctx_list):
     check_consistency(np.repeat(sym_list, len(ctx_list)), ctx_list * len(sym_list), scale=0.5)
 
 
-@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/incubator-mxnet/issues/10141")
+@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/mxnet/issues/10141")
 @with_seed()
 def test_convolution_options():
     # 1D convolution
@@ -2248,7 +2248,7 @@ def kernel_error_check_symbolic():
             f.forward()
             g = f.outputs[0].asnumpy()
 
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/20011')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/20011')
 def test_kernel_error_checking():
     # Running tests that may throw exceptions out of worker threads will stop CI testing
     # if not run in a separate process (with its own address space for CUDA compatibility).
@@ -2428,12 +2428,12 @@ def _test_bulking_in_process(seed, time_per_iteration):
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/16517')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/16517')
 def test_bulking_operator_gpu():
     _test_bulking(_test_bulking_in_process)
 
 
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/14970')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/14970')
 def test_bulking():
     # test case format: (max_fwd_segment_size, max_bwd_segment_size, enable_bulking_in_training)
     test_cases = [(0,0,True), (1,1,True), (15,15,False), (15,0,True), (0,15,True), (15,15,True)]
diff --git a/tests/python/mkl/test_mkldnn.py b/tests/python/mkl/test_mkldnn.py
index 061cc180f3..50bdc1aa66 100644
--- a/tests/python/mkl/test_mkldnn.py
+++ b/tests/python/mkl/test_mkldnn.py
@@ -607,7 +607,7 @@ def test_conv_transpose():
         np.allclose(t.asnumpy(), n)
 
 
-# This test case is contributed by @awsbillz in https://github.com/apache/incubator-mxnet/issues/14766
+# This test case is contributed by @awsbillz in https://github.com/apache/mxnet/issues/14766
 @with_seed()
 def test_reshape_transpose_6d():
     class Reshape2D(gluon.HybridBlock):
diff --git a/tests/python/quantization/test_quantization.py b/tests/python/quantization/test_quantization.py
index dea189c54c..49582bfffc 100644
--- a/tests/python/quantization/test_quantization.py
+++ b/tests/python/quantization/test_quantization.py
@@ -201,7 +201,7 @@ def test_quantized_conv():
             print('skipped testing quantized_conv for native cpu since it is not supported yet')
             return
         elif is_test_for_mkldnn():
-            # (TODO)Xinyu: https://github.com/apache/incubator-mxnet/issues/16830
+            # TODO(Xinyu): https://github.com/apache/mxnet/issues/16830
             print('skipped testing quantized_conv for mkldnn cpu since it is a flaky case')
             return
         elif qdtype == 'uint8' and is_test_for_gpu():
diff --git a/tests/python/unittest/test_contrib_svrg_module.py b/tests/python/unittest/test_contrib_svrg_module.py
index 79407d15fd..5eb6a06aa5 100644
--- a/tests/python/unittest/test_contrib_svrg_module.py
+++ b/tests/python/unittest/test_contrib_svrg_module.py
@@ -94,7 +94,7 @@ def test_module_bind():
     assert mod._mod_aux.binded == True
 
 
-@unittest.skip("Flaky test https://gitsvrhub.com/apache/incubator-mxnet/issues/12510")
+@unittest.skip("Flaky test https://gitsvrhub.com/apache/mxnet/issues/12510")
 @with_seed()
 def test_module_save_load():
     import tempfile
@@ -135,7 +135,7 @@ def test_module_save_load():
     assert mod3._symbol.tojson() == mod4._symbol.tojson()
 
 
-@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/12510")
+@unittest.skip("Flaky test https://github.com/apache/mxnet/issues/12510")
 @with_seed()
 def test_svrgmodule_reshape():
     data = mx.sym.Variable("data")
@@ -163,7 +163,7 @@ def test_svrgmodule_reshape():
     assert mod.get_outputs()[0].shape == dshape
 
 
-@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/12510")
+@unittest.skip("Flaky test https://github.com/apache/mxnet/issues/12510")
 @with_seed()
 def test_update_full_grad():
     def create_network():
@@ -206,7 +206,7 @@ def test_update_full_grad():
     assert same(full_grads_weights, svrg_mod._param_dict[0]['fc1_weight'])
 
 
-@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/12510")
+@unittest.skip("Flaky test https://github.com/apache/mxnet/issues/12510")
 @with_seed()
 def test_svrg_with_sgd():
     def create_module_with_sgd():
@@ -270,7 +270,7 @@ def test_svrg_with_sgd():
     assert svrg_mse < sgd_mse
 
 
-@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/12510")
+@unittest.skip("Flaky test https://github.com/apache/mxnet/issues/12510")
 @with_seed()
 def test_accumulate_kvstore():
     # Test KVStore behavior when push a list of values
@@ -294,7 +294,7 @@ def test_accumulate_kvstore():
     assert same(svrg_mod._param_dict[0]["fc1_weight"], b[0])
 
 
-@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/12510")
+@unittest.skip("Flaky test https://github.com/apache/mxnet/issues/12510")
 @with_seed()
 def test_fit():
     di, mod = setup()
diff --git a/tests/python/unittest/test_executor.py b/tests/python/unittest/test_executor.py
index 29fda3b08f..a81130805c 100644
--- a/tests/python/unittest/test_executor.py
+++ b/tests/python/unittest/test_executor.py
@@ -105,7 +105,7 @@ def test_bind():
 
 
 # @roywei: Removing fixed seed as flakiness in this test is fixed
-# tracked at https://github.com/apache/incubator-mxnet/issues/11686
+# tracked at https://github.com/apache/mxnet/issues/11686
 @with_seed()
 def test_dot():
     nrepeat = 10
diff --git a/tests/python/unittest/test_gluon.py b/tests/python/unittest/test_gluon.py
index 0eb8340107..130a32c406 100644
--- a/tests/python/unittest/test_gluon.py
+++ b/tests/python/unittest/test_gluon.py
@@ -2120,7 +2120,7 @@ def test_group_conv2d_16c():
                 check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_deconv2d_16c():
     in_chn_list = [1024, 512, 256, 128, 64, 32, 16]
     out_chn_list = [512, 256, 128, 64, 32, 16, 3]
@@ -2144,7 +2144,7 @@ def test_deconv2d_16c():
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_batchnorm_16c():
     chn_list = [16, 1024]
     shape = np.random.randint(low=1, high=300, size=10)
@@ -2262,7 +2262,7 @@ def test_reshape_conv():
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_conv_reshape_conv():
     class Net(gluon.HybridBlock):
         def __init__(self, **kwargs):
@@ -2321,7 +2321,7 @@ def test_slice_conv_slice_conv():
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_slice_conv_reshape_conv():
     class Net(gluon.HybridBlock):
         def __init__(self, **kwargs):
@@ -2501,7 +2501,7 @@ def test_reshape_dense_slice_dense():
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_batchnorm():
     class Net(gluon.HybridBlock):
         def __init__(self, shape, **kwargs):
@@ -2547,7 +2547,7 @@ def test_slice_batchnorm():
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_slice_batchnorm_slice_batchnorm():
     class Net(gluon.HybridBlock):
         def __init__(self, slice, **kwargs):
@@ -2573,7 +2573,7 @@ def test_slice_batchnorm_slice_batchnorm():
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_batchnorm_reshape_batchnorm():
     class Net(gluon.HybridBlock):
         def __init__(self, shape, **kwargs):
@@ -2626,7 +2626,7 @@ def test_slice_batchnorm_reshape_batchnorm():
 
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_batchnorm_slice_batchnorm():
     class Net(gluon.HybridBlock):
         def __init__(self, shape, slice, **kwargs):
@@ -2653,7 +2653,7 @@ def test_reshape_batchnorm_slice_batchnorm():
     check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_pooling2d():
     max_pooling = nn.MaxPool2D(strides=(2, 3), padding=(1, 1))
     avg_pooling = nn.AvgPool2D(strides=(2, 2), padding=(1, 1))
@@ -2720,7 +2720,7 @@ def test_slice_pooling2d():
             check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_pooling2d_reshape_pooling2d():
     max_pooling = nn.MaxPool2D(strides=(2, 2), padding=(1, 1))
     avg_pooling = nn.AvgPool2D(strides=(2, 2), padding=(1, 1))
@@ -2791,7 +2791,7 @@ def test_slice_pooling2d_slice_pooling2d():
             check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_slice_pooling2d_reshape_pooling2d():
     max_pooling = nn.MaxPool2D(strides=(2, 3), padding=(1, 1))
     avg_pooling = nn.AvgPool2D(strides=(2, 2), padding=(1, 1))
@@ -2828,7 +2828,7 @@ def test_slice_pooling2d_reshape_pooling2d():
             check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_pooling2d_slice_pooling2d():
     max_pooling = nn.MaxPool2D(strides=(2, 3), padding=(1, 1))
     avg_pooling = nn.AvgPool2D(strides=(2, 2), padding=(1, 1))
@@ -2867,7 +2867,7 @@ def test_reshape_pooling2d_slice_pooling2d():
             check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_deconv():
     class Net(gluon.HybridBlock):
         def __init__(self, shape, **kwargs):
@@ -2886,7 +2886,7 @@ def test_reshape_deconv():
     check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_slice_deconv():
     class Net(gluon.HybridBlock):
         def __init__(self, slice, **kwargs):
@@ -2905,7 +2905,7 @@ def test_slice_deconv():
     check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_deconv_reshape_deconv():
     class Net(gluon.HybridBlock):
         def __init__(self, shape, **kwargs):
@@ -2928,7 +2928,7 @@ def test_reshape_deconv_reshape_deconv():
     check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_slice_deconv_slice_deconv():
     class Net(gluon.HybridBlock):
         def __init__(self, slice, **kwargs):
@@ -2951,7 +2951,7 @@ def test_slice_deconv_slice_deconv():
     check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_reshape_deconv_slice_deconv():
     class Net(gluon.HybridBlock):
         def __init__(self, shape, slice, **kwargs):
@@ -2976,7 +2976,7 @@ def test_reshape_deconv_slice_deconv():
     check_layer_forward_withinput(net, x)
 
 @with_seed()
-@unittest.skip('skippping temporarily, tracked by https://github.com/apache/incubator-mxnet/issues/11164')
+@unittest.skip('skipping temporarily, tracked by https://github.com/apache/mxnet/issues/11164')
 def test_slice_deconv_reshape_deconv():
     class Net(gluon.HybridBlock):
         def __init__(self, shape, slice, **kwargs):
diff --git a/tests/python/unittest/test_gluon_utils.py b/tests/python/unittest/test_gluon_utils.py
index bc816b1794..c40379893b 100644
--- a/tests/python/unittest/test_gluon_utils.py
+++ b/tests/python/unittest/test_gluon_utils.py
@@ -53,7 +53,7 @@ def test_download_retries():
 def _download_successful(tmp):
     """ internal use for testing download successfully """
     mx.gluon.utils.download(
-        "https://raw.githubusercontent.com/apache/incubator-mxnet/master/README.md",
+        "https://raw.githubusercontent.com/apache/mxnet/master/README.md",
         path=tmp)
 
 
diff --git a/tests/python/unittest/test_loss.py b/tests/python/unittest/test_loss.py
index 0b8374b9b9..4eb1ec0c50 100644
--- a/tests/python/unittest/test_loss.py
+++ b/tests/python/unittest/test_loss.py
@@ -64,7 +64,7 @@ def get_net(num_hidden, flatten=True):
     fc3 = mx.symbol.FullyConnected(act2, name='fc3', num_hidden=num_hidden, flatten=flatten)
     return fc3
 
-# tracked at: https://github.com/apache/incubator-mxnet/issues/11692
+# tracked at: https://github.com/apache/mxnet/issues/11692
 @with_seed()
 def test_ce_loss():
     nclass = 10
@@ -83,7 +83,7 @@ def test_ce_loss():
             initializer=mx.init.Xavier(magnitude=2))
     assert mod.score(data_iter, eval_metric=mx.metric.Loss())[0][1] < 0.05
 
-# tracked at: https://github.com/apache/incubator-mxnet/issues/11691
+# tracked at: https://github.com/apache/mxnet/issues/11691
 @with_seed()
 def test_bce_loss():
     N = 20
diff --git a/tests/python/unittest/test_module.py b/tests/python/unittest/test_module.py
index b82933126d..77a18d8643 100644
--- a/tests/python/unittest/test_module.py
+++ b/tests/python/unittest/test_module.py
@@ -404,7 +404,7 @@ def test_module_switch_bucket():
 
 
 # roywei: Getting rid of fixed seed as flakiness could not be reproduced,
-# tracked at: https://github.com/apache/incubator-mxnet/issues/11705
+# tracked at: https://github.com/apache/mxnet/issues/11705
 @with_seed()
 def test_module_set_params():
     # data iter
diff --git a/tests/python/unittest/test_ndarray.py b/tests/python/unittest/test_ndarray.py
index c8fbf35065..0a869475f6 100644
--- a/tests/python/unittest/test_ndarray.py
+++ b/tests/python/unittest/test_ndarray.py
@@ -154,7 +154,7 @@ def test_ndarray_setitem():
         assert x.shape == trivial_shape
         assert same(x.asnumpy(), x_np)
 
-    # test https://github.com/apache/incubator-mxnet/issues/16647
+    # test https://github.com/apache/mxnet/issues/16647
     dst = mx.nd.zeros((1, 3, 1))  # destination array
     src = [1, 2, 3]
     dst[0, :len(src), 0] = src
@@ -237,7 +237,7 @@ def test_ndarray_reshape():
     assert same(tensor.reshape(-1, 15).reshape(0, -4, 3, -1).asnumpy(), true_res.reshape(2, 3, 5).asnumpy())
     assert same(tensor.reshape(-1, 0).asnumpy(), true_res.reshape(10, 3).asnumpy())
     assert same(tensor.reshape(-1, 0, reverse=True).asnumpy(), true_res.reshape(6, 5).asnumpy())
-    # https://github.com/apache/incubator-mxnet/issues/18886
+    # https://github.com/apache/mxnet/issues/18886
     assertRaises(ValueError, tensor.reshape, (2, 3))
 
 @with_seed()
@@ -1664,7 +1664,7 @@ def test_ndarray_indexing():
 
 
 def test_assign_float_value_to_ndarray():
-    """Test case from https://github.com/apache/incubator-mxnet/issues/8668"""
+    """Test case from https://github.com/apache/mxnet/issues/8668"""
     a = np.array([47.844944], dtype=np.float32)
     b = mx.nd.zeros(1, dtype=np.float32)
     b[0] = a
@@ -1673,7 +1673,7 @@ def test_assign_float_value_to_ndarray():
     assert same(a, b.asnumpy())
 
 def test_assign_large_int_to_ndarray():
-    """Test case from https://github.com/apache/incubator-mxnet/issues/11639"""
+    """Test case from https://github.com/apache/mxnet/issues/11639"""
     a = mx.nd.zeros((4, 1), dtype=np.int32)
     a[1,0] = int(16800001)
     a[2,0] = int(16800002)
@@ -1685,7 +1685,7 @@ def test_assign_large_int_to_ndarray():
 
 @with_seed()
 def test_assign_a_row_to_ndarray():
-    """Test case from https://github.com/apache/incubator-mxnet/issues/9976"""
+    """Test case from https://github.com/apache/mxnet/issues/9976"""
     H, W = 10, 10
     dtype = np.float32
     a_np = np.random.random((H, W)).astype(dtype)
@@ -2021,7 +2021,7 @@ def test_update_ops_mutation():
 
 
 # Problem :
-# https://github.com/apache/incubator-mxnet/pull/15768#issuecomment-532046408
+# https://github.com/apache/mxnet/pull/15768#issuecomment-532046408
 @with_seed(412298777)
 def test_update_ops_mutation_failed_seed():
     # The difference was -5.9604645e-08 which was
diff --git a/tests/python/unittest/test_numpy_ndarray.py b/tests/python/unittest/test_numpy_ndarray.py
index 3ce53c6a6e..d456ac677f 100644
--- a/tests/python/unittest/test_numpy_ndarray.py
+++ b/tests/python/unittest/test_numpy_ndarray.py
@@ -1104,7 +1104,7 @@ def test_np_multinomial():
                 freq = mx.np.random.multinomial(experiements, pvals, size=size).asnumpy()
                 assert freq.size == 0
     # test small experiment for github issue
-    # https://github.com/apache/incubator-mxnet/issues/15383
+    # https://github.com/apache/mxnet/issues/15383
     small_exp, total_exp = 20, 10000
     for pvals_mx_np_array in [False, True]:
         for pvals in pvals_list:
diff --git a/tests/python/unittest/test_numpy_op.py b/tests/python/unittest/test_numpy_op.py
index c1f899dcb3..fc25f8cd69 100644
--- a/tests/python/unittest/test_numpy_op.py
+++ b/tests/python/unittest/test_numpy_op.py
@@ -3494,7 +3494,7 @@ def test_np_ravel():
 
 @with_seed()
 @use_np
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_np_randint():
     ctx = mx.context.current_context()
     # test shapes
@@ -7264,7 +7264,7 @@ def test_np_pad():
 
 @with_seed()
 @use_np
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_np_rand():
     # Test shapes.
     shapes = [
diff --git a/tests/python/unittest/test_operator.py b/tests/python/unittest/test_operator.py
index e7ac61e552..4169eb6f95 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -2129,7 +2129,7 @@ def test_convolution_grouping():
                 np.testing.assert_allclose(arr1.asnumpy(), arr2.asnumpy(), rtol=1e-3, atol=1e-3)
 
 
-@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/14052")
+@unittest.skip("Flaky test https://github.com/apache/mxnet/issues/14052")
 @with_seed()
 def test_depthwise_convolution():
     for dim in [1,2]:
@@ -2176,9 +2176,9 @@ def test_depthwise_convolution():
 
 @with_seed()
 def test_convolution_independent_gradients():
-    # NOTE(zixuanweeei): Flaky test tracked by https://github.com/apache/incubator-mxnet/issues/15603.
+    # NOTE(zixuanweeei): Flaky test tracked by https://github.com/apache/mxnet/issues/15603.
     # GPU context will be enabled after figuring out the possible issue tracked at
-    # https://github.com/apache/incubator-mxnet/issues/15638.
+    # https://github.com/apache/mxnet/issues/15638.
     ctx = mx.cpu()
     atol = 1.0e-3
     rtol = 1.0e-3
@@ -3970,7 +3970,7 @@ def test_norm():
                                                         [np.ones(npy_out.shape).astype(out_dtype)],
                                                         [npy_out_backward], rtol=1e-3, atol=1e-5, ctx=ctx,
                                                         dtype=backward_dtype)
-                                # Disable numeric gradient https://github.com/apache/incubator-mxnet/issues/11509
+                                # Disable numeric gradient https://github.com/apache/mxnet/issues/11509
                                 # check gradient
                                 if dtype is not np.float16 and not skip_backward:
                                     check_numeric_gradient(norm_sym, [in_data], numeric_eps=epsilon,
@@ -4097,7 +4097,7 @@ def check_sequence_func(ftype, mask_value=0, axis=0):
 
 
 @with_seed()
-@unittest.skip("Flaky test: https://github.com/apache/incubator-mxnet/issues/11395")
+@unittest.skip("Flaky test: https://github.com/apache/mxnet/issues/11395")
 def test_sequence_last():
     check_sequence_func("last", axis=0)
     check_sequence_func("last", axis=1)
@@ -6062,11 +6062,11 @@ def test_custom_op():
         x = mx.nd.Custom(length=10, depth=10, op_type="no_input_op")
     assert_almost_equal(x, np.ones(shape=(10, 10), dtype=np.float32))
 
-@unittest.skip("Flaky test, tracked at https://github.com/apache/incubator-mxnet/issues/17467")
+@unittest.skip("Flaky test, tracked at https://github.com/apache/mxnet/issues/17467")
 @with_seed()
 def test_custom_op_fork():
     # test custom operator fork
-    # see https://github.com/apache/incubator-mxnet/issues/14396
+    # see https://github.com/apache/mxnet/issues/14396
     class AdditionOP(mx.operator.CustomOp):
         def __init__(self):
             super(AdditionOP, self).__init__()
@@ -6130,7 +6130,7 @@ def _build_dot_custom(fun_forward, name):
 @with_seed()
 def test_custom_op_exc():
     # test except handling
-    # see https://github.com/apache/incubator-mxnet/pull/14693
+    # see https://github.com/apache/mxnet/pull/14693
     # 1. error in python code
     def custom_exc1():
         def f(in_data, out_data):
@@ -6327,7 +6327,7 @@ def _validate_sample_location(input_rois, input_offset, spatial_scale, pooled_w,
 
     return output_offset
 
-@unittest.skip("Flaky test, tracked at https://github.com/apache/incubator-mxnet/issues/11713")
+@unittest.skip("Flaky test, tracked at https://github.com/apache/mxnet/issues/11713")
 @with_seed()
 def test_deformable_psroipooling():
     sample_per_part = 4
@@ -6532,7 +6532,7 @@ def _make_triangle_symm(a, ndims, m, lower, dtype=np.float32):
     return mx.sym.broadcast_mul(a, lt_mask)
 
 # @ankkhedia: Getting rid of fixed seed as flakiness could not be reproduced
-# tracked at https://github.com/apache/incubator-mxnet/issues/11718
+# tracked at https://github.com/apache/mxnet/issues/11718
 @with_seed()
 def test_laop():
     dtype = np.float64
@@ -6894,7 +6894,7 @@ def test_laop_3():
 
 
 # @piyushghai - Removing the fixed seed for this test.
-# Issue for flakiness is tracked at - https://github.com/apache/incubator-mxnet/issues/11721
+# Issue for flakiness is tracked at - https://github.com/apache/mxnet/issues/11721
 @with_seed()
 def test_laop_4():
     # Currently disabled on GPU as syevd needs cuda8
@@ -6970,7 +6970,7 @@ def test_laop_5():
 
 # Tests for linalg.inverse
 @with_seed()
-@unittest.skip("Test crashes https://github.com/apache/incubator-mxnet/issues/15975")
+@unittest.skip("Test crashes https://github.com/apache/mxnet/issues/15975")
 def test_laop_6():
     dtype = np.float64
     rtol_fw = 1e-7
@@ -7036,7 +7036,7 @@ def test_stack():
 
 
 ## TODO: test fails intermittently when cudnn is on; temporarily disabled cudnn until it gets fixed.
-## tracked at https://github.com/apache/incubator-mxnet/issues/14288
+## tracked at https://github.com/apache/mxnet/issues/14288
 @with_seed()
 def test_dropout():
     def zero_count(array, ratio):
@@ -7207,7 +7207,7 @@ def test_dropout():
         # check_dropout_axes(0.25, nshape, axes = (1, 2, 3), cudnn_off=False)
 
 
-@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/incubator-mxnet/issues/11290")
+@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/mxnet/issues/11290")
 @with_seed()
 def test_scatter_gather_nd():
     def check(data, idx):
@@ -7729,7 +7729,7 @@ def test_slice_partial_infer():
 
 @with_seed()
 def test_float16_min_max():
-    """Test for issue: https://github.com/apache/incubator-mxnet/issues/9007"""
+    """Test for issue: https://github.com/apache/mxnet/issues/9007"""
     a = mx.nd.array([np.finfo('float16').min, np.finfo('float16').max], dtype='float16')
     assert a.dtype == np.float16
     assert np.finfo('float16').min == mx.nd.min(a).asscalar()
@@ -8453,7 +8453,7 @@ def test_monitor_with_variable_input_shape():
         del os.environ['MXNET_SUBGRAPH_BACKEND']
 
 @with_seed()
-@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/incubator-mxnet/issues/13915")
+@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/mxnet/issues/13915")
 def test_activation():
     shapes = [(9,), (9, 10), (9, 10, 10), (1, 9, 10, 10)]
     dtype_l = [np.float64, np.float32, np.float16]
@@ -9462,7 +9462,7 @@ def test_transpose_infer_shape_mixed():
 
 @with_seed()
 def test_sample_normal_default_shape():
-    # Test case from https://github.com/apache/incubator-mxnet/issues/16135
+    # Test case from https://github.com/apache/mxnet/issues/16135
     s = mx.nd.sample_normal(mu=mx.nd.array([10.0]), sigma=mx.nd.array([0.5]))
     assert s.shape == (1,)
     s = mx.nd.sample_normal(mu=mx.nd.array([10.0]), sigma=mx.nd.array([0.5]), shape=())
@@ -10059,7 +10059,7 @@ def test_scalarop_locale_invariance():
 
 @with_seed()
 def test_take_grads():
-    # Test for https://github.com/apache/incubator-mxnet/issues/19817
+    # Test for https://github.com/apache/mxnet/issues/19817
     from mxnet.gluon.nn import HybridBlock, Conv1D, HybridSequential, HybridLambda, Dense
     from mxnet import autograd, nd
     from mxnet.gluon.loss import L2Loss
diff --git a/tests/python/unittest/test_profiler.py b/tests/python/unittest/test_profiler.py
index ae7352c193..3ff5f7b197 100644
--- a/tests/python/unittest/test_profiler.py
+++ b/tests/python/unittest/test_profiler.py
@@ -445,7 +445,7 @@ def test_custom_operator_profiling_multiple_custom_ops_imperative():
             'test_custom_operator_profiling_multiple_custom_ops_imperative.json')
 
 
-@unittest.skip("Flaky test https://github.com/apache/incubator-mxnet/issues/15406")
+@unittest.skip("Flaky test https://github.com/apache/mxnet/issues/15406")
 def test_custom_operator_profiling_naive_engine():
     # run the three tests above using Naive Engine
     run_in_spawned_process(test_custom_operator_profiling, \
diff --git a/tests/python/unittest/test_random.py b/tests/python/unittest/test_random.py
index f85503f98d..ede38727cc 100644
--- a/tests/python/unittest/test_random.py
+++ b/tests/python/unittest/test_random.py
@@ -603,7 +603,7 @@ def test_sample_multinomial():
 
 # Test the generators with the chi-square testing
 @with_seed()
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_normal_generator():
     ctx = mx.context.current_context()
     samples = 1000000
@@ -628,7 +628,7 @@ def test_normal_generator():
                              nsamples=samples, nrepeat=trials)
 
 @with_seed()
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_uniform_generator():
     ctx = mx.context.current_context()
     for dtype in ['float16', 'float32', 'float64']:
@@ -662,7 +662,7 @@ def test_gamma_generator():
             verify_generator(generator=generator_mx_same_seed, buckets=buckets, probs=probs, success_rate=success_rate)
 
 @with_seed()
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_exponential_generator():
     ctx = mx.context.current_context()
     for dtype in ['float16', 'float32', 'float64']:
@@ -677,7 +677,7 @@ def test_exponential_generator():
             verify_generator(generator=generator_mx_same_seed, buckets=buckets, probs=probs, success_rate=0.20)
 
 @with_seed()
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_poisson_generator():
     ctx = mx.context.current_context()
     for dtype in ['float16', 'float32', 'float64']:
@@ -693,7 +693,7 @@ def test_poisson_generator():
             verify_generator(generator=generator_mx_same_seed, buckets=buckets, probs=probs)
 
 @with_seed()
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_negative_binomial_generator():
     ctx = mx.context.current_context()
     for dtype in ['float16', 'float32', 'float64']:
@@ -723,7 +723,7 @@ def test_negative_binomial_generator():
         verify_generator(generator=generator_mx_same_seed, buckets=buckets, probs=probs)
 
 @with_seed()
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_multinomial_generator():
     # This test fails with dtype float16 if the probabilities themselves cannot be
     # well-represented in float16.  When the float16 random picks are assigned to buckets,
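
The float16 representability issue this comment describes is easy to demonstrate directly. The following is an illustrative sketch (not part of the test suite): float16's 10-bit mantissa rounds a probability like 0.1, so bucket frequencies drift from the requested distribution.

```
import numpy as np

# 0.1 has no exact binary representation; float16 rounds it noticeably.
p16 = np.float16(0.1)
print(float(p16))              # 0.0999755859375
print(abs(float(p16) - 0.1))   # rounding error of roughly 2.4e-5
```
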
@@ -902,7 +902,7 @@ def test_zipfian_generator():
     assert_almost_equal(exp_cnt_sampled, exp_cnt[sampled_classes], rtol=1e-1, atol=1e-2)
     assert_almost_equal(exp_cnt_true, exp_cnt[true_classes], rtol=1e-1, atol=1e-2)
 
-# Issue #10277 (https://github.com/apache/incubator-mxnet/issues/10277) discusses this test.
+# Issue #10277 (https://github.com/apache/mxnet/issues/10277) discusses this test.
 @with_seed()
 def test_shuffle():
     def check_first_axis_shuffle(arr):
@@ -1006,7 +1006,7 @@ def test_randint_extremes():
     assert a>=50000000 and a<=50000010
 
 @with_seed()
-@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/incubator-mxnet/issues/20389')
+@unittest.skipIf(sys.platform.startswith('win'), 'https://github.com/apache/mxnet/issues/20389')
 def test_randint_generator():
     ctx = mx.context.current_context()
     for dtype in ['int32', 'int64']:
diff --git a/tests/python/unittest/test_sparse_ndarray.py b/tests/python/unittest/test_sparse_ndarray.py
index 9a1fce4ff1..1bf57a383a 100644
--- a/tests/python/unittest/test_sparse_ndarray.py
+++ b/tests/python/unittest/test_sparse_ndarray.py
@@ -557,7 +557,7 @@ def test_sparse_nd_pickle():
 
 
 # @kalyc: Getting rid of fixed seed as flakiness could not be reproduced
-# tracked at https://github.com/apache/incubator-mxnet/issues/11741
+# tracked at https://github.com/apache/mxnet/issues/11741
 @with_seed()
 def test_sparse_nd_save_load():
     repeat = 1
diff --git a/tests/python/unittest/test_sparse_operator.py b/tests/python/unittest/test_sparse_operator.py
index 4c4e3dbdfc..2c105eaa08 100644
--- a/tests/python/unittest/test_sparse_operator.py
+++ b/tests/python/unittest/test_sparse_operator.py
@@ -2173,7 +2173,7 @@ def test_batchnorm_fallback():
 @with_seed()
 def test_mkldnn_sparse():
     # This test is trying to create a race condition described in
-    # https://github.com/apache/incubator-mxnet/issues/10189
+    # https://github.com/apache/mxnet/issues/10189
     arr = mx.nd.random.uniform(shape=(10, 10, 32, 32))
     weight1 = mx.nd.random.uniform(shape=(10, 10, 3, 3))
     arr = mx.nd.Convolution(data=arr, weight=weight1, no_bias=True, kernel=(3, 3), num_filter=10)
diff --git a/tests/python/unittest/test_symbol.py b/tests/python/unittest/test_symbol.py
index 2aabfbf9b2..042d747e88 100644
--- a/tests/python/unittest/test_symbol.py
+++ b/tests/python/unittest/test_symbol.py
@@ -351,7 +351,7 @@ def test_simple_bind_gradient_graph_possible_with_cycle():
     are the outputs of the same node. Therefore, adding a node to the
     control_deps of itself must be skipped.
     See GitHub issue:
-    https://github.com/apache/incubator-mxnet/issues/8029
+    https://github.com/apache/mxnet/issues/8029
     for more details."""
     data = mx.symbol.Variable('data')
     res = data + data + data + data + data + data + data + data
diff --git a/tests/python/unittest/test_test_utils.py b/tests/python/unittest/test_test_utils.py
index 49f0b932fd..a31c2367d6 100644
--- a/tests/python/unittest/test_test_utils.py
+++ b/tests/python/unittest/test_test_utils.py
@@ -29,6 +29,6 @@ def test_download_retries():
 def test_download_successful():
     tmp = tempfile.mkdtemp()
     tmpfile = os.path.join(tmp, 'README.md')
-    mx.test_utils.download("https://raw.githubusercontent.com/apache/incubator-mxnet/master/README.md",
+    mx.test_utils.download("https://raw.githubusercontent.com/apache/mxnet/master/README.md",
                            fname=tmpfile)
     assert os.path.getsize(tmpfile) > 100
\ No newline at end of file
diff --git a/tests/requirements.txt b/tests/requirements.txt
index a58e4b4fc3..66006e42f6 100644
--- a/tests/requirements.txt
+++ b/tests/requirements.txt
@@ -1,3 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
 # Requirements for tests; these are installed into the virtualenv before running
 # For requirements for tests run within qemu, see ci/qemu/test_requirements.txt
 mock
@@ -6,6 +23,6 @@ nose-timer
 ipython
 # Allow numpy version as advanced as 1.19.5 to avoid CVE-2021-41495 and CVE-2021-41496 affecting <1.19.1.
 numpy>=1.16.0,<1.20.0
-scipy<1.7.0 # Restrict scipy version due to https://github.com/apache/incubator-mxnet/issues/20389
+scipy<1.7.0 # Restrict scipy version due to https://github.com/apache/mxnet/issues/20389
 onnxruntime
 packaging
diff --git a/tests/tutorials/test_tutorials.py b/tests/tutorials/test_tutorials.py
index 2ebd2f8e92..8065aa9e69 100644
--- a/tests/tutorials/test_tutorials.py
+++ b/tests/tutorials/test_tutorials.py
@@ -114,7 +114,7 @@ def test_gluon_save_load_params():
 
 def test_gluon_hybrid():
     assert _test_tutorial_nb('gluon/hybrid')
-# https://github.com/apache/incubator-mxnet/issues/16181
+# https://github.com/apache/mxnet/issues/16181
 """
 def test_gluon_performance():
     assert _test_tutorial_nb('gluon/performance')
@@ -184,7 +184,7 @@ def test_module_to_gluon():
 
 def test_python_types_of_data_augmentation():
     assert _test_tutorial_nb('python/types_of_data_augmentation')
-#https://github.com/apache/incubator-mxnet/issues/16181
+#https://github.com/apache/mxnet/issues/16181
 """
 def test_python_profiler():
     assert _test_tutorial_nb('python/profiler')
@@ -218,7 +218,7 @@ def test_control_flow():
 
 def test_amp():
     assert _test_tutorial_nb('amp/amp_tutorial')
-# https://github.com/apache/incubator-mxnet/issues/16181
+# https://github.com/apache/mxnet/issues/16181
 """
 def test_mkldnn_quantization():
     assert _test_tutorial_nb('mkldnn/mkldnn_quantization')
diff --git a/tools/caffe_translator/README.md b/tools/caffe_translator/README.md
index c21ec50d2d..8653bcf24b 100644
--- a/tools/caffe_translator/README.md
+++ b/tools/caffe_translator/README.md
@@ -16,7 +16,7 @@
 <!--- under the License. -->
 
 # Caffe Translator
-Caffe Translator is a migration tool that helps developers migrate their existing Caffe code to MXNet and continue further development using MXNet. Note that this is different from the Caffe to MXNet model converter which is available [here](https://github.com/apache/incubator-mxnet/tree/master/tools/caffe_converter).
+Caffe Translator is a migration tool that helps developers migrate their existing Caffe code to MXNet and continue further development using MXNet. Note that this is different from the Caffe to MXNet model converter which is available [here](https://github.com/apache/mxnet/tree/master/tools/caffe_converter).
 
 Caffe Translator takes the training/validation prototxt ([example](https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt)) and solver prototxt ([example](https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_solver.prototxt)) as input and produces MXNet Python code ([example](https://www.caffetranslator.org/examples/lenet/lenet_translated.py)) as output. The translated Python code uses MXNet Symbol and Module API to build the network, reads data from [...]
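
To make the target of the translation concrete, here is a minimal, hypothetical sketch of the Symbol/Module pattern that translated code follows. The layer names, shapes, and random data below are illustrative only, not actual translator output.

```
import mxnet as mx
import numpy as np

# A small network built with the Symbol API, as translated code would do.
data = mx.sym.Variable('data')
fc1  = mx.sym.FullyConnected(data, num_hidden=128, name='fc1')
act1 = mx.sym.Activation(fc1, act_type='relu', name='relu1')
fc2  = mx.sym.FullyConnected(act1, num_hidden=10, name='fc2')
net  = mx.sym.SoftmaxOutput(fc2, name='softmax')

# Bind the symbol into a Module and run one forward pass on random data.
x = np.random.rand(32, 100).astype(np.float32)
y = np.random.randint(0, 10, (32,)).astype(np.float32)
train_iter = mx.io.NDArrayIter(x, y, batch_size=32)

mod = mx.mod.Module(net, context=mx.cpu())
mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
mod.init_params()
mod.forward(train_iter.next())
print(mod.get_outputs()[0].shape)  # (32, 10)
```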
 
@@ -60,8 +60,8 @@ Here is the list of command line parameters accepted by the Caffe Translator:
 #### Run the translated code:
 
 The following prerequisites are required to run the translated code:
-1. Caffe with MXNet interface ([Why?](faq.md#why_caffe) [How to build?](https://github.com/apache/incubator-mxnet/tree/master/plugin/caffe#install-caffe-with-mxnet-interface))
-2. MXNet with Caffe plugin ([How to build?](https://github.com/apache/incubator-mxnet/tree/master/plugin/caffe#compile-with-caffe))
+1. Caffe with MXNet interface ([Why?](faq.md#why_caffe) [How to build?](https://github.com/apache/mxnet/tree/master/plugin/caffe#install-caffe-with-mxnet-interface))
+2. MXNet with Caffe plugin ([How to build?](https://github.com/apache/mxnet/tree/master/plugin/caffe#compile-with-caffe))
 3. The dataset in LMDB format.
 
 Once prerequisites are installed, the translated Python code can be run like any other Python code:
@@ -91,6 +91,6 @@ Caffe Translator can currently translate the following layers:
 - Scale<sup>*</sup>
 - SoftmaxOutput
 
-<sup>*</sup> Uses [CaffePlugin](https://github.com/apache/incubator-mxnet/tree/master/plugin/caffe)
+<sup>*</sup> Uses [CaffePlugin](https://github.com/apache/mxnet/tree/master/plugin/caffe)
 
-If you want Caffe Translator to translate a layer that is not in the above list, please create an [issue](https://github.com/apache/incubator-mxnet/issues/new).
+If you want Caffe Translator to translate a layer that is not in the above list, please create an [issue](https://github.com/apache/mxnet/issues/new).
diff --git a/tools/caffe_translator/build.gradle b/tools/caffe_translator/build.gradle
index da5e9003a1..78795cbc8f 100644
--- a/tools/caffe_translator/build.gradle
+++ b/tools/caffe_translator/build.gradle
@@ -140,9 +140,9 @@ uploadShadow {
                 }
 
                 scm {
-                    connection 'scm:git:git://github.com:apache/incubator-mxnet.git'
-                    developerConnection 'scm:git:git@github.com:apache/incubator-mxnet.git'
-                    url 'https://github.com/apache/incubator-mxnet.git'
+                    connection 'scm:git:git://github.com:apache/mxnet.git'
+                    developerConnection 'scm:git:git@github.com:apache/mxnet.git'
+                    url 'https://github.com/apache/mxnet.git'
                 }
             }
         }
diff --git a/tools/caffe_translator/build_from_source.md b/tools/caffe_translator/build_from_source.md
index c08a423a44..6815b9576b 100644
--- a/tools/caffe_translator/build_from_source.md
+++ b/tools/caffe_translator/build_from_source.md
@@ -24,7 +24,7 @@
 
 Step 1: Clone the code:
 ```
-git clone https://github.com/apache/incubator-mxnet.git mxnet
+git clone https://github.com/apache/mxnet.git mxnet
 ```
 Step 2: CD to CaffeTranslator directory
 ```
diff --git a/tools/coreml/pip_package/README.rst b/tools/coreml/pip_package/README.rst
index 495d6430c5..28479d2a29 100644
--- a/tools/coreml/pip_package/README.rst
+++ b/tools/coreml/pip_package/README.rst
@@ -18,11 +18,11 @@
 MXNET -> CoreML Converter
 =========================
 
-`Apache MXNet <https://github.com/apache/incubator-mxnet>`_ (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix `symbolic and imperative programming <https://mxnet.apache.org/api/architecture/program_model>`_ to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that  [...]
+`Apache MXNet <https://github.com/apache/mxnet>`_ (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix `symbolic and imperative programming <https://mxnet.apache.org/api/architecture/program_model>`_ to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symb [...]
 
 `Core ML <http://developer.apple.com/documentation/coreml>`_ is an Apple framework which allows developers to simply and easily integrate machine learning (ML) models into apps running on Apple devices (including iOS, watchOS, macOS, and tvOS). Core ML introduces a public file format (.mlmodel) for a broad set of ML methods including deep neural networks (both convolutional and recurrent), tree ensembles with boosting, and generalized linear models. Models in this format can be directly  [...]
 
-This tool helps convert `MXNet models <https://github.com/apache/incubator-mxnet>`_ into `Apple CoreML <https://developer.apple.com/documentation/coreml>`_ format which can then be run on Apple devices. You can find more information about this tool on our `github <https://github.com/apache/incubator-mxnet/tree/master/tools/coreml>`_ page.
+This tool helps convert `MXNet models <https://github.com/apache/mxnet>`_ into `Apple CoreML <https://developer.apple.com/documentation/coreml>`_ format which can then be run on Apple devices. You can find more information about this tool on our `github <https://github.com/apache/mxnet/tree/master/tools/coreml>`_ page.
 
 Prerequisites
 -------------
@@ -34,7 +34,7 @@ The method for installing this tool follows the `standard python package install
 
   pip install mxnet-to-coreml
 
-The package `documentation <https://github.com/apache/incubator-mxnet/tree/master/tools/coreml>`_ contains more details on how to use coremltools.
+The package `documentation <https://github.com/apache/mxnet/tree/master/tools/coreml>`_ contains more details on how to use coremltools.
 
 Dependencies
 ------------
@@ -56,6 +56,6 @@ In order to convert, say a `Squeezenet model <http://data.mxnet.io/models/imagen
 
 More Information
 ----------------
-* `On Github <https://github.com/apache/incubator-mxnet/tree/master/tools/coreml>`_
-* `MXNet framework <https://github.com/apache/incubator-mxnet>`_
+* `On Github <https://github.com/apache/mxnet/tree/master/tools/coreml>`_
+* `MXNet framework <https://github.com/apache/mxnet>`_
 * `Apple CoreML <https://developer.apple.com/documentation/coreml>`_
diff --git a/tools/coreml/pip_package/setup.py b/tools/coreml/pip_package/setup.py
index 35614271bf..10c6213afc 100644
--- a/tools/coreml/pip_package/setup.py
+++ b/tools/coreml/pip_package/setup.py
@@ -52,7 +52,7 @@ setup(name='mxnet-to-coreml',
         'Topic :: Software Development :: Libraries :: Python Modules'
       ],
       keywords='Apache MXNet Apple CoreML Converter Deep Learning',
-      url='https://github.com/apache/incubator-mxnet/tree/master/tools/coreml',
+      url='https://github.com/apache/mxnet/tree/master/tools/coreml',
       author='pracheer',
       author_email='pracheer_gupta@hotmail.com',
       license='Apache 2.0',
diff --git a/tools/create_source_archive.sh b/tools/create_source_archive.sh
index 6691a8b5cf..235fb73270 100755
--- a/tools/create_source_archive.sh
+++ b/tools/create_source_archive.sh
@@ -49,7 +49,7 @@ TARBALL=$SRCDIR.tar.gz
 # clone the repo and checkout the tag
 echo "Cloning the MXNet repository..."
 git clone -b $MXNET_TAG --depth 1 --recurse-submodules \
-	--shallow-submodules https://github.com/apache/incubator-mxnet.git \
+	--shallow-submodules https://github.com/apache/mxnet.git \
 	$SRCDIR
 pushd $SRCDIR
 
diff --git a/tools/dependencies/README.md b/tools/dependencies/README.md
index 5fad36bcb1..2665b6ad30 100644
--- a/tools/dependencies/README.md
+++ b/tools/dependencies/README.md
@@ -104,7 +104,7 @@ sudo apt-get install -y git \
 
 ### MKL, MKLDNN
 
-@pengzhao-intel (https://github.com/apache/incubator-mxnet/commits?author=pengzhao-intel) and his team are tracking and updating these versions. Kudos to them!
+@pengzhao-intel (https://github.com/apache/mxnet/commits?author=pengzhao-intel) and his team are tracking and updating these versions. Kudos to them!
 
 ### CUDA, cuDNN, NCCL
 
@@ -181,8 +181,8 @@ sudo apt install libnccl2 libnccl-dev
 We will build MXNet with statically linked dependencies.
 ```
 # Clone MXNet repo
-git clone --recursive https://github.com/apache/incubator-mxnet.git
-cd incubator-mxnet
+git clone --recursive https://github.com/apache/mxnet.git
+cd mxnet
 # Make sure you pin to a specific commit for all performance sanity checks to make a fair comparison
 # To upgrade the CUDA version, make the corresponding change in
 # tools/setup_gpu_build_tools.sh; please refer to PR #14887.
@@ -192,7 +192,7 @@ cd incubator-mxnet
 # Build PyPi package
 tools/staticbuild/build.sh cu100mkl
 
-# Wait for 10 - 30 mins, you will find libmxnet.so under the incubator-mxnet/lib
+# Wait for 10 - 30 mins; you will find libmxnet.so under mxnet/lib
 
 # Install python frontend
 pip install -e python
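
After the editable install, a quick smoke test along the following lines can confirm the freshly built library loads and computes. This is an assumption about what a sanity check covers, not the document's own procedure.

```
import mxnet as mx

print(mx.__version__)                 # should match the pinned commit's version
a = mx.nd.ones((2, 3)) * 2
assert a.sum().asscalar() == 12.0     # trivial compute sanity check
print(mx.runtime.Features())          # compiled-in features (MKLDNN, CUDA, ...)
```
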
@@ -216,14 +216,14 @@ OK
 Please run a performance test against the MXNet you built before raising the PR.
 
 #### 4. Raise a PR
-1. Update the tools/setup_gpu_build_tools.sh please refer to PR [#14988](https://github.com/apache/incubator-mxnet/pull/14988), [#14887](https://github.com/apache/incubator-mxnet/pull/14887/files)
-2. (optional) Update the CI-related configuration/shell script/Dockerfile. Please refer to PR [#14986](https://github.com/apache/incubator-mxnet/pull/14986/files), [#14950](https://github.com/apache/incubator-mxnet/pull/14950/files)
+1. Update tools/setup_gpu_build_tools.sh; please refer to PRs [#14988](https://github.com/apache/mxnet/pull/14988) and [#14887](https://github.com/apache/mxnet/pull/14887/files)
+2. (optional) Update the CI-related configuration, shell scripts, and Dockerfiles. Please refer to PRs [#14986](https://github.com/apache/mxnet/pull/14986/files) and [#14950](https://github.com/apache/mxnet/pull/14950/files)
 
 #### 5. CI Test
 1. Our CI tests the PyPI and Scala publishing of the latest CUDA version, i.e. mxnet-cu101mkl
 
 ### numpy, requests, graphviz (python dependencies)
-1. Please refer to [#14588](https://github.com/apache/incubator-mxnet/pull/14588/files) and make sure the version have both of upper bound and lower bound
+1. Please refer to [#14588](https://github.com/apache/mxnet/pull/14588/files) and make sure each version has both an upper bound and a lower bound (see the sketch after the checklist below)
 #### Checklist
 - [ ] Python/setup.py
 - [ ] tools/pip/setup.py
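
For illustration, bounded pins of the kind this checklist asks for might look like the following. The numpy bound matches tests/requirements.txt above; the requests and graphviz bounds here are assumptions, so consult the actual setup.py files.

```
# Illustrative only: version pins with both a lower and an upper bound.
install_requires = [
    'numpy>=1.16.0,<1.20.0',
    'requests>=2.20.0,<3',      # assumed bounds
    'graphviz>=0.8.1,<0.9.0',   # assumed bounds
]
```
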
@@ -281,7 +281,7 @@ sudo apt-get install -y git \
 # Update the dependency under tools/dependencies, then
 tools/staticbuild/build.sh mkl
 
-# Wait for 10 - 30 mins, you will find libmxnet.so under the incubator-mxnet/lib
-# Wait for 10 - 30 mins, you will find libmxnet.so under the incubator-mxnet/lib
+# Wait for 10 - 30 mins; you will find libmxnet.so under mxnet/lib
 
 # Install python frontend
 pip install -e python
@@ -330,7 +330,7 @@ sudo apt-get install -y git \
 # Update the dependency under tools/dependencies, then
 tools/staticbuild/build.sh mkl
 
-# Wait for 10 - 30 mins, you will find libmxnet.so under the incubator-mxnet/lib
+# Wait for 10 - 30 mins; you will find libmxnet.so under mxnet/lib
 
 # Install python frontend
 pip install -e python
diff --git a/tools/diagnose.py b/tools/diagnose.py
index 9cd5792a2f..05005b5032 100755
--- a/tools/diagnose.py
+++ b/tools/diagnose.py
@@ -31,7 +31,7 @@ except ImportError:
 import argparse
 
 URLS = {
-    'MXNet': 'https://github.com/apache/incubator-mxnet',
+    'MXNet': 'https://github.com/apache/mxnet',
     'Gluon Tutorial(en)': 'http://gluon.mxnet.io',
     'Gluon Tutorial(cn)': 'https://zh.gluon.ai',
     'FashionMNIST': 'https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz',
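
These URLs are the endpoints the script probes. A minimal sketch of such a reachability check follows; the exact approach is an assumption, though it mirrors the Python 2/3 import fallback visible in diagnose.py above.

```
import time
try:
    from urllib.request import urlopen
except ImportError:
    from urllib2 import urlopen  # Python 2 fallback

def check_url(name, url, timeout=10):
    # Time a small fetch of the URL to diagnose basic network reachability.
    start = time.time()
    try:
        urlopen(url, timeout=timeout).read(64)  # read a few bytes to force the connection
        print('%s: reachable in %.3fs' % (name, time.time() - start))
    except Exception as exc:
        print('%s: FAILED (%s)' % (name, exc))

check_url('MXNet', 'https://github.com/apache/mxnet')
```
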
diff --git a/tools/pip/MANIFEST.in b/tools/pip/MANIFEST.in
index 7a36d329a4..170b874f13 100644
--- a/tools/pip/MANIFEST.in
+++ b/tools/pip/MANIFEST.in
@@ -17,7 +17,6 @@
 
 include README
 include LICENSE
-include DISCLAIMER
 include NOTICE
 include mxnet/COMMIT_HASH
 recursive-include mxnet/tools *
diff --git a/tools/pip/doc/PYPI_README.md b/tools/pip/doc/PYPI_README.md
index d323a5545f..3d93eb418a 100644
--- a/tools/pip/doc/PYPI_README.md
+++ b/tools/pip/doc/PYPI_README.md
@@ -20,5 +20,5 @@ Apache MXNet (Incubating) Python Package
 [Apache MXNet](http://beta.mxnet.io) is a deep learning framework designed for both *efficiency* and *flexibility*.
 It allows you to mix the flavours of deep learning programs together to maximize the efficiency and your productivity.
 
-For feature requests on the PyPI package, suggestions, and issue reports, create an issue by clicking [here](https://github.com/apache/incubator-mxnet/issues/new).
+For feature requests on the PyPI package, suggestions, and issue reports, create an issue by clicking [here](https://github.com/apache/mxnet/issues/new).
 
diff --git a/tools/pip/setup.py b/tools/pip/setup.py
index a8ab7b0322..d5b3bcfeb2 100644
--- a/tools/pip/setup.py
+++ b/tools/pip/setup.py
@@ -213,5 +213,5 @@ setup(name=package_name,
           'Topic :: Software Development :: Libraries',
           'Topic :: Software Development :: Libraries :: Python Modules',
       ],
-      url='https://github.com/apache/incubator-mxnet')
+      url='https://github.com/apache/mxnet')
 
diff --git a/tools/staticbuild/build.sh b/tools/staticbuild/build.sh
index 71812b1e96..97ea1a5b92 100755
--- a/tools/staticbuild/build.sh
+++ b/tools/staticbuild/build.sh
@@ -80,7 +80,6 @@ mkdir -p licenses
 cp tools/dependencies/LICENSE.binary.dependencies licenses/
 cp NOTICE licenses/
 cp LICENSE licenses/
-cp DISCLAIMER licenses/
 
 
 # Build mxnet
diff --git a/tools/windowsbuild/README.md b/tools/windowsbuild/README.md
index 7d8e7cf331..57606b1569 100644
--- a/tools/windowsbuild/README.md
+++ b/tools/windowsbuild/README.md
@@ -16,4 +16,4 @@
 <!--- under the License. -->
 
 Due to the DLL size limitation under Windows, the DLL is split into multiple DLLs according to architecture.
-Reference https://github.com/apache/incubator-mxnet/pull/16980
\ No newline at end of file
+Reference https://github.com/apache/mxnet/pull/16980
\ No newline at end of file