Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/14 19:27:10 UTC

[GitHub] piiswrong closed pull request #9065: Fix examples of profiler and cpp-package

URL: https://github.com/apache/incubator-mxnet/pull/9065

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/tutorials/c++/basics.md b/docs/tutorials/c++/basics.md
index cdf1a28ecd..d3231e7a1f 100644
--- a/docs/tutorials/c++/basics.md
+++ b/docs/tutorials/c++/basics.md
@@ -16,8 +16,8 @@ Except linking the MXNet shared library, the C++ package itself is a header-only
 which means all you need to do is to include the header files. Among the header files,
 `op.h` is special since it is generated dynamically. The generation should be done when
 [building the C++ package](http://mxnet.io/get_started/build_from_source.html#build-the-c++-package).
-After that, you also need to copy the shared library (`libmxnet.so` in linux,
-`libmxnet.dll` in windows) from `/path/to/mxnet/lib` to the working directory.
+It is important to note that you need to **copy the shared library** (`libmxnet.so` on Linux and macOS,
+`libmxnet.dll` on Windows) from `/path/to/mxnet/lib` to the working directory.
 We do not recommend using pre-built binaries, because MXNet is under heavy development and
 the operator definitions in `op.h` may be incompatible with the pre-built version.
 
@@ -49,7 +49,7 @@ auto val_iter = MXDataIter("MNISTIter")
     .CreateDataIter();
 ```
 
-The data have been successfully loaded, we can now easily construct various models to identify
+The data have been successfully loaded. We can now easily construct various models to identify
 the digits with the help of the C++ package.
 
 
@@ -159,7 +159,12 @@ while (val_iter.Next()) {
 ```
 
 You can find the complete code in `mlp_cpu.cpp`. Use `make mlp_cpu` to compile it,
- and `./mlp_cpu` to run it.
+ and `./mlp_cpu` to run it. If running `./mlp_cpu` fails because the shared library `libmxnet.so`
+ cannot be found, you need to add the library's location to the environment variable
+ `LD_LIBRARY_PATH` on Linux or `DYLD_LIBRARY_PATH` on macOS. For example, on macOS, running
+ `DYLD_LIBRARY_PATH+=. ./mlp_cpu` solves the problem: it tells the dynamic loader to also look
+ for the shared library in the current directory, since we have just copied it there.
 
 GPU Support
 -----------
@@ -186,4 +191,6 @@ data_batch.label.CopyTo(&args["label"]);
 NDArray::WaitAll();
 ```
 
-By replacing the former code to the latter one, we successfully port the code to GPU. You can find the complete code in `mlp_gpu.cpp`. Compilation is similar to the cpu version. (Note: The shared library should be built with GPU support on)
+By replacing the former code with the latter, we successfully port the code to GPU.
+You can find the complete code in `mlp_gpu.cpp`. Compilation is similar to the CPU version.
+Note that the shared library must be built with GPU support enabled.
diff --git a/example/profiler/README.md b/example/profiler/README.md
new file mode 100644
index 0000000000..7d3c42b629
--- /dev/null
+++ b/example/profiler/README.md
@@ -0,0 +1,23 @@
+# MXNet Profiler Examples
+
+This folder contains examples of using the MXNet profiler to generate profiling results as JSON files.
+Please refer to [this link](http://mxnet.incubator.apache.org/faq/perf.html?highlight=profiler#profiler)
+for instructions on visualizing the profiling results, and make sure that you have installed a version
+of MXNet compiled with `USE_PROFILER=1`.
+
+- profiler_executor.py. To run this example, simply type `python profiler_executor.py` in a terminal.
+It will generate a JSON file named `profile_executor_5iter.json`.
+
+- profiler_imageiter.py. Before running this example, you first need to create an image dataset
+file named `test.rec`.
+Please follow
+[this tutorial](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=rec%20file#create-a-dataset-using-recordio)
+on how to create `.rec` files using an existing tool in MXNet. After you have created `test.rec`,
+type `python profiler_imageiter.py` in a terminal. It will generate `profile_imageiter.json`.
+
+- profiler_matmul.py. This example profiles matrix multiplications on a GPU. Please make sure
+that you have installed a GPU-enabled version of MXNet before running this example. Type
+`python profiler_matmul.py` and it will generate `profile_matmul_20iter.json`.
+
+- profiler_ndarray.py. This example profiles a series of `NDArray` operations. Simply type
+`python profiler_ndarray.py` in a terminal and it will generate `profile_ndarray.json`.
\ No newline at end of file
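
For context, the four scripts listed in this README all drive the profiler through the same small API that appears later in this diff (`mx.profiler.profiler_set_config` followed by `mx.profiler.profiler_set_state`). The following is only an illustrative sketch of that pattern, not code from the PR; the workload and the output filename are placeholders, and it assumes an MXNet build with `USE_PROFILER=1`:

```python
import mxnet as mx

# Configure the profiler before starting the workload: 'mode' selects what is
# recorded and 'filename' is where the JSON trace will be written.
mx.profiler.profiler_set_config(mode='all', filename='profile_example.json')
mx.profiler.profiler_set_state('run')

# Placeholder workload: a few NDArray operations so there is something to record.
a = mx.nd.ones((1024, 1024))
b = mx.nd.ones((1024, 1024))
c = mx.nd.dot(a, b)
mx.nd.waitall()  # wait for all asynchronous operations to finish

# Stop profiling; the trace can then be inspected as described in the FAQ linked above.
mx.profiler.profiler_set_state('stop')
```
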
diff --git a/example/profiler/profiler_executor.py b/example/profiler/profiler_executor.py
index 26e3e1ba2a..117a8df492 100644
--- a/example/profiler/profiler_executor.py
+++ b/example/profiler/profiler_executor.py
@@ -17,7 +17,7 @@
 
 import mxnet as mx
 import argparse
-import os, sys
+import os
 import time
 import numpy as np
 from mxnet import profiler
diff --git a/example/profiler/profiler_imageiter.py b/example/profiler/profiler_imageiter.py
index e16b9b7de4..77ca412358 100644
--- a/example/profiler/profiler_imageiter.py
+++ b/example/profiler/profiler_imageiter.py
@@ -15,16 +15,15 @@
 # specific language governing permissions and limitations
 # under the License.
 
+from __future__ import print_function
 import os
 # uncomment to set the number of worker threads.
 # os.environ["MXNET_CPU_WORKER_NTHREADS"] = "4"
-from __future__ import print_function
 import time
 import mxnet as mx
-import numpy as np
 
 
-def run_imageiter(path_rec, n, batch_size = 32):
+def run_imageiter(path_rec, n, batch_size=32):
 
     data = mx.img.ImageIter(batch_size=batch_size,
                             data_shape=(3, 224, 224),
@@ -39,6 +38,7 @@ def run_imageiter(path_rec, n, batch_size = 32):
     mx.nd.waitall()
     print(batch_size*n/(time.time() - tic))
 
+
 if __name__ == '__main__':
     mx.profiler.profiler_set_config(mode='all', filename='profile_imageiter.json')
     mx.profiler.profiler_set_state('run')
diff --git a/example/profiler/profiler_matmul.py b/example/profiler/profiler_matmul.py
index 1b1cf74f41..a23545cb06 100644
--- a/example/profiler/profiler_matmul.py
+++ b/example/profiler/profiler_matmul.py
@@ -18,9 +18,8 @@
 from __future__ import print_function
 import mxnet as mx
 import argparse
-import os, sys
 import time
-import numpy as np
+
 
 def parse_args():
     parser = argparse.ArgumentParser(description='Set network parameters for benchmark test.')
@@ -30,18 +29,18 @@ def parse_args():
     parser.add_argument('--end_profiling_iter', type=int, default=70)
     return parser.parse_args()
 
+
 args = parse_args()
 
 if __name__ == '__main__':
     mx.profiler.profiler_set_config(mode='symbolic', filename=args.profile_filename)
     print('profile file save to {0}'.format(args.profile_filename))
 
-
     A = mx.sym.Variable('A')
     B = mx.sym.Variable('B')
     C = mx.symbol.dot(A, B)
 
-    executor = C.simple_bind(mx.gpu(1), 'write', A=(4096, 4096), B=(4096, 4096))
+    executor = C.simple_bind(mx.gpu(0), 'write', A=(4096, 4096), B=(4096, 4096))
 
     a = mx.random.uniform(-1.0, 1.0, shape=(4096, 4096))
     b = mx.random.uniform(-1.0, 1.0, shape=(4096, 4096))
diff --git a/example/profiler/profiler_ndarray.py b/example/profiler/profiler_ndarray.py
index 67ea87b1ed..5c233c64ed 100644
--- a/example/profiler/profiler_ndarray.py
+++ b/example/profiler/profiler_ndarray.py
@@ -82,6 +82,7 @@ def random_ndarray(dim):
     data = mx.nd.array(np.random.uniform(-10, 10, shape))
     return data
 
+
 def test_ndarray_elementwise():
     np.random.seed(0)
     nrepeat = 10
@@ -99,6 +100,7 @@ def test_ndarray_elementwise():
             check_with_uniform(mx.nd.square, 1, dim, np.square, rmin=0)
             check_with_uniform(lambda x: mx.nd.norm(x).asscalar(), 1, dim, np.linalg.norm)
 
+
 def test_ndarray_negate():
     npy = np.random.uniform(-10, 10, (2,3,4))
     arr = mx.nd.array(npy)
@@ -170,6 +172,7 @@ def test_ndarray_scalar():
     d = -c + 2
     assert(np.sum(d.asnumpy()) < 1e-5)
 
+
 def test_ndarray_pickle():
     np.random.seed(0)
     maxdim = 5
@@ -222,8 +225,7 @@ def test_ndarray_slice():
 
 def test_ndarray_slice_along_axis():
     arr = mx.nd.array(np.random.uniform(-10, 10, (3, 4, 2, 3)))
-    sub_arr = mx.nd.zeros((3, 2, 2, 3))
-    arr._copy_slice_to(1, 1, 3, sub_arr)
+    sub_arr = arr.slice(begin=(None, 1), end=(None, 3))
 
     # test we sliced correctly
     assert same(arr.asnumpy()[:, 1:3, :, :], sub_arr.asnumpy())
@@ -242,6 +244,7 @@ def test_clip():
         assert B1[i] >= -2
         assert B1[i] <= 2
 
+
 def test_dot():
     a = np.random.uniform(-3, 3, (3, 4))
     b = np.random.uniform(-3, 3, (4, 5))
@@ -251,8 +254,10 @@ def test_dot():
     C = mx.nd.dot(A, B)
     assert reldiff(c, C.asnumpy()) < 1e-5
 
+
 def test_reduce():
     sample_num = 200
+
     def test_reduce_inner(numpy_reduce_func, nd_reduce_func):
         for i in range(sample_num):
             ndim = np.random.randint(1, 6)
@@ -285,8 +290,10 @@ def test_reduce_inner(numpy_reduce_func, nd_reduce_func):
     test_reduce_inner(lambda data, axis, keepdims:_np_reduce(data, axis, keepdims, np.min),
                       mx.nd.min)
 
+
 def test_broadcast():
     sample_num = 1000
+
     def test_broadcast_to():
         for i in range(sample_num):
             ndim = np.random.randint(1, 6)
@@ -307,6 +314,7 @@ def test_broadcast_to():
             assert err < 1E-8
     test_broadcast_to()
 
+
 if __name__ == '__main__':
     mx.profiler.profiler_set_config(mode='all', filename='profile_ndarray.json')
     mx.profiler.profiler_set_state('run')
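
As a quick sanity check on one of the generated traces (for example `profile_ndarray.json` from the script above), the file can be loaded back as ordinary JSON. The snippet below is only an illustrative check, not part of the PR, and assumes the trace uses the Chrome tracing format (a top-level `traceEvents` list) that the visualization instructions in the linked FAQ rely on:

```python
import json

# Load the trace written by profiler_ndarray.py and print a brief summary.
with open('profile_ndarray.json') as f:
    trace = json.load(f)

events = trace.get('traceEvents', [])
print('number of trace events:', len(events))
# Each event is a dict in the Chrome tracing format, typically with 'name', 'ph', 'ts' fields.
print('first few event names:', [e.get('name') for e in events[:5]])
```
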


 
