Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/04/11 16:28:34 UTC

[GitHub] David-Levinthal opened a new issue #10505: profiler should collect call durations, not just timestamps

URL: https://github.com/apache/incubator-mxnet/issues/10505
 
 
   Environment info (Required)
   
   What to do:
   1. Download the diagnosis script from https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   cat diagnose.log 
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                28
   On-line CPU(s) list:   0-27
   Thread(s) per core:    1
   Core(s) per socket:    14
   Socket(s):             2
   NUMA node(s):          2
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 79
   Model name:            Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
   Stepping:              1
   CPU MHz:               3201.859
   CPU max MHz:           3500.0000
   CPU min MHz:           1200.0000
   BogoMIPS:              5190.34
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              35840K
   NUMA node0 CPU(s):     0-13
   NUMA node1 CPU(s):     14-27
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt retpoline kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
   ----------Python Info----------
   Version      : 3.5.2
   Compiler     : GCC 5.4.0 20160609
   Build        : ('default', 'Nov 23 2017 16:37:01')
   Arch         : ('64bit', 'ELF')
   ------------Pip Info-----------
   Version      : 8.1.1
   Directory    : /usr/lib/python3/dist-packages/pip
   ----------MXNet Info-----------
   Version      : 1.2.0
   Directory    : /home/levinth/mxnet/python/mxnet
   Hashtag not found. Not installed from pre-built package.
   ----------System Info----------
   Platform     : Linux-4.4.0-116-generic-x86_64-with-Ubuntu-16.04-xenial
   system       : Linux
   node         : zt-gpu-lin-1
   release      : 4.4.0-116-generic
   version      : #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   ----------Network Test----------
   Setting timeout: 10
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0578 sec, LOAD: 0.1538 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0450 sec, LOAD: 0.0828 sec.
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0068 sec, LOAD: 0.5143 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1794 sec, LOAD: 0.2363 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0084 sec, LOAD: 0.1054 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0076 sec, LOAD: 0.4941 sec.
   
   Package used (Python/R/Scala/Julia):
   Python3 mxnet built from source with:
   make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_NCCL=1 USE_NCCL_PATH=/usr/local/cuda/nccl USE_PROFILER=1 > mxbuild.log 2>&1
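    
    As a quick sanity check that the USE_PROFILER build took effect, a standalone script along these lines (filename and shapes are arbitrary, and it assumes mx.profiler.dump() is available in this 1.2 profiler module) should write a small JSON trace:
    
    import mxnet as mx
    
    # Point the profiler at a throwaway file and turn it on.
    mx.profiler.set_config(profile_all=True, filename='profiler_smoke_test.json')
    mx.profiler.set_state('run')
    
    # Run one operator so there is something to record.
    a = mx.nd.ones((1024, 1024))
    b = mx.nd.dot(a, a)
    mx.nd.waitall()              # wait for the async engine to finish the work
    
    mx.profiler.set_state('stop')
    mx.profiler.dump()           # flush the recorded events to profiler_smoke_test.json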
   
    Sockeye runs fine with no profiling added.
    Sockeye runs with profiling added per the code snippet below, but the resulting JSON file has problems.
    Sockeye invocation:
   python3 -m sockeye.train -s /home/levinth/WMT/train.tok.clean.bpe.32000.de -t /home/levinth/WMT/train.tok.clean.bpe.32000.en -vs /home/levinth/WMT/newstest2016.tok.bpe.32000.de -vt /home/levinth/WMT/newstest2016.tok.bpe.32000.en --num-embed 1024 --rnn-num-hidden 1024 --num-layers 4 --rnn-attention-type bilinear --max-seq-len 50 --device-ids 0 --batch-size 128 -o wmt_model_gpu0 > wmt_4layer_32k_bilinear_len50_gpu0_proftest2.log 2>&1
   
    I added a half dozen lines to set up the profiler, start it, and stop it after a counter reaches 50.
    (Pound signs were removed from the snippet because they make a mess of the formatting.)
   next_data_batch = train_iter.next()
    profcount = 0        # added: batch counter used to bound the profiling window
    mx.profiler.set_config(profile_all=True, filename='/home/levinth/sockeye_profile/profile_'+str(profcount)+'_sockeye.json')
    mx.profiler.set_state('run')        # added: start profiling before the training loop
   while True:
   
           if not train_iter.iter_next():
               self.state.epoch += 1
               train_iter.reset()
               if max_num_epochs is not None and self.state.epoch == max_num_epochs:
                   logger.info("Maximum  of epochs (%s) reached.", max_num_epochs)
                   break
   
           if max_updates is not None and self.state.updates == max_updates:
               logger.info("Maximum  of updates (%s) reached.", max_updates)
               break
   
   
           batch = next_data_batch
           profcount += 1
           print(' from training profcount = %d' % profcount)
           self._step(self.model, batch, checkpoint_frequency, metric_train, metric_loss)
           if train_iter.iter_next():
               next_data_batch = train_iter.next()
               self.model.prepare_batch(next_data_batch)
           batch_num_samples = batch.data[0].shape[0]
           batch_num_tokens = batch.data[0].shape[1] * batch_num_samples
           self.state.updates += 1
           self.state.samples += batch_num_samples
           speedometer(self.state.epoch, self.state.updates, batch_num_samples, batch_num_tokens, metric_train)
            if profcount == 50:        # added: stop profiling after 50 batches
                mx.profiler.set_state('stop')
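    
    A minimal variant of the stop logic, in case the collected events have to be flushed explicitly (this assumes mx.profiler.dump(), which the 1.2 profiler module appears to expose alongside set_config/set_state):
    
            if profcount == 50:
                mx.profiler.set_state('stop')
                mx.profiler.dump()    # explicitly write the collected events to the JSON file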
   
    It appears that stopping the profiler does not collect call durations; the resulting JSON contains only timestamps.
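    
    A quick way to see what actually ended up in the trace is to count the event phases: 'X' events carry an explicit 'dur' field, while 'B'/'E' pairs carry only timestamps, so durations have to be reconstructed by matching begin/end events. (This sketch assumes the dumped file is valid Chrome-trace JSON with a traceEvents list; the path is the one the snippet above produces, since profcount is 0 when set_config runs.)
    
    import json
    from collections import Counter, defaultdict
    
    # Path comes from the set_config call above (profcount == 0 at that point).
    with open('/home/levinth/sockeye_profile/profile_0_sockeye.json') as f:
        data = json.load(f)
    events = data['traceEvents'] if isinstance(data, dict) else data
    
    # How many events of each Chrome-trace phase did the profiler emit?
    print(Counter(ev.get('ph') for ev in events))
    
    # Reconstruct durations by pairing 'B' (begin) and 'E' (end) events
    # that share the same name/pid/tid.
    open_ts = defaultdict(list)
    durations = []
    for ev in events:
        key = (ev.get('name'), ev.get('pid'), ev.get('tid'))
        if ev.get('ph') == 'B':
            open_ts[key].append(ev['ts'])
        elif ev.get('ph') == 'E' and open_ts[key]:
            durations.append((ev.get('name'), ev['ts'] - open_ts[key].pop()))
    
    print('reconstructed', len(durations), 'durations; first few:', durations[:5])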
   
