Posted to issues@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/02/02 17:53:55 UTC

[GitHub] [incubator-mxnet] cyrusbehr opened a new issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

cyrusbehr opened a new issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821


   Building mxnet release 1.7.0 `64f737cdd59fe88d2c5b479f25d011c5156b6a8a` from source on Ubuntu 18.04 using the following CMake arguments:
   
   ```
   cmake -DUSE_CPP_PACKAGE=1 -DBUILD_CPP_EXAMPLES=OFF -DUSE_CUDA=0 -DUSE_MKL_IF_AVAILABLE=1 -DUSE_BLAS=mkl -DUSE_OPENCV=0 -DUSE_LAPACK=0   -DMKL_INCLUDE_DIR=/opt/intel/compilers_and_libraries/linux/mkl/include -DMKL_RT_LIBRARY=/opt/intel/compilers_and_libraries/linux/mkl/lib/intel64/libmkl_rt.so -DCMAKE_BUILD_TYPE=Release ..
   
   ```
   
   I am trying to run inference with a quantized model. Here is my code:
   
   ```
   #include <cmath>
   #include <fstream>
   #include <iostream>
   #include <map>
   #include <sstream>
   #include <vector>
   #include <opencv2/opencv.hpp>
   #include "mxnet-cpp/MxNetCpp.h"
   using namespace mxnet::cpp;
   
   float dotProduct(const std::vector<float> &v1, const std::vector<float> &v2) {
       float res = 0.f;
       for (size_t i = 0; i < v1.size(); ++i) {
           res += v1[i] * v2[i];
       }
       return res;
   }
   
   inline NDArray AsData(cv::Mat rgb_image, Context ctx = Context::cpu()) {
       std::vector<float> data_buffer;
   
       // hwc to chw conversion
       for (int c = 0; c < 3; ++c) {
           for (int i = 0; i < rgb_image.rows; ++i) {
               for (int j = 0; j < rgb_image.cols; ++j) {
                   data_buffer.push_back(static_cast<float>(rgb_image.data[(i * rgb_image.cols + j) * 3 + c]));
               }
           }
       }
   
       // construct NDArray from data buffer
       return NDArray(data_buffer, Shape(1, 3, rgb_image.rows, rgb_image.cols), ctx);
   }
   
   
   int main() {
       std::string jsonFilepath = "../tfv4_quantized-symbol.json";
       std::string paramsFilepath = "../tfv4_quantized-0000.params";
   
   //    std::string jsonFilepath = "../tfv4.json";
   //    std::string paramsFilepath = "../tfv4.params";
   
       // Read a face chip:
       auto imgBGR = cv::imread("/home/nchafni/Cyrus/data/10kx10/data/Joseph-Abram-Cook-mugshot-36942062.jpg");
       cv::Mat imgRGB;
       cv::cvtColor(imgBGR, imgRGB, cv::COLOR_BGR2RGB);
   
       mxnet::cpp::Context ctx(Context::cpu());
       mxnet::cpp::Symbol net;
       mxnet::cpp::Executor *exec;
       std::map<std::string, mxnet::cpp::NDArray> args;
       std::map<std::string, mxnet::cpp::NDArray> auxs;
   
       net = Symbol::Load(jsonFilepath);
       std::map<std::string, NDArray> params = NDArray::LoadToMap(paramsFilepath);
   
       for (const auto &iter : params) {
           std::string type = iter.first.substr(0, 4);
           std::string name = iter.first.substr(4);
           if (type == "arg:")
               args[name] = iter.second.Copy(ctx);
           else if (type == "aux:")
               auxs[name] = iter.second.Copy(ctx);
           else
               continue;
       }
       NDArray::WaitAll();
   
       args["data"] = NDArray(Shape(1, 3, 112, 112), ctx, false);
       exec = net.SimpleBind(
               ctx, args, std::map<std::string, NDArray>(),
               std::map<std::string, OpReqType>(), auxs);
   
       auto data = AsData(imgRGB, ctx);
       data.CopyTo(&(exec->arg_dict()["data"]));
   
       exec->Forward(false);
   
       auto embedding = exec->outputs[0].Copy(Context(kCPU, 0));
       embedding.WaitToRead();
   
       std::vector<float> featureVector;
   
       int num = embedding.GetShape()[1];
       featureVector.resize(num);
       for (int i=0; i<num; i++) {
           featureVector[i] = embedding.At(0, i);
       }
   
       float magnitude = sqrt(dotProduct(featureVector, featureVector));
       for (size_t j = 0; j < featureVector.size(); j++) {
           featureVector[j] = featureVector[j] / magnitude;
       }
   
       auto simScore = dotProduct(featureVector, featureVector);
       std::cout << simScore << std::endl;
   
       delete exec;
       return 0;
   }
   
   ```
   
   I can compile the program fine. When I try to run it, I get the following error message: 
   
   ```
   [09:52:28] /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/src/executor/graph_executor.cc:2061: Subgraph backend MKLDNN is activated.
   terminate called after throwing an instance of 'dmlc::Error'
     what():  [09:52:28] /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/packaged/usr/local/include/mxnet-cpp/ndarray.hpp:237: Check failed: MXNDArrayWaitToRead(blob_ptr_->handle_) == 0 (-1 vs. 0) : MXNetError: Check failed: (in_data[0].dtype()) == (mshadow::kUint8) mkldnn_quantized_conv op only supports uint8 as input type
   Stack trace:
     File "/home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/src/operator/quantization/mkldnn/mkldnn_quantized_conv.cc", line 41
     [bt] (0) ./quanitzed_mxnet_mkldnn(dmlc::LogMessageFatal::~LogMessageFatal()+0x4d) [0x558c0c32cf01]
     [bt] (1) /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/libmxnet.so(+0x3268973) [0x7fd3467bd973]
     [bt] (2) /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/libmxnet.so(mxnet::exec::FComputeExExecutor::Run(mxnet::RunContext, bool)+0x23b) [0x7fd34411ea8b]
     [bt] (3) /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/libmxnet.so(+0xbda5fd) [0x7fd34412f5fd]
     [bt] (4) /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/libmxnet.so(+0xbda6bf) [0x7fd34412f6bf]
     [bt] (5) /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext, mxnet::engine::OprBlock*)+0x121) [0x7fd344109cf1]
     [bt] (6) /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/libmxnet.so(std::_Function_handler<void (std::shared_ptr<dmlc::ManualEvent>), mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)::{lambda()#1}::operator()() const::{lambda(std::shared_ptr<dmlc::ManualEvent>)#1}>::_M_invoke(std::_Any_data const&, std::shared_ptr<dmlc::ManualEvent>&&)+0x147) [0x7fd34410a637]
     [bt] (7) /home/nchafni/Cyrus/prototype/quanitzed_mxnet_mkldnn/incubator-mxnet/build/libmxnet.so(std::thread::_State_impl<std::thread::_Invoker<std::tuple<std::function<void (std::shared_ptr<dmlc::ManualEvent>)>, std::shared_ptr<dmlc::ManualEvent> > > >::_M_run()+0x4a) [0x7fd344108cba]
     [bt] (8) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xd0870) [0x7fd33fec7870]
   
   Stack trace:
     [bt] (0) ./quanitzed_mxnet_mkldnn(dmlc::LogMessageFatal::~LogMessageFatal()+0x4d) [0x558c0c32cf01]
     [bt] (1) ./quanitzed_mxnet_mkldnn(+0x10527) [0x558c0c332527]
     [bt] (2) ./quanitzed_mxnet_mkldnn(+0x908e) [0x558c0c32b08e]
     [bt] (3) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7fd33f471bf7]
     [bt] (4) ./quanitzed_mxnet_mkldnn(+0x81da) [0x558c0c32a1da]
   
   ```
   
   How can I resolve this issue? 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@mxnet.apache.org
For additional commands, e-mail: issues-help@mxnet.apache.org


[GitHub] [incubator-mxnet] anko-intel commented on issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

anko-intel commented on issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821#issuecomment-775742146


   @sfraczek is looking at the issue




[GitHub] [incubator-mxnet] szha commented on issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

szha commented on issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821#issuecomment-775385162


   cc @anko-intel 




[GitHub] [incubator-mxnet] cyrusbehr commented on issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

cyrusbehr commented on issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821#issuecomment-776075255


   I will give it a shot and report the results. 




[GitHub] [incubator-mxnet] sfraczek commented on issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

sfraczek commented on issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821#issuecomment-775957502


   This is an older version of the mkldnn quantized convolution operator; at that time int8 input wasn't supported. It would be best if you could convert the model to use the newer subgraph-based operator, which should work. Can you do that?
   Here is some documentation about subgraph quantization: https://mxnet.apache.org/versions/1.7.0/api/python/docs/tutorials/performance/backend/mkldnn/mkldnn_quantization.html




[GitHub] [incubator-mxnet] anko-intel commented on issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

anko-intel commented on issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821#issuecomment-853056953


   @cyrusbehr, @szha can we close the issue?




[GitHub] [incubator-mxnet] szha closed issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

szha closed issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821


   




[GitHub] [incubator-mxnet] szha commented on issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

szha commented on issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821#issuecomment-853328037


   Closing for now. @cyrusbehr feel free to ping me if you need the issue reopened.




[GitHub] [incubator-mxnet] sfraczek commented on issue #19821: mkldnn_quantized_conv op only supports uint8 as input type

sfraczek commented on issue #19821:
URL: https://github.com/apache/incubator-mxnet/issues/19821#issuecomment-829101505


   Hi, have you tried it? How did it go?

