Posted to dev@mxnet.apache.org by Jun Wu <wu...@gmail.com> on 2019/04/30 04:29:13 UTC

[Announcement] New Committer - Zhennan Qin

Please join me in welcoming Zhennan Qin (https://github.com/ZhennanQin) from
Intel as a new committer.

Zhennan is the main author of the work accelerating MXNet/MKLDNN inference
through operator fusion and model quantization. His work has given MXNet an
edge for inference workloads on Intel CPUs compared with other DL frameworks.
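
For context, this fused-and-quantized flow is driven through MXNet's subgraph
backend and the contrib quantization API. Below is a minimal sketch of how a
user might exercise it; the checkpoint name, input shape, and calibration data
are placeholders, and the exact API surface varies across MXNet 1.x releases.

    import mxnet as mx
    from mxnet.contrib.quantization import quantize_model

    # Load a pretrained FP32 model (checkpoint name and epoch are illustrative).
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet50_v1', 0)

    # Fuse eligible operators (e.g. conv + bn + relu) into MKLDNN subgraph ops.
    sym = sym.get_backend_symbol('MKLDNN')

    # Stand-in calibration batch; real use would feed representative inputs.
    calib_data = mx.io.NDArrayIter(
        data=mx.nd.random.uniform(shape=(32, 3, 224, 224)),
        label=mx.nd.zeros(32), batch_size=32)

    # Quantize the fused graph to int8. 'naive' calibration collects min/max
    # statistics from the calibration batches to set quantization thresholds.
    qsym, qarg_params, qaux_params = quantize_model(
        sym, arg_params, aux_params,
        ctx=mx.cpu(),
        calib_mode='naive',
        calib_data=calib_data,
        num_calib_examples=32,
        quantized_dtype='int8')

The quantized symbol and parameters can then be saved and bound as usual for
int8 inference on CPU.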

Re: [Announcement] New Committer - Zhennan Qin

Posted by Lin Yuan <ap...@gmail.com>.
Congrats, Zhennan! Well deserved.

Lin


RE: [Announcement] New Committer - Zhennan Qin

Posted by "Zhao, Patric" <pa...@intel.com>.
Congrats, Zhennan.

Really great work! It makes the MXNet quantization flow stand out worldwide!


Re: [Announcement] New Committer - Zhennan Qin

Posted by MiraiWK WKCN <wk...@live.cn>.
Congrats Zhennan! Welcome!


RE: [Announcement] New Committer - Zhennan Qin

Posted by "Lv, Tao A" <ta...@intel.com>.
Congratulations Zhennan!
