Posted to dev@mxnet.apache.org by Chaitanya Bapat <ch...@gmail.com> on 2019/08/03 19:41:17 UTC

Re: Report of MXNet NumPy Project Status

Thanks, Jun, for the summary. Apologies for the delayed response.

Having skimmed through a bunch of PRs revolving around the
"NumPy-compatibility infra" (#15581
<https://github.com/apache/incubator-mxnet/pull/15581>, #14758
<https://github.com/apache/incubator-mxnet/pull/14758>, #14924
<https://github.com/apache/incubator-mxnet/pull/14924>), I have three
questions.
1. It looks like NumPy-compatible APIs would make MXNet more "usable",
"easy-to-use", or "user-friendly", and MXNet's NumPy interface seems to
be one big bet in our roadmap. My question is: would this be an addition
or a replacement? For example, would we discontinue mx.nd.zeros and use
mx.np.zeros in its place (see the first sketch after these questions)?

2. Are we going to deprecate our mx.nd.* ops in 2.0 or in upcoming
releases? The reason I ask: I have a pending PR for an mx.nd.cumsum op,
but now that Hao's #15581 has mx.np.cumsum in the pipeline (see the
cumsum sketch below), should I close my PR if it's not going to be used
in the future?

3. I understand that making our operators NumPy-compatible is an urgent
need and will be greatly appreciated by users and the community. But
going forward, are there going to be two ways of using MXNet operators,
or is mx.np going to be the de facto method? I would assume we should
have only one (to avoid confusing our users) and ensure all our existing
ops are NumPy-compatible.
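
To make question 1 concrete, here is a minimal sketch of how I imagine
the two namespaces would sit side by side (treat the exact mx.np import
path as my assumption based on the PRs above):

    import mxnet as mx

    a = mx.nd.zeros((2, 3))   # today's NDArray API
    b = mx.np.zeros((2, 3))   # NumPy-compatible API from the numpy branch

    # "Addition" would mean both lines keep working; "replacement" would
    # mean the first is eventually deprecated in favor of the second.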
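
And for question 2, the behaviour that mx.np.cumsum would presumably
match, shown here in plain NumPy since #15581 is still in the pipeline:

    import numpy as np

    a = np.array([[1, 2, 3],
                  [4, 5, 6]])
    np.cumsum(a)           # flattened: [ 1  3  6 10 15 21]
    np.cumsum(a, axis=0)   # [[1 2 3], [5 7 9]]
    np.cumsum(a, axis=1)   # [[1 3 6], [4 9 15]]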

Thanks once again!
Chai



On Wed, 22 May 2019 at 09:25, Junru Shao <ju...@gmail.com> wrote:

> 🎉🎉 Nice progress, Jun!
>
> On Wed, May 22, 2019 at 12:12 AM Jun Wu <wu...@gmail.com> wrote:
>
> > Dear Community,
> >
> > A few months ago, we submitted this RFC
> > <https://github.com/apache/incubator-mxnet/issues/14253> proposing to
> > introduce a NumPy-compatible coding experience in MXNet. As it has
> > been some time since the proposal, we would like to share our progress
> > with the community and listen to feedback and suggestions on improving
> > both the technical implementation and the way the project is operated.
> >
> > We set our first milestone by tackling the problem of MXNet not
> > supporting scalar and zero-size tensors. Last month, we submitted the
> > PR <https://github.com/apache/incubator-mxnet/pull/14661> providing
> > the infrastructure to support those two types of tensors in MXNet.
> > This work has touched almost every file and all language bindings in
> > the MXNet codebase. It would have been impossible to deliver a
> > complete solution without contributions from many MXNet developers
> > across different organizations.
> >
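> > To make the terminology concrete, here is a minimal sketch in plain
> > NumPy, which already supports both tensor kinds (the PR above brings
> > the same semantics to MXNet arrays):
> >
> >     import numpy as np
> >
> >     s = np.array(3.14)     # scalar (0-dim) tensor
> >     assert s.shape == () and s.ndim == 0
> >
> >     z = np.zeros((0, 3))   # zero-size tensor
> >     assert z.size == 0
> >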
> > With the infrastructure for scalar and zero-size tensors in place, we
> > are currently working on implementing NumPy operators in MXNet. We
> > created a list of operators
> > <https://github.com/apache/incubator-mxnet/issues/14327> to be
> > implemented from the D2L book <http://www.d2l.ai/>, and we hope to
> > provide full NumPy operator coverage for the book by the end of next
> > month.
> >
> > In the future, we plan to provide NumPy operator support for GluonCV
> > <https://github.com/dmlc/gluon-cv> and GluonNLP
> > <https://github.com/dmlc/gluon-nlp>. We also intend to explore
> > opportunities to extend our work to libraries that depend heavily on
> > NumPy, not only in the deep learning world but also in the broader
> > data science community, where techniques employed by deep learning,
> > such as auto differentiation, symbolic programming, and GPU computing,
> > can be beneficial.
> >
> > Thank you very much for taking the time to read this email and for
> > caring about our efforts to make MXNet a super user-friendly deep
> > learning framework. We look forward to your comments, suggestions, and
> > contributions to this project.
> >
> > Best,
> > Developers of MXNet NumPy Project
> >
> > References
> > [1] Development branch:
> > https://github.com/apache/incubator-mxnet/tree/numpy
> > [2] PR for supporting scalar and zero-size tensors:
> > https://github.com/apache/incubator-mxnet/pull/14661
> > [3] First batch of NumPy operators to be implemented:
> > https://github.com/apache/incubator-mxnet/issues/14327
> > [4] The D2L book: https://github.com/d2l-ai/d2l-en
> > [5] GluonCV: https://github.com/dmlc/gluon-cv
> > [6] GluonNLP: https://github.com/dmlc/gluon-nlp
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*
