Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/01/07 01:42:24 UTC

[GitHub] [incubator-mxnet] gengyanlei commented on issue #17224: Could Gluon's data distribution be bundled up, like in PyTorch, so that .step() does not need the batch size, and could the loss functions take a reduce option? It is somewhat inconvenient to use, though much nicer than symbol; with symbol it is hard to log the loss

gengyanlei commented on issue #17224: Could Gluon's data distribution be bundled up, like in PyTorch, so that .step() does not need the batch size, and could the loss functions take a reduce option? It is somewhat inconvenient to use, though much nicer than symbol; with symbol it is hard to log the loss
URL: https://github.com/apache/incubator-mxnet/issues/17224#issuecomment-571394479
 
 
   When reading data with MXNet Gluon, the batch has to be split across the GPUs; the loss is then computed from each GPU's predictions, and each loss is backpropagated separately (**this requires a for loop, which is tedious**). Moreover, with .step(batch_size), the batch_size may differ between iterations (e.g. the last, partial batch), so it has to be fetched dynamically at every step.
   The data splitting corresponds to PyTorch's nn.DataParallel.
   Could the predictions be merged directly into a single result, no matter how many GPUs there are? I did not notice a parameter for this when reading the MXNet API.
   
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services