Posted to issues@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/10/03 22:06:56 UTC

[GitHub] [incubator-mxnet] szha commented on issue #13684: CUDA error when increasing number of training epochs

szha commented on issue #13684:
URL: https://github.com/apache/incubator-mxnet/issues/13684#issuecomment-703170471


   @ndeepesh this is caused by the same CUDA fork problem we discussed in #18734. The way to solve it is to fork before initializing the GPU context. In this example, the fork happens in the data loader created by `load_cifar10`, and the GPU initialization happens in `try_all_gpus`. Reordering them so that the data loaders are created first should solve the problem, as in the snippet below.
   
    ```python
    # load_cifar10, try_all_gpus and train are the helper functions defined
    # earlier in the same script (from the d2l/gluonbook chapter this comes from).
    import gluonbook as gb
    from mxnet import gluon, init
    from mxnet.gluon import loss as gloss

    def train_with_data_aug(train_augs, test_augs, lr=0.001):
        batch_size = 256
        # Create the data loaders first: their worker processes fork here.
        train_iter = load_cifar10(True, train_augs, batch_size)
        test_iter = load_cifar10(False, test_augs, batch_size)
        # Only now initialize the GPU context.
        ctx, net = try_all_gpus(), gb.resnet18(10)
        net.initialize(ctx=ctx, init=init.Xavier())
        trainer = gluon.Trainer(net.collect_params(), 'adam',
                                {'learning_rate': lr})
        loss = gloss.SoftmaxCrossEntropyLoss()
        train(train_iter, test_iter, net, loss, trainer, ctx, num_epochs=8)
    ```
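
   For reference, the same ordering rule in plain Gluon looks roughly like the sketch below. This is a minimal illustration, not part of the original snippet; it assumes MXNet 1.x with CUDA available and uses the built-in CIFAR10 dataset instead of the user's `load_cifar10` helper.

    ```python
    import mxnet as mx
    from mxnet import gluon
    from mxnet.gluon.data.vision import CIFAR10, transforms

    batch_size = 256

    # 1. Build the DataLoader first: with num_workers > 0 it forks worker
    #    processes, before any CUDA state exists in the parent process.
    dataset = CIFAR10(train=True).transform_first(transforms.ToTensor())
    train_iter = gluon.data.DataLoader(dataset, batch_size=batch_size,
                                       shuffle=True, num_workers=4)

    # 2. Only afterwards touch the GPU, which initializes the CUDA context
    #    in the parent process.
    ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
    _ = mx.nd.zeros((1,), ctx=ctx)  # forces the context to initialize
    ```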

