Posted to dev@singa.apache.org by GitBox <gi...@apache.org> on 2020/05/20 09:52:37 UTC

[GitHub] [singa] Shashankwer opened a new issue #707: Layer mismatch causes session to terminate abruptly

Shashankwer opened a new issue #707:
URL: https://github.com/apache/singa/issues/707


   Hi,
   
   The issue might already be known: building a neural-network layer stack with mismatched layer dimensions can cause the current Python session to end abruptly, without any Python traceback, while the model loss is being calculated (e.g. autograd.mse_loss(y, t)).
   
   For example, for a simple feed-forward neural network:
   ```
   from singa import autograd
   
   class MLP():
       def __init__(self):
           # two linear layers: 3 -> 4 -> 3
           self.linear1 = autograd.Linear(3, 4)
           self.linear2 = autograd.Linear(4, 3)
   
       def forward(self, x):
           y = self.linear1(x)
           return self.linear2(y)
   ```
   
   If the target tensor does not have the expected last dimension of 3 (matching the network output), the current session terminates without raising any Python error.
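   
   For instance, a minimal sketch of a call that triggers the crash (the tensor shapes here are illustrative assumptions, chosen to match the layers above):
   ```
   from singa import autograd, tensor
   
   autograd.training = True
   model = MLP()
   x = tensor.Tensor((3, 3)).gaussian(1, 1)  # input for linear1
   t = tensor.Tensor((3, 4)).gaussian(1, 1)  # target whose last dimension (4) does not match the output (3)
   
   y = model.forward(x)
   loss = autograd.mse_loss(y, t)  # the process aborts here instead of raising a Python exception
   ```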
   
   The only diagnostic output is the glog check failure below, followed by a native stack trace:
   
   ```
   WARNING: Logging before InitGoogleLogging() is written to STDERR
   F0520 17:37:19.265754 288538048 tensor.cc:431] Check failed: shape_.at(m - i) == 1 (3 vs. 1) i= 0
   *** Check failure stack trace: ***
   ```
   
   This forces the entire program/notebook to be rerun. The same problem is not seen in autograd.backward, which raises an assertion error instead.
   
   Thanks and Regards,
   Shashank 
   
   
   
   
   
     


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [singa] nudles commented on issue #707: Layer mismatch causes session to terminate abruptly

Posted by GitBox <gi...@apache.org>.
nudles commented on issue #707:
URL: https://github.com/apache/singa/issues/707#issuecomment-636574676


   @dcslin can you help check this issue?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [singa] dcslin commented on issue #707: Layer mismatch causes session to terminate abruptly

Posted by GitBox <gi...@apache.org>.
dcslin commented on issue #707:
URL: https://github.com/apache/singa/issues/707#issuecomment-649328973


   Addressed in PR https://github.com/apache/singa/pull/751


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [singa] dcslin commented on issue #707: Layer mismatch causes session to terminate abruptly

Posted by GitBox <gi...@apache.org>.
dcslin commented on issue #707:
URL: https://github.com/apache/singa/issues/707#issuecomment-645873692


   Yes, we should add input shape checks to all the necessary operators in autograd.py.
   For example, we should raise an exception if the input shapes are different:
   ```
   autograd.softmax_cross_entropy(tx, ty)
   autograd.mse_loss(tx, ty)
   autograd.equal(tx,ty)
   ```
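   
   A minimal sketch of what such a check could look like (the helper name and call sites below are assumptions for illustration, not the actual SINGA implementation):
   ```
   # hypothetical helper for autograd.py; not the actual SINGA code
   def assert_same_shape(x, y, op_name):
       if x.shape != y.shape:
           raise AssertionError(
               "%s expects tensors with the same shape, got %s and %s"
               % (op_name, x.shape, y.shape))
   
   # e.g. called at the top of mse_loss / softmax_cross_entropy / equal,
   # before the tensors are handed to the C++ backend:
   #     assert_same_shape(x, t, "mse_loss")
   ```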


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [singa] Shashankwer closed issue #707: Layer mismatch causes session to terminate abruptly

Posted by GitBox <gi...@apache.org>.
Shashankwer closed issue #707:
URL: https://github.com/apache/singa/issues/707


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [singa] Shashankwer edited a comment on issue #707: Layer mismatch causes session to terminate abruptly

Posted by GitBox <gi...@apache.org>.
Shashankwer edited a comment on issue #707:
URL: https://github.com/apache/singa/issues/707#issuecomment-637404896


   Hi, 
   
   The issue reported here is about handling the error on the Python API side; it is particularly noticeable around the autograd.backward function.
   
   Consider the example below:
   ```
   from singa import autograd
   from singa import module
   from singa import opt
   from singa import tensor
   from singa import device
   
   class MLP():
       def __init__(self):
           self.linear1 = autograd.Linear(3, 4)
           self.linear2 = autograd.Linear(4, 3)
       def forward(self,x):
           y = self.linear1(x)
           return self.linear2(y)
       def loss(self, out, ty):
           return autograd.softmax_cross_entropy(out, ty)
       def optim(self, loss):
           self.optimizer.backward_and_update(loss)
       def set_optimizer(self, optimizer):
           self.optimizer = optimizer
   
   def train(model, x, t, dev=device.get_default_device(), epochs=100):
       for i in range(epochs):
           y = model.forward(x)
           loss = autograd.mse_loss(y, t)
           print("loss: ", loss)
           sgd = opt.SGD()
           for p, gp in autograd.backward(loss):
               sgd.update(p, gp)
           sgd.step()
   
   
   if __name__ == '__main__':
       x=tensor.Tensor((3,3)).gaussian(1,1)
       y=tensor.Tensor((3,3)).gaussian(1,1)
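       # target shape (3, 3) matches the model output, so this version runs without error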
       
       autograd.training = True
       m = MLP()
       sgd = opt.SGD()
       m.set_optimizer(sgd)
       out = m.forward(x)
       loss = m.loss(out, y)
       m.optim(loss)
       print(loss)
       train(m,x,y)
   ```
   
   The above code executes without any issue. However, if the dimension of the target tensor is changed so that it no longer matches the model output, the error occurs. For example:
   
   ```
   from singa import autograd
   from singa import module
   from singa import opt
   from singa import tensor
   from singa import device
   
   class MLP():
       def __init__(self):
           self.linear1 = autograd.Linear(3, 4)
           self.linear2 = autograd.Linear(4, 3)
       def forward(self,x):
           y = self.linear1(x)
           return self.linear2(y)
       def loss(self, out, ty):
           return autograd.softmax_cross_entropy(out, ty)
       def optim(self, loss):
           self.optimizer.backward_and_update(loss)
       def set_optimizer(self, optimizer):
           self.optimizer = optimizer
   
   def train(model, x, t, dev=device.get_default_device(), epochs=100):
       for i in range(epochs):
           y = model.forward(x)
           loss = autograd.mse_loss(y, t)
           print("loss: ", loss)
           sgd = opt.SGD()
           for p, gp in autograd.backward(loss):
               sgd.update(p, gp)
           sgd.step()
   
   
   if __name__ == '__main__':
       x=tensor.Tensor((3,3)).gaussian(1,1)
       y=tensor.Tensor((3,4)).gaussian(1,1)
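       # target shape is (3, 4) while the model output is (3, 3); this mismatch triggers the fatal check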
       
       autograd.training = True
       m = MLP()
       sgd = opt.SGD()
       m.set_optimizer(sgd)
       out = m.forward(x)
       loss = m.loss(out, y)
       m.optim(loss)
       print(loss)
       train(m,x,y)
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [singa] dcslin commented on issue #707: Layer mismatch causes session to terminate abruptly

Posted by GitBox <gi...@apache.org>.
dcslin commented on issue #707:
URL: https://github.com/apache/singa/issues/707#issuecomment-637286868


   Hi @Shashankwer, I understand that the error message is not clear enough, but I could not replicate the error without further details (inputs, outputs). Would you like to refer to the following working example, adapted from your code, to help with debugging?
   
   ```
   from singa import autograd
   from singa import module
   from singa import opt
   from singa import tensor
   
   class MLP():
       def __init__(self):
           self.linear1 = autograd.Linear(3,4)
           self.linear2 = autograd.Linear(4,3)
       def forward(self,x):
           y = self.linear1(x)
           return self.linear2(y)
       def loss(self, out, ty):
           return autograd.softmax_cross_entropy(out, ty)
       def optim(self, loss):
           self.optimizer.backward_and_update(loss)
       def set_optimizer(self, optimizer):
           self.optimizer = optimizer
   
   
   if __name__ == '__main__':
       x=tensor.Tensor((3,3)).gaussian(1,1)
       y=tensor.Tensor((3,3)).gaussian(1,1)
   
       autograd.training = True
       m = MLP()
       sgd = opt.SGD()
       m.set_optimizer(sgd)
       out = m.forward(x)
       loss = m.loss(out, y)
       m.optim(loss)
       print(loss)
   ```
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [singa] dcslin commented on issue #707: Layer mismatch causes session to terminate abruptly

Posted by GitBox <gi...@apache.org>.
dcslin commented on issue #707:
URL: https://github.com/apache/singa/issues/707#issuecomment-636581943


   Hi @Shashankwer, I am looking into this.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org