Posted to dev@singa.apache.org by GitBox <gi...@apache.org> on 2020/06/06 08:54:02 UTC
[GitHub] [singa] joddiy opened a new pull request #724: add embedding layer
joddiy opened a new pull request #724:
URL: https://github.com/apache/singa/pull/724
add embedding layer
----------------------------------------------------------------
[GitHub] [singa] nudles merged pull request #724: add embedding layer
nudles merged pull request #724:
URL: https://github.com/apache/singa/pull/724
----------------------------------------------------------------
[GitHub] [singa] dcslin commented on pull request #724: add embedding layer
dcslin commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-664091056
> > Hi @joddiy, I resolved the previous issue. Could you please take a look at this:
> > In your test, updating `X = np.random.randint(5, size=(2, 4))` to `X = np.array([[0,1,2,3],[9,8,7,6]])` gives this error:
> > ```
> > F0726 14:38:29.124913 10729 tensor.cc:1256] Check failed: in.shape(0) >= end (8 vs. 10) Tensor size must >= end
> > ```
>
> Hi, Shicong, there was a small bug in the backward pass of the embedding layer; it has been fixed now, please check again.
Thank you @joddiy, I have tested this and it works now on my side.
----------------------------------------------------------------
[GitHub] [singa] lgtm-com[bot] commented on pull request #724: add embedding layer
lgtm-com[bot] commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-640016273
This pull request **introduces 1 alert** when merging 0ec9b84e73fc7eda70ac2cc0fad172c0e62717cf into 038e2df29922112cf0e6125460cd6b254e7598ec - [view on LGTM.com](https://lgtm.com/projects/g/apache/singa/rev/pr-1db4a21a55e780958787b33434da801147c0ff95)
**new alerts:**
* 1 for Unused local variable
----------------------------------------------------------------
[GitHub] [singa] joddiy commented on pull request #724: add embedding layer
joddiy commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-647316572
> Hi @joddiy, thank you for the code. I am testing this with an LSTM model, but it seems that `model.compile` does not pass.
>
> ```python
> #!/usr/bin/env python
> # coding: utf-8
>
> # In[2]:
>
> import sys
> build_path = r'/root/singa-imdb/build/python'
> sys.path.append(build_path)
> from singa import autograd
> from singa import layer
> from singa import model
> from singa import tensor
> from singa import device
> from singa import opt
> import numpy as np
>
> bs = 32
> seq_limit = 50
> embed_size = 300
> hid = 64
> max_epoch = 20
> vocab_size = 100
>
> # In[3]:
>
> class IMDB(model.Model):
>
>     def __init__(self, hidden_size, seq_length):
>         super().__init__()
>         batch_first = True
>         self.em = layer.Embedding(vocab_size, embed_size)
>         self.l1 = layer.Linear(64)
>         self.l2 = layer.Linear(2)
>
>     def forward(self, x):
>         y = self.em(x)
>         y = autograd.reshape(y, (y.shape[0], -1))
>         y = self.l1(y)
>         y = autograd.relu(y)
>         y = self.l2(y)
>         return y
>
>     def train_one_batch(self, x, y):
>         out = self.forward(x)
>         loss = autograd.softmax_cross_entropy(out, y)
>         self.optimizer(loss)
>         return out, loss
>
>     def set_opt(self, optimizer):
>         self.optimizer = optimizer
>
> # In[ ]:
>
> dev = device.create_cuda_gpu_on(7)
> x = np.random.randint(0, vocab_size, (bs, seq_limit))
> tx = tensor.from_numpy(x)
> tx.to_device(dev)
>
> ty = tensor.Tensor((bs, 2), dev, tensor.float32)
> ty.gaussian(0, 1)
>
> m = IMDB(hid, seq_limit)
> m.set_opt(opt.SGD())
>
> m.compile([tx], is_train=True, use_graph=False, sequential=False)
>
> # In[1]:
>
> """
> WARNING: Logging before InitGoogleLogging() is written to STDERR
> F0622 04:00:20.654505 388 common.cc:34] Check failed: initialized_ Must initialize data before reading it
> *** Check failure stack trace: ***
> """
>
> # In[3]:
>
> # out, loss = m(tx, ty)
> ```
Hi @shicong, is there any way to check whether a tensor has been initialized? As you can see, in the forward pass we convert the tensor to a numpy array; if the tensor has not been initialized, that is bound to fail.
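For illustration, a minimal sketch (my own, not from the PR) of the initialization behaviour in question, assuming the standard singa tensor API already used in the script above: a `Tensor` created via the constructor holds uninitialized memory until an initializer such as `gaussian` runs, so reading it first would plausibly trip a check like the one in the log.
```python
from singa import tensor

# Hypothetical sketch: a freshly constructed Tensor (host device by default)
# has no initialized data yet.
t = tensor.Tensor((2, 3))
# tensor.to_numpy(t)      # assumption: reading now would hit the
#                         # "Must initialize data before reading it" check
t.gaussian(0, 1)          # initialize the values first ...
arr = tensor.to_numpy(t)  # ... then converting to numpy is safe
```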
----------------------------------------------------------------
[GitHub] [singa] joddiy commented on pull request #724: add embedding layer
joddiy commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-664018524
> Hi @joddiy, I resolved the previous issue. Could you please take a look at this:
> In your test, updating `X = np.random.randint(5, size=(2, 4))` to `X = np.array([[0,1,2,3],[9,8,7,6]])` gives this error:
>
> ```
> F0726 14:38:29.124913 10729 tensor.cc:1256] Check failed: in.shape(0) >= end (8 vs. 10) Tensor size must >= end
> ```
Hi, Shicong, there was a small bug in the backward pass of the embedding layer; it has been fixed now, please check again.
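For readers following along, a minimal numpy sketch (not SINGA's actual implementation) of an embedding backward that sizes the gradient by the table's `vocab_size` rather than by the indices it happens to see, which is the kind of mismatch the failed check suggests. `vocab_size` and `embed_dim` here are illustrative values, not taken from the PR.
```python
import numpy as np

vocab_size, embed_dim = 10, 4
W = np.random.randn(vocab_size, embed_dim).astype(np.float32)

X = np.array([[0, 1, 2, 3], [9, 8, 7, 6]])  # the failing test input
Y = W[X]              # forward: row lookup, shape (2, 4, embed_dim)

dY = np.ones_like(Y)  # stand-in upstream gradient
dW = np.zeros_like(W) # sized by vocab_size, not by X.size or X.max()
np.add.at(dW, X, dY)  # scatter-add handles repeated indices correctly
```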
----------------------------------------------------------------
[GitHub] [singa] dcslin commented on pull request #724: add embedding layer
dcslin commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-647273232
Hi @joddiy, thank you for the code. I am testing this with an LSTM model, but it seems that `model.compile` does not pass.
```python
#!/usr/bin/env python
# coding: utf-8

# In[2]:

import sys
build_path = r'/root/singa-imdb/build/python'
sys.path.append(build_path)
from singa import autograd
from singa import layer
from singa import model
from singa import tensor
from singa import device
from singa import opt
import numpy as np

bs = 32
seq_limit = 50
embed_size = 300
hid = 64
max_epoch = 20
vocab_size = 100

# In[3]:

class IMDB(model.Model):

    def __init__(self, hidden_size, seq_length):
        super().__init__()
        batch_first = True
        self.em = layer.Embedding(vocab_size, embed_size)
        self.l1 = layer.Linear(64)
        self.l2 = layer.Linear(2)

    def forward(self, x):
        y = self.em(x)
        y = autograd.reshape(y, (y.shape[0], -1))
        y = self.l1(y)
        y = autograd.relu(y)
        y = self.l2(y)
        return y

    def train_one_batch(self, x, y):
        out = self.forward(x)
        loss = autograd.softmax_cross_entropy(out, y)
        self.optimizer(loss)
        return out, loss

    def set_opt(self, optimizer):
        self.optimizer = optimizer

# In[ ]:

dev = device.create_cuda_gpu_on(7)
x = np.random.randint(0, vocab_size, (bs, seq_limit))
tx = tensor.from_numpy(x)
tx.to_device(dev)

ty = tensor.Tensor((bs, 2), dev, tensor.float32)
ty.gaussian(0, 1)

m = IMDB(hid, seq_limit)
m.set_opt(opt.SGD())

m.compile([tx], is_train=True, use_graph=False, sequential=False)

# In[1]:

"""
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0622 04:00:20.654505 388 common.cc:34] Check failed: initialized_ Must initialize data before reading it
*** Check failure stack trace: ***
"""

# In[3]:

# out, loss = m(tx, ty)
```
----------------------------------------------------------------
[GitHub] [singa] dcslin commented on pull request #724: add embedding layer
dcslin commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-663996392
Hi @joddiy, I resolved the previous issue. Could you please take a look at this:
In your test, updating `X = np.random.randint(5, size=(2, 4))` to `X = np.array([[0,1,2,3],[9,8,7,6]])` gives this error:
```
F0726 14:38:29.124913 10729 tensor.cc:1256] Check failed: in.shape(0) >= end (8 vs. 10) Tensor size must >= end
```
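A plausible reading of the numbers in that check (my interpretation, not confirmed in the thread): the new input keeps the index *count* at 8 but raises the index *range* to 10, and those two figures match the `8 vs. 10` in the failure.
```python
import numpy as np

X = np.array([[0, 1, 2, 3], [9, 8, 7, 6]])
print(X.size)       # 8  -> the "in.shape(0)" side of the failed check
print(X.max() + 1)  # 10 -> the "end" side of the failed check
```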
----------------------------------------------------------------
[GitHub] [singa] joddiy commented on pull request #724: add embedding layer
joddiy commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-663996662
> Hi @joddiy, I resolved the previous issue. Could you please take a look at this:
> In your test, updating `X = np.random.randint(5, size=(2, 4))` to `X = np.array([[0,1,2,3],[9,8,7,6]])` gives this error:
>
> ```
> F0726 14:38:29.124913 10729 tensor.cc:1256] Check failed: in.shape(0) >= end (8 vs. 10) Tensor size must >= end
> ```
Thanks, Shicong, please let me check.
----------------------------------------------------------------
[GitHub] [singa] lgtm-com[bot] commented on pull request #724: add embedding layer
lgtm-com[bot] commented on pull request #724:
URL: https://github.com/apache/singa/pull/724#issuecomment-664019267
This pull request **introduces 1 alert** when merging ca172f4127056d8a28c168ec278d644de94adc4b into 5f4b250c1ef4d0380769310e3884bf0c1115d2a1 - [view on LGTM.com](https://lgtm.com/projects/g/apache/singa/rev/pr-ec054b38e78ea9214186e4164475e82ce8d97c21)
**new alerts:**
* 1 for Unused local variable
----------------------------------------------------------------