Posted to user@cassandra.apache.org by noah chanmala <nc...@gmail.com> on 2015/08/05 22:05:46 UTC

Re: Saving file content to ByteBuffer and to column does not retrieve the same size of data

Carlos,

I am having the same issue as you. Here is the code I used:

for (Row row : rs) {
    ByteBuffer bytebuff = row.getBytes("data");
    byte[] data = Bytes.getArray(bytebuff);
    out.write(data);
}

Did you ever find the answer to this?
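
In case it helps, here is the fuller, self-contained version of what I am
running (a minimal sketch, assuming DataStax Java driver 2.x, where Cluster
and Session are Closeable; the contact point, keyspace, table, and output
file names are placeholders):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.utils.Bytes;

public class BlobDump {
    public static void main(String[] args) throws IOException {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace");
             FileOutputStream out = new FileOutputStream("dump.bin")) {
            ResultSet rs = session.execute("SELECT data FROM mytable");
            for (Row row : rs) {
                ByteBuffer bytebuff = row.getBytes("data");
                // Bytes.getArray copies exactly the remaining bytes,
                // honoring the buffer's position and arrayOffset.
                byte[] data = Bytes.getArray(bytebuff);
                out.write(data);
            }
        }
    }
}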

Thanks,

Noah



On Monday, September 22, 2014 at 6:50:51 AM UTC-4, Carlos Scheidecker wrote:
>
> Hello all,
>
> I can successfully read a file into a ByteBuffer and then write it to a 
> Cassandra blob column. However, when I retrieve the value of the column, 
> the retrieved ByteBuffer is bigger than the original one the file was read 
> into. Writing it to disk corrupts the image.
>
> So I read a JPG:
>
> private ByteBuffer readFile(String filename) {
>     FileInputStream fIn;
>     FileChannel fChan;
>     long fSize;
>     ByteBuffer mBuf;
>     try {
>         fIn = new FileInputStream(filename);
>         fChan = fIn.getChannel();
>         fSize = fChan.size();
>         mBuf = ByteBuffer.allocate((int) fSize);
>         fChan.read(mBuf);
>         //mBuf.rewind();
>         fChan.close();
>         fIn.close();
>         //mBuf.clear();
>         return mBuf;
>     } catch (IOException exc) {
>         System.out.println("I/O Error: " + exc.getMessage());
>         System.exit(1);
>     }
>     return null;
> }
>
> So I run readFile and save the result to the contents column, which is a blob.
>
> ByteBuffer file1 = readFile(filename);
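>
> For reference, the write path looks roughly like this (a minimal sketch, 
> assuming the DataStax Java driver and its UUIDs.timeBased() helper; the 
> statement text is illustrative, not copied verbatim from my code):
>
> PreparedStatement ps = session.prepare(
>     "INSERT INTO attachment (attachmentid, attachmentname, mimecontenttype, contents) "
>     + "VALUES (?, ?, ?, ?)");
> session.execute(ps.bind(UUIDs.timeBased(), "1.jpg", "image/jpeg", file1));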
>
> Then I retrieve an object I call Attachment, which is backed by the 
> following table definition:
>
> CREATE TABLE attachment (
>   attachmentid timeuuid,
>   attachmentname varchar,
>   mimecontenttype varchar,
>   contents blob,
>   PRIMARY KEY(attachmentid)
> );
>
> The ByteBuffer is saved under the contents column.
>
> I retrieve the stored blob with ByteBuffer file2 = readAttachment.getContents();
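>
> Roughly, the retrieval underneath looks like this (a sketch, assuming the 
> DataStax Java driver; Attachment and readAttachment are my own wrapper 
> names):
>
> Row row = session.execute(
>     "SELECT contents FROM attachment WHERE attachmentid = ?", attachmentId).one();
> ByteBuffer file2 = row.getBytes("contents");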
>
> I notice that file1 is java.nio.HeapByteBuffer[pos=99801 lim=99801 
> cap=99801], while the retrieved file2 is java.nio.HeapByteBuffer[pos=0 
> lim=99937 cap=99937].
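>
> To decode those numbers: pos is the current position, lim the limit, and 
> lim - pos the bytes remaining to read. A quick illustration, reusing the 
> FileChannel fChan from readFile above:
>
> ByteBuffer buf = ByteBuffer.allocate(99801); // pos=0, lim=cap=99801
> fChan.read(buf);  // a full read leaves pos=99801, lim=99801, remaining()=0
> buf.flip();       // resets pos=0 and sets lim=99801, so remaining()=99801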
>
> As you can see, the original size was 99801 when it was saved, and it is 
> now 99937. So when I take the retrieved ByteBuffer assigned to file2 and 
> write it to disk, I get a JPG that I cannot read because it has some 
> extra data in front of it.
>
> I am attaching the original JPG, 1.jpg, and 1_retrieved.jpg, which is the 
> one with size 99937.
>
> So there is a difference of 99937 - 99801 = 136 bytes between them.
>
> Therefore, saving the retrieved buffer to a physical file prevents the 
> system from displaying the JPG; it errors with: "Error interpreting JPEG 
> image file (Not a JPEG file: starts with 0x82 0x00)"
>
> Why is the blob retrieved from the column bigger than the one I initially 
> saved to it? Is there an offset that I need to deal with? If I call 
> buffer.arrayOffset() on the retrieved ByteBuffer, the value of the offset 
> is 0.
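>
> For what it's worth, this is the defensive way I know to copy the bytes 
> out, honoring position and offset rather than assuming they are 0 (a 
> sketch):
>
> ByteBuffer dup = file2.duplicate();      // leave the original's position alone
> byte[] data = new byte[dup.remaining()]; // only the bytes between pos and lim
> dup.get(data);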
>
> Here's the function I use to write file2 to the physical file 
> 1_retrieved.jpg: 
>
> private void writeFile(ByteBuffer buffer, String destname) {
>     try (RandomAccessFile out = new RandomAccessFile(destname, "rw")) {
>         byte[] data = new byte[buffer.limit()];
>         //buffer.flip();
>         buffer.get(data);
>         out.write(data);
>     } catch (IOException e) {
>         e.printStackTrace();
>     }
> }
>
> Thanks.
>