Posted to dev@hbase.apache.org by "Billy Pearson (JIRA)" <ji...@apache.org> on 2008/09/19 07:36:44 UTC
[jira] Issue Comment Edited: (HBASE-884) Double and float converters for Bytes class
[ https://issues.apache.org/jira/browse/HBASE-884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12632520#action_12632520 ]
viper799 edited comment on HBASE-884 at 9/18/08 10:35 PM:
---------------------------------------------------------------
I have tried this patch and have a question about whether it is working correctly.
The error I am receiving:
{code}
2008-09-19 00:18:03,358 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:480)
at java.nio.HeapByteBuffer.getDouble(HeapByteBuffer.java:489)
at org.apache.hadoop.hbase.util.Bytes.toDouble(Bytes.java:200)
at com.compspy.mapred.SumInRank.map(SumInRank.java:79)
at org.apache.hadoop.hbase.mapred.TableMap.map(TableMap.java:42)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
{code}
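For context on that trace: java.nio's getDouble() throws BufferUnderflowException when fewer than 8 readable bytes remain, so a ByteBuffer-based Bytes.toDouble will fail exactly this way if the stored cell value is shorter than 8 bytes. A minimal sketch of the failure mode (the converter bodies are my assumption of what the patch does, not its actual code):

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class DoubleBytesDemo {

    // Hypothetical converters mirroring a ByteBuffer-based implementation;
    // the names follow Bytes.toBytes/Bytes.toDouble but this is a sketch.
    static byte[] toBytes(double d) {
        return ByteBuffer.allocate(8).putDouble(d).array();
    }

    static double toDouble(byte[] b) {
        // getDouble() needs 8 readable bytes; fewer throws BufferUnderflowException
        return ByteBuffer.wrap(b).getDouble();
    }

    public static void main(String[] args) {
        byte[] ok = toBytes(0.0025188916876574);
        System.out.println(toDouble(ok)); // round-trips the original value

        try {
            toDouble(new byte[4]); // shorter than 8 bytes
        } catch (BufferUnderflowException e) {
            System.out.println("BufferUnderflowException on a 4-byte value");
        }
    }
}
```

So the exception points at the length of the bytes handed to toDouble, not at the conversion arithmetic itself.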
I inserted a double by parsing it from a text file that I am testing imports with.
linea[1] is a string for the column
linea[3] is a string in the text file holding a double number, parsed as a double here:
{code}
new BatchOperation(Bytes.toBytes(linea[1]), Bytes.toBytes(Double.parseDouble(linea[3])))
{code}
So at the end of the job the reduce writes all of these to HBase; so far no problem.
In the next job I read the values back, sum them up, and write a new record.
line 79:
{code}
double RecordAmount = Bytes.toDouble(e.getValue().getValue());
{code}
e is a Cell from a RowResult.
Am I doing something wrong?
I scanned from the shell and the values look OK from there:
value=0.0025188916876574
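One diagnostic that may help narrow this down (a sketch of my own, not from the thread; the guard and class name are hypothetical): the underflow means the cell held fewer than 8 bytes, whereas a double stored as ASCII text, like the value=0.0025188916876574 the shell shows, is 18 bytes and would decode without an exception but to a garbage number. Checking the raw cell length before converting distinguishes the two cases:

```java
import java.nio.ByteBuffer;

public class CellCheck {

    // Hypothetical guard around a ByteBuffer-based toDouble.
    static double checkedToDouble(byte[] value) {
        if (value == null || value.length < 8) {
            throw new IllegalArgumentException("expected at least 8 bytes for a double, got "
                    + (value == null ? 0 : value.length));
        }
        return ByteBuffer.wrap(value).getDouble();
    }

    public static void main(String[] args) {
        // A double serialized as text is not 8 bytes:
        byte[] asText = "0.0025188916876574".getBytes();
        System.out.println(asText.length); // 18

        // Long enough to decode, but the result is not the original number:
        System.out.println(checkedToDouble(asText));

        // An empty or truncated cell is what actually fails:
        try {
            checkedToDouble(new byte[0]);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If the mapper logs the length of e.getValue().getValue() for the failing rows, it should show which cells are short of 8 bytes.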
> Double and float converters for Bytes class
> -------------------------------------------
>
> Key: HBASE-884
> URL: https://issues.apache.org/jira/browse/HBASE-884
> Project: Hadoop HBase
> Issue Type: Improvement
> Components: io
> Reporter: Doğacan Güney
> Priority: Minor
> Attachments: new_converters.patch
>
>
> Is there any reason why there are no double and float converters for Bytes class? They will certainly come in handy.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
Re: [jira] Issue Comment Edited: (HBASE-884) Double and float converters for Bytes class
Posted by "Edward J. Yoon" <ed...@apache.org>.
I wanted it. :)
On Fri, Sep 19, 2008 at 2:36 PM, Billy Pearson (JIRA) <ji...@apache.org> wrote:
--
Best regards, Edward J. Yoon
edwardyoon@apache.org
http://blog.udanax.org