Posted to dev@phoenix.apache.org by "Christophe S (JIRA)" <ji...@apache.org> on 2015/02/12 11:27:12 UTC
[jira] [Updated] (PHOENIX-1655) SpoolTooBigToDiskException when getting a huge resultset
[ https://issues.apache.org/jira/browse/PHOENIX-1655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Christophe S updated PHOENIX-1655:
----------------------------------
Description:
I am working with time series data and would like to extract all the values for one sensor (about 12M rows for one year). I expected Phoenix to stream the result set, but it seems it cannot.
My table is as follows:
||Column||Type||Max Length||
|M|VARCHAR|13|
|TS|TIMESTAMP|-|
|V|FLOAT|-|
|Q|CHAR|10|
|N|VARCHAR|10|
The code giving the exception:
{code}
final ResultSet rs = statement.executeQuery(
        "select m, ts, v from MY_TABLE where m = 'mySensor'");
while (rs.next()) {
    // ps is a PrintStream opened elsewhere
    final String line = rs.getString(1) + "," + rs.getDate(2).getTime() + "," + rs.getFloat(3);
    ps.println(line);
}
{code}
After a while I get the following stack trace:
{code}
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.iterate.SpoolTooBigToDiskException: result too big, max allowed(bytes): 1044971520
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:101)
at org.apache.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:289)
at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:44)
at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:66)
at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:86)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:732)
{code}
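The limit in the message appears to correspond to Phoenix's client-side cap on how many bytes a query may spool to disk. A possible workaround, assuming the property name below matches the installed version (it exists in Phoenix 4.x as {{phoenix.query.maxSpoolToDiskBytes}}), is to raise that cap in the client's hbase-site.xml; the value shown is an arbitrary example, not a recommended setting:
{code}
<!-- hbase-site.xml on the client: raise the maximum number of bytes Phoenix
     may spool to disk for a single query (example value ~2 GB; tune to fit
     your workload and available disk) -->
<property>
  <name>phoenix.query.maxSpoolToDiskBytes</name>
  <value>2147483648</value>
</property>
{code}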
> SpoolTooBigToDiskException when getting a huge resultset
> --------------------------------------------------------
>
> Key: PHOENIX-1655
> URL: https://issues.apache.org/jira/browse/PHOENIX-1655
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.2
> Environment: HDP2.2
> Reporter: Christophe S
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)