Posted to user@phoenix.apache.org by Isart Montane <is...@gmail.com> on 2015/05/18 15:46:08 UTC

Phoenix and NodeJS

Hi,

the company I work for is performing some tests on Phoenix with NodeJS. For
simple queries I haven't had any problems, but as soon as I start to use our
app I get "process out of memory" errors on the client when I run queries
that return a large number of rows (e.g. 400k). I think the problem is that
the client tries to buffer all the results in RAM, and that kills it. The
same query runs fine when I run it with sqlline.

So, is there a way to tell the client to stream the results (or batch them)
instead of buffering them all? Is raising the client's memory the only
solution?

I'm using phoenix-4.3.1 and https://github.com/gaodazhu/phoenix-client as
the NodeJS driver

Thanks,

Isart Montane

Re: Phoenix and NodeJS

Posted by Isart Montane <is...@gmail.com>.
Really useful, thanks!

On Tue, May 19, 2015 at 2:30 PM, Vaclav Loffelmann <
vaclav.loffelmann@socialbakers.com> wrote:

> Hi,
> currently we are migrating all read services to Java. For Phoenix 3.0
> we used to use the Phoenix proxy server [1, 2].
>
> If you want to use your existing code and the native Java Phoenix client,
> you can consider interpreting JS on Nashorn (Java 8) with Avatar.js.
>
> Vaclav;
>
> [1] https://github.com/falsecz/phoenix-proxy-server
> [2] https://github.com/falsecz/node-phoenix-proxy

Re: Phoenix and NodeJS

Posted by Vaclav Loffelmann <va...@socialbakers.com>.
Hi,
currently we are migrating all read services to Java. For Phoenix 3.0
we used to use the Phoenix proxy server [1, 2].

If you want to use your existing code and the native Java Phoenix client,
you can consider interpreting JS on Nashorn (Java 8) with Avatar.js.

Vaclav;

[1] https://github.com/falsecz/phoenix-proxy-server
[2] https://github.com/falsecz/node-phoenix-proxy
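
As a quick sanity check that JS can run on the JVM at all, here is a minimal
Nashorn sketch using the standard javax.script API that ships with Java 8
(loading Avatar.js on top of it for Node-style modules is a separate setup
step not shown here):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornCheck {
    public static void main(String[] args) throws Exception {
        // Nashorn is registered as a javax.script engine in Java 8, so JS
        // runs inside the same JVM as the native Phoenix JDBC client.
        ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
        js.eval("print('hello from JS on the JVM')");
    }
}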

Re: Phoenix and NodeJS

Posted by Eli Levine <el...@gmail.com>.
Yeah, so you can see that code creates a String array containing the whole
result set. Usually a very bad idea for 400K-row result sets. You want to
process results incrementally, probably via paging using row-value
constructors and LIMIT.
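
A minimal sketch of that incremental shape, assuming the app can consume rows
one at a time (the Consumer callback here is hypothetical glue, not part of
Phoenix or the node driver):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.function.Consumer;

public class StreamingQuery {
    // Hand each row to the caller as it arrives; only the current row is
    // ever held in memory, so the footprint stays flat for 400K+ row scans.
    public static void query(Connection conn, String sql,
                             Consumer<String[]> rowHandler) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            ResultSetMetaData meta = rs.getMetaData();
            int cols = meta.getColumnCount();
            while (rs.next()) {
                String[] row = new String[cols];
                for (int i = 1; i <= cols; i++) {
                    row[i - 1] = rs.getString(i);
                }
                rowHandler.accept(row); // process or write out -- don't buffer
            }
        }
    }
}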

Re: Phoenix and NodeJS

Posted by Isart Montane <is...@gmail.com>.
Thanks James.

That code is from the node driver; I will try to get some advice from its
developer.

Thanks,

Re: Phoenix and NodeJS

Posted by James Taylor <ja...@apache.org>.
Hi Isart,
That code isn't Phoenix code. This sounds like a Node JS issue. Vaclav
has done a lot with Node JS, so he may be able to give you some tips.
Thanks,
James

Re: Phoenix and NodeJS

Posted by Isart Montane <is...@gmail.com>.
Hi Eli,

thanks a lot for your comments. I think you are right. I found the client
code that's causing the issue. Do you have an example I can use to patch
it? Is that the recommended way to access Phoenix? I've seen on the web
that there's also a query server available; is it worth a try?


public String[] query(String sql)
  {
    // Note: this buffers the ENTIRE result set in memory before returning --
    // the likely cause of the "process out of memory" errors on big queries.
    // join() is a string-joining helper defined elsewhere in the driver source.
    List<String> lsResults = new ArrayList<String>();
    Connection conn = null;
    try
    {
      conn = this.dataSource.getConnection();
      ResultSet rs = conn.createStatement().executeQuery(sql);
      ResultSetMetaData data = rs.getMetaData();
      int numberOfColumns = data.getColumnCount();
      List<String> lsRows = new ArrayList<String>();
      // First output row: tab-separated column names.
      for (int i = 1; i <= numberOfColumns; i++) {
        lsRows.add(data.getColumnName(i));
      }
      lsResults.add(join("\t", lsRows));
      lsRows.clear();
      // Every data row is joined into a string and accumulated in lsResults.
      while (rs.next())
      {
        for (int i = 1; i <= numberOfColumns; i++) {
          lsRows.add(rs.getString(i));
        }
        lsResults.add(join("\t", lsRows));
        lsRows.clear();
      }
      rs.close();
      conn.close();
    }
    catch (Exception e)
    {
      e.printStackTrace();
      return null;
    }
    return (String[])lsResults.toArray(new String[lsResults.size()]);
  }

Re: Phoenix and NodeJS

Posted by Eli Levine <el...@gmail.com>.
I don't have info on what your app does with results from Phoenix. If the
app is constructing some sort of object representations from Phoenix
results and holding on to them, I would look at what the memory footprint
of that is. I know this isn't very helpful but at this point I would try to
dig deeper into your app and the NodeJS driver rather than Phoenix, since
you mentioned the same queries run fine in sqlline.

Re: Phoenix and NodeJS

Posted by Isart Montane <is...@gmail.com>.
Hi Eli,

thanks a lot for your answer. That might be a workaround, but I was hoping
for a more generic fix I can apply at the driver/Phoenix level, since paging
would require lots of changes to my code.

Any clue why it works with sqlline but not through the node driver?

Re: Phoenix and NodeJS

Posted by Eli Levine <el...@gmail.com>.
Have you looked at paging [1] using Phoenix's row-value constructors
together with the LIMIT clause? That might be what you are looking for.

[1] http://phoenix.apache.org/paged.html

Eli
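
A sketch of that pattern against a hypothetical EVENTS table with composite
primary key (HOST, CREATED_AT) -- all table, column, and connection details
below are made up for illustration; the idea is to remember the last key of
each page and feed it back through a row value constructor:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class PagedScan {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        // The row value constructor (host, created_at) > (?, ?) restarts the
        // scan just past the last row of the previous page; LIMIT bounds each page.
        String sql = "SELECT host, created_at, payload FROM events"
                   + " WHERE (host, created_at) > (?, ?)"
                   + " ORDER BY host, created_at LIMIT 1000";
        String lastHost = "";                  // sentinel before any real key
        Timestamp lastTs = new Timestamp(0L);
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            while (true) {
                ps.setString(1, lastHost);
                ps.setTimestamp(2, lastTs);
                int rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows++;
                        lastHost = rs.getString("host");         // page boundary
                        lastTs = rs.getTimestamp("created_at");
                        // ... process the row here ...
                    }
                }
                if (rows < 1000) break;  // short page: end of table reached
            }
        }
        conn.close();
    }
}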