Posted to dev@kylin.apache.org by Vineet Mishra <cl...@gmail.com> on 2015/06/19 09:24:39 UTC

Kylin go down with Querying

Hi All,

I am making a plain select-all query against my cube, which has around 9
dimensions and 19 measures. The cube's source table size is 230 MB, and with a
cube expansion rate of 500% the resulting cube is about 1.1 GB.

The query I am running on top of Kylin makes the Kylin server go down. My next
step is to increase the server memory, which seems to be the concern, but I
would like opinions on whether some other cause could be in play.

The stack trace is below:

The configured limit of 1,000 object references was reached while
attempting to calculate the size of the object graph. Severe performance
degradation could occur if the sizing operation continues. This can be
avoided by setting the CacheManger or Cache <sizeOfPolicy> elements
maxDepthExceededBehavior to "abort" or adding stop points with
@IgnoreSizeOf annotations. If performance degradation is NOT an issue at
the configured limit, raise the limit value using the CacheManager or Cache
<sizeOfPolicy> elements maxDepth attribute. For more information, see the
Ehcache configuration documentation.
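
In case it helps anyone reading this, the message above comes from Ehcache's
size-of engine, which Kylin uses for its query cache. Below is a minimal,
self-contained sketch (not Kylin's own configuration; the cache names and
sizes are made up) of how the two knobs the warning mentions map onto the
Ehcache 2.x programmatic API:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;
import net.sf.ehcache.config.MemoryUnit;
import net.sf.ehcache.config.SizeOfPolicyConfiguration;

public class SizeOfPolicyDemo {

    public static void main(String[] args) {
        // The two knobs the warning mentions: raise maxDepth above the default
        // 1,000 references, and/or make the sizing abort instead of degrading.
        // The numbers and names here are illustrative only.
        SizeOfPolicyConfiguration sizeOfPolicy = new SizeOfPolicyConfiguration()
                .maxDepth(100000)
                .maxDepthExceededBehavior(
                        SizeOfPolicyConfiguration.MaxDepthExceededBehavior.ABORT);

        // Byte-based sizing (maxBytesLocalHeap) is what makes Ehcache walk the
        // object graph of cached values in the first place.
        CacheConfiguration cacheConfig = new CacheConfiguration()
                .name("demoQueryCache")
                .maxBytesLocalHeap(256, MemoryUnit.MEGABYTES)
                .sizeOfPolicy(sizeOfPolicy);

        CacheManager cacheManager = CacheManager.newInstance(
                new Configuration().name("demoCacheManager"));
        cacheManager.addCache(new Cache(cacheConfig));

        // ... put and get cached query results here ...

        cacheManager.shutdown();
    }
}

If Kylin's cache is configured via an ehcache.xml instead, the equivalent
change would be to the <sizeOfPolicy> element's maxDepth and
maxDepthExceededBehavior attributes, as the warning itself suggests.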


Thanks!

Re: Kylin go down with Querying

Posted by Vineet Mishra <cl...@gmail.com>.
What I can infer so far is that it works fine when the default limit (50k) is
applied in the Kylin query browser, and even if I query with a limit of 500k
records there is some latency but it does respond back.

There are currently 30 million records in my HBase table, and I guess the
aggregation query is trying to pull all of the data from start to end, which
causes the server to go down. But I am not sure: if the data size is the
issue, why/how does it bring the Kylin server down without any error in the
log?
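
To illustrate the point, here is a minimal sketch of that kind of bounded
query via Kylin's JDBC driver; the host, port, credentials, project and table
names are placeholders, not my actual setup. With an explicit LIMIT the server
never has to stream all 30 million rows back at once:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class KylinLimitQueryDemo {

    public static void main(String[] args) throws Exception {
        // Register Kylin's JDBC driver; URL, credentials, project and table
        // names below are placeholders, not an actual deployment.
        Class.forName("org.apache.kylin.jdbc.Driver");

        Properties props = new Properties();
        props.put("user", "ADMIN");
        props.put("password", "KYLIN");

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:kylin://localhost:7070/my_project", props);
             Statement stmt = conn.createStatement();
             // The explicit LIMIT keeps the result set bounded instead of
             // asking the server to return all rows at once.
             ResultSet rs = stmt.executeQuery(
                     "SELECT * FROM fact_table LIMIT 50000")) {

            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            System.out.println("Fetched " + rows + " rows");
        }
    }
}

Dropping the LIMIT from that statement is essentially what the select-all
query does, and that is the case where the server goes down.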

Thanks!


On Fri, Jun 19, 2015 at 2:21 PM, Li Yang <li...@apache.org> wrote:

> Need more logs; please post everything from the start of the query to its
> end as an attachment, then we can analyze.
>
> The log you posted is from the cache manager, by which time the query
> should already have finished.
>
>
>
> On Fri, Jun 19, 2015 at 3:24 PM, Vineet Mishra <cl...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > I am making a plain select-all query against my cube, which has around 9
> > dimensions and 19 measures. The cube's source table size is 230 MB, and
> > with a cube expansion rate of 500% the resulting cube is about 1.1 GB.
> >
> > The query I am running on top of Kylin makes the Kylin server go down.
> > My next step is to increase the server memory, which seems to be the
> > concern, but I would like opinions on whether some other cause could be
> > in play.
> >
> > The stack trace is below:
> >
> > The configured limit of 1,000 object references was reached while
> > attempting to calculate the size of the object graph. Severe performance
> > degradation could occur if the sizing operation continues. This can be
> > avoided by setting the CacheManger or Cache <sizeOfPolicy> elements
> > maxDepthExceededBehavior to "abort" or adding stop points with
> > @IgnoreSizeOf annotations. If performance degradation is NOT an issue at
> > the configured limit, raise the limit value using the CacheManager or
> Cache
> > <sizeOfPolicy> elements maxDepth attribute. For more information, see the
> > Ehcache configuration documentation.
> >
> >
> > Thanks!
> >
>

Re: Kylin go down with Querying

Posted by Li Yang <li...@apache.org>.
Need more logs; please post everything from the start of the query to its end
as an attachment, then we can analyze.

The log you posted is from the cache manager, by which time the query should
already have finished.



On Fri, Jun 19, 2015 at 3:24 PM, Vineet Mishra <cl...@gmail.com>
wrote:

> Hi All,
>
> I am making a plain select-all query against my cube, which has around 9
> dimensions and 19 measures. The cube's source table size is 230 MB, and
> with a cube expansion rate of 500% the resulting cube is about 1.1 GB.
>
> The query I am running on top of Kylin makes the Kylin server go down.
> My next step is to increase the server memory, which seems to be the
> concern, but I would like opinions on whether some other cause could be
> in play.
>
> The stack trace is below:
>
> The configured limit of 1,000 object references was reached while
> attempting to calculate the size of the object graph. Severe performance
> degradation could occur if the sizing operation continues. This can be
> avoided by setting the CacheManger or Cache <sizeOfPolicy> elements
> maxDepthExceededBehavior to "abort" or adding stop points with
> @IgnoreSizeOf annotations. If performance degradation is NOT an issue at
> the configured limit, raise the limit value using the CacheManager or Cache
> <sizeOfPolicy> elements maxDepth attribute. For more information, see the
> Ehcache configuration documentation.
>
>
> Thanks!
>