Posted to dev@quickstep.apache.org by Navneet Potti <po...@wisc.edu> on 2016/09/03 02:48:55 UTC

BlockNotFoundInMemory

Hi folks,
I just wanted to report this error I got running TPCH SF100 Query 21 on Cloudlab (master branch from a few days ago). Unfortunately, I don’t think I’ll be able to reproduce it to debug it.

terminate called after throwing an instance of ‘quickstep::BlockNotFoundInMemory’
  what(): BlockNotFoundInMemory: The specified block was not found in memory

The error only occurred during the first run; the later runs seem to have run to completion.

Has anyone seen this error before?
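For anyone unfamiliar with the failure mode: an exception like this typically surfaces when a worker asks the buffer pool for a block id that has been evicted (or was never loaded) between the time the work was scheduled and the time the block is accessed. Here is a minimal sketch of that pattern; the class names and API here are hypothetical illustrations, not Quickstep's actual StorageManager code:

```cpp
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>

// Mirrors the message seen in the crash above.
struct BlockNotFoundInMemory : std::runtime_error {
  BlockNotFoundInMemory()
      : std::runtime_error(
            "BlockNotFoundInMemory: The specified block was not found in memory") {}
};

// Hypothetical buffer pool: blocks can be loaded, evicted under memory
// pressure, and looked up by id. A lookup of an evicted block throws.
class BufferPool {
 public:
  void load(int block_id, std::string data) {
    blocks_[block_id] = std::move(data);
  }

  void evict(int block_id) { blocks_.erase(block_id); }

  const std::string& get(int block_id) const {
    auto it = blocks_.find(block_id);
    if (it == blocks_.end()) {
      // The error reported above: the block id is valid, but the block
      // is no longer resident in memory when the worker touches it.
      throw BlockNotFoundInMemory();
    }
    return it->second;
  }

 private:
  std::unordered_map<int, std::string> blocks_;
};
```

If the real bug is a race of this shape (eviction between scheduling and access), that would also explain why it only showed up on the first, cold-cache run.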

As an aside, I also get this warning often when running some of the later queries.
tcmalloc: large alloc 1610612736 bytes == 0x99ed4000 @
tcmalloc: large alloc 1610612736 bytes == 0x89ec4000 @
tcmalloc: large alloc 1610612736 bytes == 0x89ec4000 @

This warning does not seem to affect query execution, as far as I can tell.
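For what it's worth, that number works out to exactly 1.5 GiB per request; tcmalloc prints the "large alloc" line as an informational report for unusually large single allocations, not as an error. A small sketch of the arithmetic (the names here are mine, purely for illustration):

```cpp
#include <cstdint>

// The size reported by tcmalloc in the warning above, in bytes.
constexpr std::uint64_t kReportedBytes = 1610612736ULL;

// Convert bytes to GiB: 1610612736 / 2^30 = 1.5, i.e. each warning
// corresponds to a single 1.5 GiB allocation request.
constexpr double BytesToGiB(std::uint64_t bytes) {
  return static_cast<double>(bytes) / (1ULL << 30);
}
```

So each warning is one 1.5 GiB request, which is consistent with the oversized hash table allocations discussed in the replies.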

Cheers,
Nav

Re: BlockNotFoundInMemory

Posted by Navneet Potti <na...@gmail.com>.
Created a JIRA issue (marked it Minor) and assigned it to JQ. 

https://issues.apache.org/jira/browse/QUICKSTEP-54

Any thoughts on the BlockNotFoundInMemory bug? Have you hit that one before? 


> On Sep 3, 2016, at 07:51, Jignesh Patel <jm...@gmail.com> wrote:
> 
> I have seen this before. I believe it is related to the hash table allocations, which are very large (actually larger than they need to be due to estimation errors). Tcmalloc warns on such allocations.
> 
> BTW: Jianqiao and Harshad, can we reduce the initial hash table allocation? I think starting with something small, e.g., 8K entries, and then doubling as needed (the code already does that) should work. We can try it.
> 
> Cheers,
> Jignesh 
> 
> 
> On 9/2/16, 9:48 PM, "Navneet Potti" <po...@wisc.edu> wrote:
> 
>    As an aside, I also get this warning often when running some of the later queries.
>    tcmalloc: large alloc 1610612736 bytes == 0x99ed4000 @
>    tcmalloc: large alloc 1610612736 bytes == 0x89ec4000 @
>    tcmalloc: large alloc 1610612736 bytes == 0x89ec4000 @
> 


Re: BlockNotFoundInMemory

Posted by Jignesh Patel <jm...@gmail.com>.
I have seen this before. I believe it is related to the hash table allocations, which are very large (actually larger than they need to be due to estimation errors). Tcmalloc warns on such allocations.

BTW: Jianqiao and Harshad, can we reduce the initial hash table allocation? I think starting with something small, e.g., 8K entries, and then doubling as needed (the code already does that) should work. We can try it.
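For concreteness, the growth strategy being proposed (start at 8K buckets, double on demand) looks roughly like the sketch below. This is a generic open-addressing illustration with hypothetical names, not Quickstep's actual HashTable implementation:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical illustration: start with a small bucket array (8K entries by
// default) and double it whenever the load factor would exceed 0.5, instead
// of sizing the table up front from a (possibly wrong) cardinality estimate.
class GrowableHashSet {
 public:
  explicit GrowableHashSet(std::size_t initial_buckets = 8 * 1024)
      : buckets_(initial_buckets), size_(0) {}

  void insert(long key) {
    if (contains(key)) return;
    if (size_ + 1 > buckets_.size() / 2) grow();  // keep load factor <= 0.5
    place(key);
    ++size_;
  }

  bool contains(long key) const {
    std::size_t i = index(key);
    while (buckets_[i].occupied) {
      if (buckets_[i].key == key) return true;
      i = (i + 1) % buckets_.size();  // linear probing
    }
    return false;
  }

  std::size_t capacity() const { return buckets_.size(); }

 private:
  struct Slot {
    long key = 0;
    bool occupied = false;
  };

  std::size_t index(long key) const {
    return static_cast<std::size_t>(key) % buckets_.size();
  }

  void place(long key) {
    std::size_t i = index(key);
    while (buckets_[i].occupied) i = (i + 1) % buckets_.size();
    buckets_[i] = Slot{key, true};
  }

  void grow() {
    // Double the bucket array and rehash, rather than guessing a huge
    // size up front from the optimizer's cardinality estimate.
    std::vector<Slot> old = std::move(buckets_);
    buckets_.assign(old.size() * 2, Slot{});
    for (const Slot& s : old) {
      if (s.occupied) place(s.key);
    }
  }

  std::vector<Slot> buckets_;
  std::size_t size_;
};
```

The tradeoff is rehashing cost on growth versus the multi-GiB up-front allocations that tcmalloc is flagging when the estimates are too high.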

Cheers,
Jignesh 


On 9/2/16, 9:48 PM, "Navneet Potti" <po...@wisc.edu> wrote:

    As an aside, I also get this warning often when running some of the later queries.
    tcmalloc: large alloc 1610612736 bytes == 0x99ed4000 @
    tcmalloc: large alloc 1610612736 bytes == 0x89ec4000 @
    tcmalloc: large alloc 1610612736 bytes == 0x89ec4000 @