Posted to issues@hawq.apache.org by "Dong Li (JIRA)" <ji...@apache.org> on 2015/10/28 10:59:27 UTC

[jira] [Commented] (HAWQ-94) Storage error when insert a large tuple

    [ https://issues.apache.org/jira/browse/HAWQ-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14978105#comment-14978105 ] 

Dong Li commented on HAWQ-94:
-----------------------------

This is mainly caused by the function AppendOnlyStorageRead_PositionToNextBlock in cdbappendonlystorageread.c.
{code}
	if (i > 0)
	{
		if (storageRead->storageAttributes.version == AORelationVersion_Original)
		{
			i = i / 4 * 4;
		}
		else if (storageRead->storageAttributes.version == AORelationVersion_Aligned64bit)
		{
			i = i / 8 * 8;
		}
		*headerOffsetInFile += i;
		*header += i;
		storageRead->bufferedRead.bufferOffset += i;
	}

	/*
	 * Determine the maximum boundary of the block.
	 * UNDONE: When we have a block directory, we will tighten the limit down.
	 */
	fileRemainderLen = storageRead->bufferedRead.fileLen -
		               *headerOffsetInFile;
	if (storageRead->maxBufferLen > fileRemainderLen)
		*blockLimitLen = (int32)fileRemainderLen;
	else
		*blockLimitLen = storageRead->maxBufferLen;

	return true;
{code}
If fileRemainderLen <= 0, the function should return false. In fact fileRemainderLen can never be negative, but if the tail of the file is all zeros, the read position skips past them and fileRemainderLen ends up exactly 0, which produces a block limit length of 0 and trips the header check. The code could be modified as follows.
{code}
	fileRemainderLen = storageRead->bufferedRead.fileLen -
		               *headerOffsetInFile;
	if (fileRemainderLen <= 0)
		return false;
	if (storageRead->maxBufferLen > fileRemainderLen)
		*blockLimitLen = (int32)fileRemainderLen;
	else
		*blockLimitLen = storageRead->maxBufferLen;

	return true;
{code}

> Storage error when insert a large tuple
> ---------------------------------------
>
>                 Key: HAWQ-94
>                 URL: https://issues.apache.org/jira/browse/HAWQ-94
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Dong Li
>            Assignee: Lei Chang
>
> 1. Set guc value "appendonly_split_write_size_mb" 
> hawq config -c appendonly_split_write_size_mb -v 2
> 2. Run SQL
> set default_segment_num=1;
> create table eightbytleft_for_readsplit(str varchar) with (appendonly=true,blocksize=2097152,checksum=true);
> insert into eightbytleft_for_readsplit select repeat('a',2097136*63-8);
> WARNING:  skipping "eightbytleft_for_readsplit" --- error returned: Bad append-only storage header of type small content. Header check error 7, detail 'Append-only storage header is invalid -- overall block length 2097152 is > block limit length 0 (smallcontent_bytes_0_3 0x120003ff, smallcontent_bytes_4_7 0xfe000000)' (cdbappendonlystorageread.c:972)  (seg0 sdw1.hawq.greenplum.com:31100 pid=641344)
> DETAIL:
> Append-Only storage Small Content header: smallcontent_bytes_0_3 0x120003FF, smallcontent_bytes_4_7 0xFE000000, headerKind = 1, executorBlockKind = 2, rowCount = 0, usingChecksums = true, header checksum 0xAD931AF8, block checksum 0x3B923D8, dataLength 2097136, compressedLength 0, overallBlockLen 2097152
> Scan of Append-Only Row-Oriented relation 'eightbytleft_for_readsplit'. Append-Only segment file 'hdfs://smdw:9000/hawq/hawq-1446007451/16385/32094/32095/1', block header offset in file = 136314880, bufferCount 66
> INFO:  ANALYZE completed. Success: 0, Failure: 1 (eightbytleft_for_readsplit)
> INSERT 0 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)