Posted to dev@hawq.apache.org by "Ruilong Huo (JIRA)" <ji...@apache.org> on 2019/06/02 04:00:00 UTC

[jira] [Created] (HAWQ-1722) Core dump due to lock is not released before reporting errors when exceeding MaxAORelSegFileStatus

Ruilong Huo created HAWQ-1722:
---------------------------------

             Summary: Core dump due to lock is not released before reporting errors when exceeding MaxAORelSegFileStatus
                 Key: HAWQ-1722
                 URL: https://issues.apache.org/jira/browse/HAWQ-1722
             Project: Apache HAWQ
          Issue Type: Bug
            Reporter: Ruilong Huo
            Assignee: Radar Lei


Since the lock is not released before reporting the error, the transaction abort path panics when it tries to acquire the lock again: acquiring a RWLock twice within one process leads to a panic.
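
A minimal sketch of the failure sequence, assuming PostgreSQL-style LWLocks as used in the HAWQ backend (the AOSegFileLock name and the surrounding control flow are assumptions for illustration, not the exact HAWQ code):
```
	/* Illustrative only: lock name and context are assumptions. */
	LWLockAcquire(AOSegFileLock, LW_EXCLUSIVE);

	if (id == NEXT_END_OF_LIST)
		ereport(ERROR,
				(errmsg("cannot open more than %d append-only table "
						"segment files concurrently",
						MaxAORelSegFileStatus)));
	/* ereport(ERROR) longjmps to the abort handler with the lock still held. */

	/*
	 * During transaction abort, the cleanup code acquires the same lock
	 * again to release this backend's segment file statuses. Since the
	 * backend already holds the lock, this second acquire is what produces
	 * the reported panic/core dump.
	 */
```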

There are two occurrences of this bug in this function, one for AO tables and one for Parquet tables; the Parquet occurrence is quoted below.
```
			if (id == NEXT_END_OF_LIST)
			{
				pfree(allfsinfoParquet);

				ereport(ERROR, (errmsg("cannot open more than %d "
				      "append-only table segment "
				      "files cocurrently",
				      MaxAORelSegFileStatus)));

				return false;
			}
```
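
A hedged sketch of one possible fix for the Parquet occurrence: release the lock protecting the AO segment file status entries before raising the error, so the abort path can acquire it without panicking. The AOSegFileLock identifier below is an assumption about which lock is involved; the actual lock in the affected function may differ. The AO occurrence would need the same change.
```
			if (id == NEXT_END_OF_LIST)
			{
				pfree(allfsinfoParquet);

				/*
				 * Release the lock before ereport(ERROR) longjmps away;
				 * otherwise transaction abort re-acquires it and panics.
				 * (AOSegFileLock is an assumption about the lock involved.)
				 */
				LWLockRelease(AOSegFileLock);

				ereport(ERROR, (errmsg("cannot open more than %d "
				      "append-only table segment "
				      "files cocurrently",
				      MaxAORelSegFileStatus)));

				return false;
			}
```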



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)