Posted to notifications@iotdb.apache.org by "wangyanhong (Jira)" <ji...@apache.org> on 2021/03/11 08:21:00 UTC

[jira] [Commented] (IOTDB-1207) Open time partition causes stackoverflow

    [ https://issues.apache.org/jira/browse/IOTDB-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17299399#comment-17299399 ] 

wangyanhong commented on IOTDB-1207:
------------------------------------

For reference

https://github.com/apache/iotdb/pull/2809

> Open time partition causes stackoverflow
> ----------------------------------------
>
>                 Key: IOTDB-1207
>                 URL: https://issues.apache.org/jira/browse/IOTDB-1207
>             Project: Apache IoTDB
>          Issue Type: Bug
>          Components: Cluster
>            Reporter: wangyanhong
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2021-03-11-10-26-50-700.png
>
>
> Opening time partitions causes a stack overflow when using six nodes and three replicas
> !image-2021-03-11-10-26-50-700.png!
> After a preliminary investigation, the problem occurs while fetching the schema during data insertion.
> The insertion reads remote schemas from mRemoteMetaCache, and if not all schemas are obtained, it falls back to reading the local schema.
> Before reading the remote schema, however, it first tries to get the corresponding device from the local
> mNodeCache. If the device is not found there, it skips reading the remote schema and reads only the local schema.
> Because the local mNodeCache does not contain remote devices, the remote schema in mRemoteMetaCache is never read. This causes the insert execution to fail and the forwarded plan to fail,
> and the failed insertion then triggers timeseries creation. In this process, the set of timeseries to be created is empty, which causes the timeseries-creation process to enter an infinite loop.
>  
> To solve this problem, I removed the code that gets the corresponding device from the local mNodeCache. However, this causes another problem: when the amount of data is large, the benchmark gets stuck and times out.
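The flawed lookup order described in the issue can be sketched as follows. This is a minimal illustration, not the actual IoTDB code: the class and member names (SchemaLookupSketch, localMNodeCache, remoteMetaCache, getSchema) are hypothetical stand-ins, assuming the local device check gates the remote schema read as described above.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the reported lookup order: the local device
// check gates the remote read, so a schema that exists only in the
// remote cache can never be fetched.
class SchemaLookupSketch {
    // Local cache knows only local devices.
    static Set<String> localMNodeCache =
            new HashSet<>(List.of("root.sg.localDevice"));

    // Remote cache actually holds the schema the insertion needs.
    static Map<String, String> remoteMetaCache =
            Map.of("root.sg.remoteDevice.s1", "INT32");

    // Flawed: if the device is missing from the local mNodeCache, the
    // remote read is skipped entirely, even though remoteMetaCache has
    // the schema. The caller then falls back to a local-only lookup,
    // which fails, so the insert (and the forwarded plan) fails.
    static String getSchema(String device, String measurement) {
        if (!localMNodeCache.contains(device)) {
            return null; // remote schema never consulted
        }
        return remoteMetaCache.get(device + "." + measurement);
    }
}
```

Under this sketch, a lookup for `root.sg.remoteDevice.s1` returns null despite the schema being present in the remote cache, which is the failure mode the comment describes.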



--
This message was sent by Atlassian Jira
(v8.3.4#803005)