Posted to notifications@iotdb.apache.org by "DaweiLiu (Jira)" <ji...@apache.org> on 2021/08/18 10:18:00 UTC
[jira] [Commented] (IOTDB-1573) When the number of chunks to be
loaded at the same time exceeds the cache limit, query data is lost
[ https://issues.apache.org/jira/browse/IOTDB-1573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17400958#comment-17400958 ]
DaweiLiu commented on IOTDB-1573:
---------------------------------
First, write a file using the writer provided by TsFile.
Set `groupSizeInByte` in the config as small as possible so the file contains many chunk groups, simulating the number of chunk groups produced when a large data file is written.
{code:java}
File file = FSFactoryProducer.getFSFactory().getFile("target/LRUTest.tsfile");
TSFileConfig tsFileConfig = new TSFileConfig();
// Make the chunk group size small so the file contains many chunk groups.
tsFileConfig.setGroupSizeInByte(1024 * 30);
// Register the series "t.lid" so the writes below are accepted.
Schema schema = new Schema();
schema.registerTimeseries(new Path("t", "lid"),
    new MeasurementSchema("lid", TSDataType.INT32, TSEncoding.RLE));
TsFileWriter tsFileWriter = new TsFileWriter(file, schema, tsFileConfig);
{code}
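As a rough sanity check (the 12 bytes per point is an assumption: an 8-byte timestamp plus a 4-byte INT32 value, ignoring encoding and headers), this group size splits the 11 million points written below into a few thousand chunk groups:
{code:java}
public class ChunkGroupEstimate {
    public static void main(String[] args) {
        long points = 11_000_000L;        // total records written in this test
        long bytesPerPoint = 12;          // assumed: 8-byte timestamp + 4-byte INT32 value
        long groupSizeInByte = 1024 * 30; // the small group size configured above
        long chunkGroups = points * bytesPerPoint / groupSizeInByte;
        System.out.println(chunkGroups);  // prints 4296
    }
}
{code}
That is far more chunk groups than a default-sized config would produce for the same data.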
Then write records with an incrementing timestamp, occasionally duplicating a value at a later timestamp so that the same value appears in more than one chunk.
{code:java}
long lid = 1;
while (lid < 11_000_000L) {
    TSRecord tsRecord = new TSRecord(lid, "t");
    IntDataPoint linkPoint = new IntDataPoint("lid", (int) lid);
    tsRecord.addTuple(linkPoint);
    tsFileWriter.write(tsRecord);
    // Occasionally write a second record with the same value at a later timestamp.
    if (lid % 8_000_000L == 0) {
        TSRecord tsRecord2 = new TSRecord(lid + 11_000_000L, "t");
        IntDataPoint linkPoint1 = new IntDataPoint("lid", (int) lid);
        tsRecord2.addTuple(linkPoint1);
        tsFileWriter.write(tsRecord2);
    }
    lid++;
}
tsFileWriter.close();
{code}
Finally, execute a value-filter query for one of the written values; no data is returned.
{code:java}
// Open the file written above for reading.
TsFileSequenceReader tsFileSequenceReader = new TsFileSequenceReader(file.getPath());
ReadOnlyTsFile readOnlyTsFile = new ReadOnlyTsFile(tsFileSequenceReader);
SingleSeriesExpression singleSeriesExpression =
    new SingleSeriesExpression(new Path("t", "lid"), ValueFilter.eq(8000001));
QueryExpression queryExpression =
    QueryExpression.create(Arrays.asList(new Path("t", "lid")), singleSeriesExpression);
QueryDataSet queryDataSet = readOnlyTsFile.query(queryExpression);
// Expected: one row with value 8000001; actual: nothing is printed.
while (queryDataSet.hasNext()) {
    System.out.println(queryDataSet.next());
}
{code}
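A plausible mechanism for the loss, sketched here as an assumption rather than the actual TsFile implementation: if the chunk cache is a size-bounded LRU map and a single query needs more chunks than the cache capacity, chunks loaded early in the query are evicted before they are read back. A minimal LRU built on `LinkedHashMap` shows the effect:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class LruEvictionSketch {
    // Simple LRU cache: evicts the least-recently-used entry once size exceeds maxEntries.
    static <K, V> Map<K, V> lru(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> chunkCache = lru(3); // cache holds only 3 chunks
        // A query that needs 4 chunks loads them one by one...
        for (int chunkId = 0; chunkId < 4; chunkId++) {
            chunkCache.put(chunkId, "chunk-" + chunkId);
        }
        // ...and chunk 0 has already been evicted before the query reads it.
        System.out.println(chunkCache.containsKey(0)); // prints "false"
        System.out.println(chunkCache.keySet());       // prints "[1, 2, 3]"
    }
}
{code}
If the query code silently treats the evicted chunk as absent instead of reloading it, the matching points in that chunk never reach the result set.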
> When the number of chunks to be loaded at the same time exceeds the cache limit, query data is lost
> ---------------------------------------------------------------------------------------------------
>
> Key: IOTDB-1573
> URL: https://issues.apache.org/jira/browse/IOTDB-1573
> Project: Apache IoTDB
> Issue Type: Bug
> Components: Core/TsFile
> Reporter: DaweiLiu
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)