Posted to issues@nifi.apache.org by "Mike Thomsen (Jira)" <ji...@apache.org> on 2022/03/29 11:19:00 UTC
[jira] [Resolved] (NIFI-7388) Bug reports about VolatileContentRepository and VolatileFlowFileRepository
[ https://issues.apache.org/jira/browse/NIFI-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mike Thomsen resolved NIFI-7388.
--------------------------------
Resolution: Won't Fix
VolatileContentRepository is removed in 1.17.0-SNAPSHOT.
> Bug reports about VolatileContentRepository and VolatileFlowFileRepository
> --------------------------------------------------------------------------
>
> Key: NIFI-7388
> URL: https://issues.apache.org/jira/browse/NIFI-7388
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.11.4
> Environment: jdk8_141
> docker: 18.04.0-ce
> Kubernetes: v1.13.3
> Reporter: zhangxinchen
> Priority: Major
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> 1. VolatileContentRepository
> When maxSize = 100 MB and blockSize = 2 KB, there should be 100 MB / 2 KB = 51200 block "slots".
> Writing 1 KB at a time, the raw capacity suggests 102400 writes (100 MB / 1 KB) should fit, but
> the 51201st 1 KB write fails with "java.io.IOException: Content Repository is out of space",
> apparently because each write consumes a full 2 KB block regardless of payload size. Here is the
> JUnit test I wrote.
> @Test
> public void test() throws IOException {
>     System.setProperty(NiFiProperties.PROPERTIES_FILE_PATH,
>             TestVolatileContentRepository.class.getResource("/conf/nifi.properties").getFile());
>
>     final Map<String, String> addProps = new HashMap<>();
>     addProps.put(VolatileContentRepository.BLOCK_SIZE_PROPERTY, "2 KB");
>     final NiFiProperties nifiProps = NiFiProperties.createBasicNiFiProperties(null, addProps);
>
>     final VolatileContentRepository contentRepo = new VolatileContentRepository(nifiProps);
>     contentRepo.initialize(claimManager);
>
>     // can write 100 * 1024 / 1 = 102400, but after 51200 writes the blocks are exhausted
>     for (int idx = 0; idx < 51201; ++idx) {
>         final ContentClaim claim = contentRepo.create(true);
>         try (final OutputStream out = contentRepo.write(claim)) {
>             final byte[] oneK = new byte[1024];
>             Arrays.fill(oneK, (byte) 55);
>             out.write(oneK);
>         }
>     }
> }
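[Editor's note] The block accounting the reporter describes can be sketched standalone. This is a hypothetical illustration, not NiFi source: `maxWrites` models a repository where every write claims whole fixed-size blocks, so a 1 KB payload still occupies a full 2 KB block.

```java
public class BlockAccountingSketch {

    // Hypothetical model of block-based accounting: each write claims
    // ceil(payload / blockSize) whole blocks out of maxSize / blockSize slots.
    static long maxWrites(long maxSize, long blockSize, long payload) {
        final long totalBlocks = maxSize / blockSize;
        final long blocksPerWrite = (payload + blockSize - 1) / blockSize;
        return totalBlocks / blocksPerWrite;
    }

    public static void main(String[] args) {
        // 100 MB repository, 2 KB blocks, 1 KB payloads: only 51200 writes
        // succeed, not the 102400 that the raw byte capacity suggests.
        System.out.println(maxWrites(100L * 1024 * 1024, 2L * 1024, 1024L));
    }
}
```

Under this model the 51201st write in the test above has no free block left, matching the reported IOException.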
> 2. VolatileFlowFileRepository
> When backpressure occurs, FileSystemSwapManager swaps FlowFiles out to disk whenever the
> swapQueue size exceeds 10000. The swap-out process works, but during swap-in
> VolatileFlowFileRepository never "acknowledges" the FlowFiles that were swapped out: when
> FileSystemSwapManager swaps FlowFiles back in from disk, it logs the warning
> "Cannot swap in FlowFiles from location..." because the implementation of
> "isValidSwapLocationSuffix" in VolatileFlowFileRepository always returns FALSE.
> As a result, the queue still appears FULL in the NiFi frontend and the upstream processor is
> STUCK; FileSystemSwapManager apparently "thinks" these FlowFiles have not been consumed.
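[Editor's note] The failure mode the reporter describes can be illustrated with a hypothetical sketch (again, not NiFi source). If the repository rejects every swap-location suffix, the swap manager never reclaims swapped-out FlowFiles:

```java
import java.util.List;

public class SwapInSketch {

    // Mirrors the reported behavior of VolatileFlowFileRepository:
    // every swap location is considered invalid.
    static boolean isValidSwapLocationSuffix(String suffix) {
        return false;
    }

    // Hypothetical swap-in step: an invalid location is rejected with a
    // warning and no FlowFiles are returned, so the queue stays "full".
    static List<String> swapIn(String location) {
        if (!isValidSwapLocationSuffix(location)) {
            System.out.println("Cannot swap in FlowFiles from location " + location);
            return List.of();
        }
        return List.of("flowfile-1", "flowfile-2");
    }

    public static void main(String[] args) {
        // Always prints the warning and yields an empty result.
        System.out.println(swapIn("partition-1.swap").size());
    }
}
```

Because swap-in never succeeds, the swapped-out FlowFiles are never counted as consumed, which would explain the stuck upstream processor.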
--
This message was sent by Atlassian Jira
(v8.20.1#820001)