Posted to issues@nifi.apache.org by "Nicolas Sanglard (JIRA)" <ji...@apache.org> on 2018/03/28 13:49:00 UTC
[jira] [Created] (NIFI-5024) Deadlock in ExecuteStreamCommand processor
Nicolas Sanglard created NIFI-5024:
--------------------------------------
Summary: Deadlock in ExecuteStreamCommand processor
Key: NIFI-5024
URL: https://issues.apache.org/jira/browse/NIFI-5024
Project: Apache NiFi
Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Nicolas Sanglard
Attachments: Screen Shot 2018-03-28 at 15.34.36.png, Screen Shot 2018-03-28 at 15.36.02.png
Whenever a process produces too much output on stderr, the current implementation runs into a deadlock between the JVM and the Unix process started by ExecuteStreamCommand.
This is a known issue that is fully described here: [http://java-monitor.com/forum/showthread.php?t=4067]
In short:
* If the process writes more to stderr than ExecuteStreamCommand consumes, it blocks once the pipe buffer is full, until the data is read.
* The current processor implementation reads from stderr only after calling process.waitFor().
* Thus, the two processes wait for each other and fall into a deadlock.
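The interaction above can be sketched in plain Java. This is an illustrative sketch of the safe pattern, not NiFi's actual code: the command, class name, and buffer sizes are made up, but the point is that stderr must be drained concurrently, because calling process.waitFor() before reading it is exactly what deadlocks once the pipe buffer fills.

```java
import java.io.IOException;
import java.io.InputStream;

public class StderrDrain {

    // Runs a command that floods stderr (~1 MB, well past a typical OS pipe
    // buffer) while draining stderr on a background thread, then waits for exit.
    static int run() throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "sh", "-c", "head -c 1048576 /dev/zero | tr '\\0' 'x' >&2").start();

        // Drain stderr BEFORE waitFor(); reading it only after waitFor()
        // is the pattern that deadlocks once the pipe buffer is full.
        Thread drainer = new Thread(() -> {
            try (InputStream err = p.getErrorStream()) {
                byte[] buf = new byte[8192];
                while (err.read(buf) != -1) { /* discard or log */ }
            } catch (IOException ignored) { }
        });
        drainer.start();

        int exit = p.waitFor(); // safe: stderr is being consumed concurrently
        drainer.join();
        return exit;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("exit=" + run());
    }
}
```

Remove the drainer thread (and read stderr only after waitFor()) and the same program hangs forever, which matches the processor behavior described below.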
The following setup will lead to a deadlock:
A jar containing the following Main application:
{code:scala}
import scala.io.Source

object Main extends App {
  // Dump a ~1 MB classpath resource to stderr to overflow the pipe buffer.
  val str = Source.fromInputStream(this.getClass.getResourceAsStream("/1mb.txt")).mkString
  System.err.println(str)
}
{code}
The following NiFi Flow:
!Screen Shot 2018-03-28 at 15.34.36.png!
Configuration for ExecuteStreamCommand:
!Screen Shot 2018-03-28 at 15.36.02.png!
The script simply calls the jar:
{code:bash}
java -jar stderr.jar
{code}
Once the processor calls the script, it appears as "processing" indefinitely and can only be stopped by restarting NiFi.
I already have a running solution that I will publish as soon as possible.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)