Posted to dev@nifi.apache.org by pradeepbill <pr...@gmail.com> on 2016/05/31 14:29:44 UTC

back pressure

hi there, I am trying to understand what the text below from the NiFi documentation is saying.

1. What does back pressure really mean in NiFi? Here is my simple flow:
Some NiFi source -> Some NiFi processor.
If the amount of input data from the source is more than the amount of data the processor can process, is back pressure applied? And do the two parameters (“Back pressure object threshold” and “Back pressure data size threshold”) help in that case?

2. If back pressure is applied, I assume FlowFiles are queued up and sent in the order they were received. Please confirm.

3. Is back pressure applied by default?


"NiFi provides two configuration elements for Back Pressure. These
thresholds indicate how much data should be allowed to exist in the queue
before the component that is the source of the Connection is no longer
scheduled to run. This allows the system to avoid being overrun with data.
The first option provided is the “Back pressure object threshold.” This is
the number of FlowFiles that can be in the queue before back pressure is
applied. The second configuration option is the “Back pressure data size
threshold.” This specifies the maximum amount of data (in size) that should
be queued up before applying back pressure. This value is configured by
entering a number followed by a data size (B for bytes, KB for kilobytes, MB
for megabytes, GB for gigabytes, or TB for terabytes)."



Thanks
Pradeep




Re: back pressure

Posted by Bryan Bende <bb...@gmail.com>.
Your scenario brings up an interesting point... back pressure can only help you within NiFi, meaning that in a push scenario, NiFi has no way of telling the syslog server (or any external source) to stop sending messages.

The way the syslog processor works, there is a background thread receiving the messages and placing them on an internal queue, and the processor, when it runs, pulls messages off that internal queue [1].
So if back pressure were applied and the processor stopped running, the processor would stop pulling messages off the queue, but the messages would still keep arriving and being placed on the queue until it filled up, and at some point data would start getting dropped.

You may be able to tune the ListenSyslog processor such that you can avoid
needing back-pressure, based on some of the recommendations in the post
below.

[1]
https://community.hortonworks.com/articles/30424/optimizing-performance-of-apache-nifis-network-lis.html
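To illustrate why data gets dropped, here is a minimal conceptual sketch of a push-based listener with a bounded internal queue. This is not the actual ListenSyslog code; the capacity and method names are made up for illustration (though the real processor does expose a configurable maximum queue size, as discussed in [1]):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BoundedListenerSketch {
    // Hypothetical capacity; the real processor's queue size is a property.
    private final BlockingQueue<String> internalQueue = new LinkedBlockingQueue<>(10000);

    // Called by the background receiving thread for every incoming message.
    // The sender keeps pushing regardless of whether NiFi schedules the processor.
    void onMessageReceived(String message) {
        // offer() returns false instead of blocking when the queue is full,
        // so once back pressure stops the processor from draining the queue,
        // newly arriving messages are dropped here.
        if (!internalQueue.offer(message)) {
            System.err.println("Internal queue full - dropping message");
        }
    }

    // Called when the processor runs, i.e. while no back pressure is applied.
    String nextMessage() {
        return internalQueue.poll();
    }
}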



Re: back pressure

Posted by pradeepbill <pr...@gmail.com>.
Thanks Bryan, that helps a lot.
Here is my use case: I use a ListenSyslog processor -> Output Port (Spark application). Now, from the back-pressure discussion: assuming I am using “Back pressure data size threshold” = 100 MB and that size is reached, the ListenSyslog processor won't run; but the real source that the ListenSyslog processor is listening to is still emitting data. What happens to that data? Is it lost?




Re: back pressure

Posted by Bryan Bende <bb...@gmail.com>.
Hi Pradeep,

Back-pressure tells the source component to no longer execute when the
threshold has been reached.

For example, say you have a GetFile -> PutHDFS and you configure an object threshold of 100 on the queue between them.
If the queue gets to 100, GetFile will no longer be allowed to run until the queue drops below 100. As soon as it drops below the threshold, it starts running again, and the process repeats.
If you use the data size threshold, it is the total size of all flow files in the queue; so in the previous example it could be set to 100 MB, and GetFile would stop executing when the queue has 100 MB worth of data.

Overall, back-pressure is a way to respond to downstream components that are taking longer to process, slowing down the source components so they don't keep producing more data.
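
To make the scheduling rule concrete, here is a minimal sketch of the kind of check described above. This is not NiFi's actual scheduler code; all class and field names here are made up for illustration:

// A connection's source is only scheduled while BOTH thresholds are unmet.
class ConnectionQueueSketch {
    long flowFileCount;          // number of FlowFiles currently queued
    long totalBytes;             // total size of queued FlowFiles
    long objectThreshold;        // "Back pressure object threshold", e.g. 100
    long dataSizeThresholdBytes; // "Back pressure data size threshold", e.g. 100 MB

    boolean isBackPressureApplied() {
        return flowFileCount >= objectThreshold
                || totalBytes >= dataSizeThresholdBytes;
    }
}

class SchedulerSketch {
    // e.g. GetFile, with the queue to PutHDFS as its outgoing connection
    void maybeTrigger(Runnable sourceProcessor, ConnectionQueueSketch outgoing) {
        if (!outgoing.isBackPressureApplied()) {
            sourceProcessor.run(); // below both thresholds: allowed to run
        }
        // otherwise skip this round; the source resumes automatically
        // once the downstream component drains the queue below the thresholds
    }
}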

Hope this helps.

-Bryan



Re: back pressure

Posted by pradeepbill <pr...@gmail.com>.
OK, will get the latest and try; will update you soon.




Re: back pressure

Posted by Bryan Bende <bb...@gmail.com>.
Just wanted to add: if you are using Maven, it would be the nifi-spark-receiver 0.6.1, which would bring in all the other dependencies:

http://central.maven.org/maven2/org/apache/nifi/nifi-spark-receiver/0.6.1/nifi-spark-receiver-0.6.1.pom
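
In a pom.xml that would look roughly like this (a sketch using the coordinates from the POM linked above):

<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-spark-receiver</artifactId>
    <version>0.6.1</version>
</dependency>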

Otherwise you can grab the 0.6.1 version of each of the jars you listed.

Thanks,

Bryan


Re: back pressure

Posted by Bryan Bende <bb...@gmail.com>.
Glad to hear the exception was resolved!

Thanks for reporting the results.


Re: back pressure

Posted by pradeepbill <pr...@gmail.com>.
OK, I can confirm: I don't see the below exception after using the 0.6.1 jars. Thanks again, Bryan. I don't know what will happen when I get a burst of data (yet to hit that), but with the incubating jars I could not even process anything because of the exception below.


java.io.NotSerializableException:
org.apache.nifi.spark.NiFiReceiver$ReceiveRunnable$1
Serialization stack:


Re: back pressure

Posted by Bryan Bende <bb...@gmail.com>.
OK, can you try getting the 0.6.1 version of those jars and see if the problem still occurs?

I want to make sure it still happens in the latest code before we change anything, because I know we fixed a similar serialization issue before.

http://central.maven.org/maven2/org/apache/nifi/nifi-site-to-site-client/0.6.1/nifi-site-to-site-client-0.6.1.pom


Re: back pressure

Posted by pradeepbill <pr...@gmail.com>.
I am using the below jars; please see the respective versions. There is no default storage level really, but the examples listed below use the MEMORY_ONLY storage level. I had to change it because the MEMORY_ONLY option does not always work; sometimes when there is a spill we may need DISK as well.

https://community.hortonworks.com/articles/12708/nifi-feeding-data-to-spark-streaming.html
https://blogs.apache.org/nifi/entry/stream_processing_nifi_and_spark
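
For reference, in those examples the storage level is passed when constructing the receiver, roughly like this (a sketch; `ssc` is the JavaStreamingContext and `config` the SiteToSiteClientConfig built as shown in the linked posts):

// MEMORY_ONLY is what the linked examples use; swapping in a
// MEMORY_AND_DISK variant is what lets Spark spill blocks to disk.
JavaReceiverInputDStream<NiFiDataPacket> packetStream =
    ssc.receiverStream(new NiFiReceiver(config, StorageLevel.MEMORY_ONLY()));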


/home/parumalla/nifi-jars/nifi-site-to-site-client-0.0.2-incubating.jar,/home/parumalla/nifi-jars/nifi-spark-receiver-0.0.2-incubating.jar,/home/parumalla/nifi-jars/spark-assembly.jar,/home/parumalla/nifi-jars/spark-streaming_2.10-1.2.0.jar,/home/parumalla/nifi-jars/nifi-api-0.5.1.jar,/home/parumalla/nifi-jars/nifi-utils-0.0.2-incubating.jar,/home/parumalla/nifi-jars/nifi-client-dto-0.0.2-incubating.jar 


Thanks
Pradeep




Re: back pressure

Posted by Bryan Bende <bb...@gmail.com>.
Glad to hear you were able to optimize the NiFi side of things.

As for the other error, it looks like we might need to make the ReceiveRunnable class serializable.

Can you confirm what version of NiFi you are using? Also, I am not that familiar with Spark Streaming; what is the default StorageLevel?


Re: back pressure

Posted by pradeepbill <pr...@gmail.com>.
Here is a follow-up on that Spark exception. If I change the StorageLevel option to MEMORY_AND_DISK_SER:

JavaReceiverInputDStream packetStream =
    ssc.receiverStream(new NiFiReceiver(config, StorageLevel.MEMORY_AND_DISK_SER()));

I get the below exception:


16/06/01 12:50:28 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Failed to receive data from NiFi - java.io.NotSerializableException: org.apache.nifi.spark.NiFiReceiver$ReceiveRunnable$1
Serialization stack:
        - object not serializable (class: org.apache.nifi.spark.NiFiReceiver$ReceiveRunnable$1, value: org.apache.nifi.spark.NiFiReceiver$ReceiveRunnable$1@70fb979)
        at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
        at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
        at org.apache.spark.serializer.SerializationStream.writeAll(Serializer.scala:153)
        at org.apache.spark.storage.BlockManager.dataSerializeStream(BlockManager.scala:1189)
        at org.apache.spark.storage.BlockManager.dataSerialize(BlockManager.scala:1198)
        at org.apache.spark.storage.MemoryStore.putArray(MemoryStore.scala:131)
        at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:168)
        at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:142)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:790)
        at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:637)
        at org.apache.spark.streaming.receiver.BlockManagerBasedBlockHandler.storeBlock(ReceivedBlockHandler.scala:81)
        at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushAndReportBlock(ReceiverSupervisorImpl.scala:141)
        at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushIterator(ReceiverSupervisorImpl.scala:121)
        at org.apache.spark.streaming.receiver.Receiver.store(Receiver.scala:152)
        at org.apache.nifi.spark.NiFiReceiver$ReceiveRunnable.run(NiFiReceiver.java:182)
        at java.lang.Thread.run(Thread.java:745)





Re: back pressure

Posted by pradeepbill <pr...@gmail.com>.
Hi Bryan, here is my update since yesterday.
FYI, I was using ListenSyslog with TCP. After trying the recommendations from your article, I could get everything to work well, except that Spark throws exceptions like "Could not compute split, block input-0-1464774108087 not found", which I think happens when there are sudden bursts of data from NiFi to Spark and Spark does not have enough memory to hold all the data. But I guess that is a whole new problem.

Thanks
Pradeep


