Posted to users@nifi.apache.org by Lou Tian <ti...@gmail.com> on 2017/09/25 12:29:35 UTC

Memory leak for HandleHttpRequest processor?

Hi,

We are doing performance testing of our NiFi flow with Gatling, but after
several runs NiFi always hits an OutOfMemory error. I did not find
similar questions in the mailing list; if similar questions have already
been answered, please let me know.

*Problem description:*
We have a NiFi flow, and the normal flow works fine. To evaluate whether our
flow can handle the load, we decided to do a performance test with
Gatling.

1) We added two processors: HandleHttpRequest at the start of the flow and
HandleHttpResponse at the end. This makes our NiFi behave like a web service,
so Gatling can measure the response time. 2) We then continuously push
messages to the HandleHttpRequest processor.

*Problem*:
NiFi can only handle two runs; on the third run it fails and we have
to restart NiFi. I copied some of the error log here.

 o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
> HandleHttpRequest[id=**] failed to process session due to
> java.lang.OutOfMemoryError: Java heap space: {}
> o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
> HandleHttpRequest[id=**] failed to process session due to
> java.lang.OutOfMemoryError: Java heap space: {}
> java.lang.OutOfMemoryError: Java heap space
> at java.util.HashMap.values(HashMap.java:958)
> at
> org.apache.nifi.controller.repository.StandardProcessSession.resetWriteClaims(StandardProcessSession.java:2720)
> at
> org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:213)
> at
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:318)
> at
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
> at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
> at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)


So our final questions:
1. Do you think this is a problem in the HandleHttpRequest processor, or is
something wrong in our configuration? Is there anything we can do to avoid
this problem?
2. If it is the processor, do you plan to fix it in an upcoming version?

Thank you so much for your reply.

Kind Regards,
Tian

Re: Memory leak for HandleHttpRequest processor?

Posted by Lou Tian <ti...@gmail.com>.
Yeah, with 2G I did not get the OOME during my test. If I have more time,
I'll run it for longer.
If I get any updates about this, or find problems in the customised
processor, I will give you feedback.

Thanks for your help.

On Tue, Sep 26, 2017 at 2:32 PM, Joe Witt <jo...@gmail.com> wrote:

> The lock failure for the provenance repo can be ignored and it does
> recover.  In addition you can switch to using the
> WriteAheadProvenanceRepository which works in a completely different and
> faster way anyway.
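
The repository switch mentioned above is a single property change in
nifi.properties; a sketch of the relevant line (property name and class
as used in NiFi 1.x, worth verifying against your version's Admin Guide):

```properties
# nifi.properties: switch the provenance repository implementation
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
```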
>
> So once using 1GB heap you still saw an OOME but at 2GB you did not.
> Right?  I wonder if at 2GB if you ran for a longer period if you would see
> the OOME.
>
> How to do throughput testing:
> - I generally do not do this using RPC based options like HTTP requests
> in/out but rather using GenerateFlowFile.  But for your case and what
> you're trying to assess I think it is important to do it in a manner
> similar to how you are.  I'll take a closer look at your linked
> code/template to get a better idea and try to replicate your observations.
>
> Thanks
>
> On Tue, Sep 26, 2017 at 7:50 AM, Lou Tian <ti...@gmail.com> wrote:
>
>> Hi Joe,
>>
>> Yes, that one is for the 512M.
>>
>> 1. I did several tests; the results are in the table.
>>     It seems that the error trace is different every time.
>>     I doubt whether the error logs are really useful, but I copied
>> some here anyway.
>>
>> [image: Inline image 1]
>>
>> 2. I isolated the performance test into a small project:
>> https://github.com/lou78/TestPerformance.git .
>>     The FlowFile template is also inside the project, in case you need
>> to reproduce it.
>>     Currently the test sends 50 requests/second and lasts for 10 minutes.
>>     You can change this in BasicSimulation.scala.
>>
>> 3. *Question*: I'd like to test whether our flow can process a given
>> number of messages in a given time.
>>      Do you have suggestions for doing this kind of FlowFile performance
>> test?
>>
>> Thanks.
>>
>>
>> **********Some Error LOG********
>>
>> 2017-09-26 13:27:18,081 ERROR [Timer-Driven Process Thread-2]
>> o.a.n.p.standard.HandleHttpResponse HandleHttpResponse[id=bd0f5d7d-015e-1000-3402-763e31542bbd]
>> Failed to respond to HTTP request for
>> StandardFlowFileRecord[uuid=d9383563-d317-40ad-b449-37ea2806e7fe,claim=StandardContentClaim
>> [resourceClaim=StandardResourceClaim[id=1506425095749-29,
>> container=default, section=29], offset=781242,
>> length=418],offset=0,name=15431969122198,size=418] because FlowFile had
>> an 'http.context.identifier' attribute of de0f382b-0bac-4496-a74f-32e6197f378e
>> but could not find an HTTP Response Object for this identifier
>> 2017-09-26 13:28:58,259 ERROR [pool-10-thread-1] org.apache.nifi.NiFi An
>> Unknown Error Occurred in Thread Thread[pool-10-thread-1,5,main]:
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:58,261 ERROR [NiFi Web Server-307] org.apache.nifi.NiFi
>> An Unknown Error Occurred in Thread Thread[NiFi Web Server-307,5,main]:
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,800 WARN [qtp1908618024-320]
>> org.eclipse.jetty.server.HttpChannel /nifi/
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,800 WARN [qtp1908618024-378]
>> o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
>> org.eclipse.jetty.util.thread.QueuedThreadPool$2@577ddcc3 in
>> qtp1908618024{STARTED,8<=41<=200,i=22,q=0}
>> 2017-09-26 13:28:59,800 ERROR [Timer-Driven Process Thread-2]
>> o.a.n.p.standard.HandleHttpResponse HandleHttpResponse[id=bd0f5d7d-015e-1000-3402-763e31542bbd]
>> HandleHttpResponse[id=bd0f5d7d-015e-1000-3402-763e31542bbd] failed to
>> process due to java.lang.OutOfMemoryError: Java heap space; rolling back
>> session: {}
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,800 ERROR [pool-10-thread-1] org.apache.nifi.NiFi
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,801 ERROR [pool-36-thread-1] org.apache.nifi.NiFi An
>> Unknown Error Occurred in Thread Thread[pool-36-thread-1,5,main]:
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,801 ERROR [qtp1908618024-378] org.apache.nifi.NiFi An
>> Unknown Error Occurred in Thread Thread[qtp1908618024-378,5,main]:
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,801 ERROR [pool-36-thread-1] org.apache.nifi.NiFi
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,801 ERROR [qtp1908618024-378] org.apache.nifi.NiFi
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,805 ERROR [Provenance Maintenance Thread-2]
>> org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Provenance
>> Maintenance Thread-2,5,main]: java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,805 ERROR [Provenance Maintenance Thread-2]
>> org.apache.nifi.NiFi
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,805 WARN [qtp1908618024-364]
>> o.e.jetty.util.thread.QueuedThreadPool
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,805 ERROR [Timer-Driven Process Thread-5]
>> o.a.n.processors.standard.RouteOnContent RouteOnContent[id=bd0f36de-015e-1000-2103-c1d81aaa36dc]
>> RouteOnContent[id=bd0f36de-015e-1000-2103-c1d81aaa36dc] failed to
>> process due to java.lang.OutOfMemoryError: Java heap space; rolling back
>> session: {}
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,806 WARN [qtp1908618024-364]
>> o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
>> org.eclipse.jetty.util.thread.QueuedThreadPool$2@577ddcc3 in
>> qtp1908618024{STARTED,8<=41<=200,i=23,q=2}
>> 2017-09-26 13:28:59,806 ERROR [NiFi Web Server-307] org.apache.nifi.NiFi
>> java.lang.OutOfMemoryError: Java heap space
>> 2017-09-26 13:28:59,807 WARN [NiFi Web Server-374]
>> org.eclipse.jetty.server.HttpChannel /nifi-api/flow/process-groups/bd0e9451-015e-1000-c9a7-99594722fe60
>> java.lang.OutOfMemoryError: Java heap space
>> at java.util.LinkedHashMap.newNode(LinkedHashMap.java:256)
>> at java.util.HashMap.putVal(HashMap.java:641)
>> at java.util.HashMap.put(HashMap.java:611)
>> at sun.util.resources.OpenListResourceBundle.loadLookup(OpenListResourceBundle.java:146)
>> at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(OpenListResourceBundle.java:128)
>> at sun.util.resources.OpenListResourceBundle.handleKeySet(OpenListResourceBundle.java:96)
>> at java.util.ResourceBundle.containsKey(ResourceBundle.java:1807)
>> at sun.util.locale.provider.LocaleResources.getTimeZoneNames(LocaleResources.java:263)
>> at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayNameArray(TimeZoneNameProviderImpl.java:124)
>> at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayName(TimeZoneNameProviderImpl.java:99)
>> at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getName(TimeZoneNameUtility.java:240)
>> at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:198)
>> at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:184)
>> at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:281)
>> at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:265)
>> at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayNamesImpl(TimeZoneNameUtility.java:166)
>> at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:107)
>> at java.time.format.DateTimeFormatterBuilder$ZoneTextPrinterParser.getDisplayName(DateTimeFormatterBuilder.java:3650)
>> at java.time.format.DateTimeFormatterBuilder$ZoneTextPrinterParser.format(DateTimeFormatterBuilder.java:3689)
>> at java.time.format.DateTimeFormatterBuilder$CompositePrinterParser.format(DateTimeFormatterBuilder.java:2179)
>> at java.time.format.DateTimeFormatter.formatTo(DateTimeFormatter.java:1746)
>> at java.time.format.DateTimeFormatter.format(DateTimeFormatter.java:1720)
>> at org.apache.nifi.web.api.dto.util.TimeAdapter.marshal(TimeAdapter.java:43)
>> at org.apache.nifi.web.api.dto.util.TimeAdapter.marshal(TimeAdapter.java:33)
>> at org.codehaus.jackson.xc.XmlAdapterJsonSerializer.serialize(XmlAdapterJsonSerializer.java:38)
>> at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
>> at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
>> at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
>> at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
>> at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
>> at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
>> at org.codehaus.jackson.map.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:72)
>> 2017-09-26 13:28:59,804 ERROR [FileSystemRepository Workers Thread-3]
>> org.apache.nifi.engine.FlowEngine A flow controller task execution
>> stopped abnormally
>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>> Java heap space
>> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>
>> ----------------------
>> *For 2G, I cannot get OOME, but another error:*
>> 2017-09-26 11:40:52,823 ERROR [Provenance Repository Rollover Thread-1]
>> o.a.n.p.PersistentProvenanceRepository
>> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
>> NativeFSLock@/Users/tlou/nifi-1.3.0/provenance_repository/index-1506411423000/write.lock
>> at org.apache.lucene.store.Lock.obtain(Lock.java:89)
>> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
>> at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
>> at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
>> at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1677)
>> at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1265)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> ------------------------------------------
>>
>>
>>
>> On Mon, Sep 25, 2017 at 6:55 PM, Joe Witt <jo...@gmail.com> wrote:
>>
>>> Tian,
>>>
>>> Ok - and was this with the 512MB heap again?  Can you try with a 1GB
>>> or 2GB heap and see if we're just looking at our minimum needs being
>>> an issue or if we're looking at what sounds like a leak.
>>>
>>> Thanks
>>>
>>> On Mon, Sep 25, 2017 at 12:41 PM, Lou Tian <ti...@gmail.com>
>>> wrote:
>>> > Hi Joe,
>>> >
>>> > I tested with a simple flow.
>>> > Only 4 processors: HandleHttpRequest, RouteOnContent,
>>> > HandleHttpResponse and DebugFlow.
>>> > I ran the test 3 times (10 minutes per run and at most 50 users).
>>> > It works fine for the first 2 runs, and on the third run I got the error.
>>> >
>>> > I copied part of the log file. Please check whether it helps to
>>> > identify the error.
>>> >
>>> > 2017-09-25 18:21:45,673 INFO [Provenance Maintenance Thread-2]
>>> > o.a.n.p.PersistentProvenanceRepository Created new Provenance Event
>>> > Writers for events starting with ID 131158
>>> >
>>> > 2017-09-25 18:24:00,921 ERROR [FileSystemRepository Workers Thread-3]
>>> > o.a.n.c.repository.FileSystemRepository Failed to handle destructable
>>> > claims due to java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,921 ERROR [Flow Service Tasks Thread-1]
>>> > org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Flow
>>> > Service Tasks Thread-1,5,main]: java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,922 WARN [qtp574205748-107]
>>> > o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
>>> > org.eclipse.jetty.util.thread.QueuedThreadPool$2@1e3a5886 in
>>> > qtp574205748{STARTED,8<=13<=200,i=4,q=0}
>>> > 2017-09-25 18:24:00,923 INFO [Provenance Repository Rollover Thread-1]
>>> > o.a.n.p.lucene.SimpleIndexManager Index Writer for
>>> > ./provenance_repository/index-1506354574000 has been returned to Index
>>> > Manager and is no longer in use. Closing Index Writer
>>> > 2017-09-25 18:24:00,925 ERROR [qtp574205748-107] org.apache.nifi.NiFi An
>>> > Unknown Error Occurred in Thread Thread[qtp574205748-107,5,main]:
>>> > java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,929 INFO [pool-10-thread-1]
>>> > o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of
>>> > FlowFile Repository
>>> > 2017-09-25 18:24:00,929 ERROR [Flow Service Tasks Thread-1]
>>> > org.apache.nifi.NiFi
>>> > java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,928 ERROR [Listen to Bootstrap]
>>> > org.apache.nifi.BootstrapListener Failed to process request from
>>> > Bootstrap due to java.lang.OutOfMemoryError: Java heap space
>>> > java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,929 WARN [NiFi Web Server-215]
>>> > org.eclipse.jetty.server.HttpChannel /nifi-api/flow/controller/bulletins
>>> > java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,930 ERROR [pool-30-thread-1] org.apache.nifi.NiFi An
>>> > Unknown Error Occurred in Thread Thread[pool-30-thread-1,5,main]:
>>> > java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,929 ERROR [Event-Driven Process Thread-3]
>>> > org.apache.nifi.engine.FlowEngine A flow controller task execution
>>> > stopped abnormally
>>> > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>> > Java heap space
>>> >    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>> >    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>>> >    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>>> >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>>> >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> >    at java.lang.Thread.run(Thread.java:748)
>>> > Caused by: java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,931 ERROR [Scheduler-1985086499]
>>> > org.apache.nifi.NiFi An Unknown Error Occurred in Thread
>>> > Thread[Scheduler-1985086499,5,main]: java.lang.OutOfMemoryError: Java heap space
>>> > 2017-09-25 18:24:00,930 ERROR [Cleanup Archive for default]
>>> > org.apache.nifi.engine.FlowEngine A flow controller task execution
>>> > stopped abnormally
>>> > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>> > Java heap space
>>> >    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>> >    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>>> >    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>>> >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>>> >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> >    at java.lang.Thread.run(Thread.java:748)
>>> > Caused by: java.lang.OutOfMemoryError: Java heap space
>>> >
>>> >
>>> >
>>> >
>>> > Kind Regards,
>>> > Tian
>>> >
>>> > On Mon, Sep 25, 2017 at 4:02 PM, Lou Tian <ti...@gmail.com>
>>> wrote:
>>> >>
>>> >> Hi Joe, Thanks for your reply.
>>> >> I will try to do those tests. And update you with the results.
>>> >>
>>> >> On Mon, Sep 25, 2017 at 3:56 PM, Joe Witt <jo...@gmail.com> wrote:
>>> >>>
>>> >>> Tian
>>> >>>
>>> >>> The most common sources of memory leaks in custom processors:
>>> >>> 1) Loading large objects (the contents of a flowfile, for example)
>>> >>> into memory through byte[], or doing so via libraries that do this
>>> >>> without you realizing it.  Doing this in parallel makes the problem
>>> >>> even more obvious.
>>> >>> 2) Caching objects in memory without providing bounds on the cache,
>>> >>> or not sizing the JVM heap appropriately for your flow.
>>> >>> 3) Pulling in lots of flowfiles in a single session, or creating
>>> >>> many in a single session.
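
To illustrate point 2 above: a minimal, NiFi-agnostic sketch of a bounded
cache built from plain JDK classes (the LinkedHashMap eviction hook); the
class name and entry limit are illustrative, not part of any NiFi API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCacheDemo {
    // A size-bounded LRU cache: once MAX_ENTRIES is exceeded, the
    // least-recently-used entry is evicted instead of accumulating
    // on the heap indefinitely.
    static class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private static final int MAX_ENTRIES = 1000; // tune to your flow

        BoundedCache() {
            super(16, 0.75f, true); // access-order, so eviction is LRU
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > MAX_ENTRIES;
        }
    }

    public static void main(String[] args) {
        BoundedCache<Integer, String> cache = new BoundedCache<>();
        for (int i = 0; i < 5000; i++) {
            cache.put(i, "value-" + i);
        }
        // The cache never grows past its bound, no matter how many
        // distinct entries the processor has seen.
        System.out.println(cache.size()); // prints 1000
    }
}
```

The same idea applies to a processor that caches dynamic properties fetched
over REST: without a bound (or a TTL), every distinct key stays on the heap
forever.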
>>> >>>
>>> >>> Try moving to a 1GB heap and see if the problem still happens.  Is
>>> >>> it as fast?  Does the OOME go away?  Try 2GB if needed.  After that,
>>> >>> suspect a leak.
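
Concretely, the heap sizes suggested here are set in conf/bootstrap.conf;
a sketch using the same java.arg lines quoted elsewhere in this thread
(the arg indices may differ in your file):

```properties
# conf/bootstrap.conf: raise the JVM heap from the 512 MB default
java.arg.2=-Xms2g
java.arg.3=-Xmx2g
```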
>>> >>>
>>> >>> We don't have a benchmarking unit-test sort of mechanism.
>>> >>>
>>> >>> Thanks
>>> >>>
>>> >>> On Mon, Sep 25, 2017 at 9:45 AM, Lou Tian <ti...@gmail.com>
>>> wrote:
>>> >>> > Hi Joe,
>>> >>> >
>>> >>> > 1. I will build a simple flow without our customised processor and
>>> >>> > test again.
>>> >>> >     It is a good test idea. We saw the OOME under
>>> >>> > HandleHttpRequest; we never thought about the others.
>>> >>> >
>>> >>> > 2. About our customised processor: we use lots of these customised
>>> >>> > processors.
>>> >>> >     Properties are dynamic. We fetch the properties via a REST
>>> >>> > call and cache them.
>>> >>> >     Sorry, I cannot show you the code.
>>> >>> >
>>> >>> > 3. We have unit tests for the customised processors.
>>> >>> >    Is there a way to test for memory leaks in a unit test using
>>> >>> > methods provided by nifi?
>>> >>> >
>>> >>> > Thanks.
>>> >>> >
>>> >>> > On Mon, Sep 25, 2017 at 3:28 PM, Joe Witt <jo...@gmail.com>
>>> wrote:
>>> >>> >>
>>> >>> >> Tian,
>>> >>> >>
>>> >>> >> Ok, thanks.  I'd try removing your customized processor from the
>>> >>> >> flow entirely and running your tests.  This will give you a sense
>>> >>> >> of base nifi and the stock processors.  Once you're comfortable
>>> >>> >> with that, then add your processor in.
>>> >>> >>
>>> >>> >> I say this because if your custom processor is using up the heap,
>>> >>> >> we will see OOMEs in various places.  That the error shows up in
>>> >>> >> the core framework code, for example, does not mean that is the
>>> >>> >> cause.
>>> >>> >>
>>> >>> >> Does your custom processor hold anything in class-level variables?
>>> >>> >> Does it open a session and keep accumulating flowfiles?  If you
>>> >>> >> can talk more about what it is doing, or share a link to the code,
>>> >>> >> we could quickly assess that.
>>> >>> >>
>>> >>> >> Thanks
>>> >>> >>
>>> >>> >> On Mon, Sep 25, 2017 at 9:24 AM, Lou Tian <tian.lou.293@gmail.com
>>> >
>>> >>> >> wrote:
>>> >>> >> > 1. The HandleHttpRequest Processor get the message.
>>> >>> >> > 2. The message route to other processors based on the attribute.
>>> >>> >> > 3. We have our customised processor to process the message.
>>> >>> >> > 4. Then message would be redirected to the HandleHttpResponse.
>>> >>> >> >
>>> >>> >> > On Mon, Sep 25, 2017 at 3:20 PM, Joe Witt <jo...@gmail.com>
>>> >>> >> > wrote:
>>> >>> >> >>
>>> >>> >> >> What is the flow doing in between the request/response portion?
>>> >>> >> >> Please share more details about the configuration overall.
>>> >>> >> >>
>>> >>> >> >> Thanks
>>> >>> >> >>
>>> >>> >> >> On Mon, Sep 25, 2017 at 9:16 AM, Lou Tian <
>>> tian.lou.293@gmail.com>
>>> >>> >> >> wrote:
>>> >>> >> >> > Hi Joe,
>>> >>> >> >> >
>>> >>> >> >> > java version: 1.8.0_121
>>> >>> >> >> > heap size:
>>> >>> >> >> > # JVM memory settings
>>> >>> >> >> > java.arg.2=-Xms512m
>>> >>> >> >> > java.arg.3=-Xmx512m
>>> >>> >> >> > nifi version: 1.3.0
>>> >>> >> >> >
>>> >>> >> >> > Also, we put Nifi in the Docker.
>>> >>> >> >> >
>>> >>> >> >> > Kind Regrads,
>>> >>> >> >> > Tian
>>> >>> >> >> >
>>> >>> >> >> > On Mon, Sep 25, 2017 at 2:39 PM, Joe Witt <
>>> joe.witt@gmail.com>
>>> >>> >> >> > wrote:
>>> >>> >> >> >>
>>> >>> >> >> >> Tian,
>>> >>> >> >> >>
>>> >>> >> >> >> Please provide information on the JRE being used
>>> >>> >> >> >> (java -version) and the environment configuration.  How
>>> >>> >> >> >> large is your heap?  This can be found in
>>> >>> >> >> >> conf/bootstrap.conf.  What version of nifi are you using?
>>> >>> >> >> >>
>>> >>> >> >> >> Thanks
>>> >>> >> >> >>
>>> >>> >> >> >
>>> >>> >> >> >
>>> >>> >> >> >
>>> >>> >> >> >
>>> >>> >> >> > --
>>> >>> >> >> > Kind Regards,
>>> >>> >> >> >
>>> >>> >> >> > Tian Lou
>>> >>> >> >> >
>>> >>> >> >
>>> >>> >> >
>>> >>> >> >
>>> >>> >> >
>>> >>> >> > --
>>> >>> >> > Kind Regards,
>>> >>> >> >
>>> >>> >> > Tian Lou
>>> >>> >> >
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> > --
>>> >>> > Kind Regards,
>>> >>> >
>>> >>> > Tian Lou
>>> >>> >
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Kind Regards,
>>> >>
>>> >> Tian Lou
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Kind Regards,
>>> >
>>> > Tian Lou
>>> >
>>>
>>
>>
>>
>> --
>> Kind Regards,
>>
>> Tian Lou
>>
>>
>


-- 
Kind Regards,

Tian Lou

Re: Memory leak for HandleHttpRequest processor?

Posted by Joe Witt <jo...@gmail.com>.
The lock failure for the provenance repo can be ignored and it does
recover.  In addition you can switch to using the
WriteAheadProvenanceRepository which works in a completely different and
faster way anyway.

So once using 1GB heap you still saw an OOME but at 2GB you did not.
Right?  I wonder if at 2GB if you ran for a longer period if you would see
the OOME.

How to do throughput testing:
- I generally do not do this using RPC-based options like HTTP requests
in/out, but rather using GenerateFlowFile.  But for your case and what
you're trying to assess, I think it is important to do it in a manner
similar to how you are.  I'll take a closer look at your linked
code/template to get a better idea and try to replicate your observations.

Thanks

On Tue, Sep 26, 2017 at 7:50 AM, Lou Tian <ti...@gmail.com> wrote:

> Hi Joe,
>
> Yes, that one is for the 512M.
>
> 1. I did several tests; the results are in the table.
>     The error trace is different every time.
>     I doubt the error logs are really useful, but I copied some here
> anyway.
>
> [image: Inline image 1]
>
> 2. I isolated the performance test into a small project at
> https://github.com/lou78/TestPerformance.git .
>     The FlowFile template is also inside the project, in case you
> need to reproduce it.
>     Currently the test sends 50 requests/second and lasts for 10
> minutes.
>     You can change it in BasicSimulation.scala.
>
> 3. *Question*: I'd like to test whether our flow can process a given
> number of messages in a given time.
>      Do you have suggestions for doing the flow performance test?
>
> Thanks.
>
>
> **********Some Error LOG********
>
> 2017-09-26 13:27:18,081 ERROR [Timer-Driven Process Thread-2]
> o.a.n.p.standard.HandleHttpResponse HandleHttpResponse[id=
> bd0f5d7d-015e-1000-3402-763e31542bbd] Failed to respond to HTTP request
> for StandardFlowFileRecord[uuid=d9383563-d317-40ad-b449-
> 37ea2806e7fe,claim=StandardContentClaim [resourceClaim=
> StandardResourceClaim[id=1506425095749-29, container=default,
> section=29], offset=781242, length=418],offset=0,name=15431969122198,size=418]
> because FlowFile had an 'http.context.identifier' attribute of
> de0f382b-0bac-4496-a74f-32e6197f378e but could not find an HTTP Response
> Object for this identifier
> 2017-09-26 13:28:58,259 ERROR [pool-10-thread-1] org.apache.nifi.NiFi An
> Unknown Error Occurred in Thread Thread[pool-10-thread-1,5,main]:
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:58,261 ERROR [NiFi Web Server-307] org.apache.nifi.NiFi
> An Unknown Error Occurred in Thread Thread[NiFi Web Server-307,5,main]:
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,800 WARN [qtp1908618024-320] org.eclipse.jetty.server.HttpChannel
> /nifi/
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,800 WARN [qtp1908618024-378] o.e.jetty.util.thread.QueuedThreadPool
> Unexpected thread death: org.eclipse.jetty.util.thread.
> QueuedThreadPool$2@577ddcc3 in qtp1908618024{STARTED,8<=41<=200,i=22,q=0}
> 2017-09-26 13:28:59,800 ERROR [Timer-Driven Process Thread-2]
> o.a.n.p.standard.HandleHttpResponse HandleHttpResponse[id=
> bd0f5d7d-015e-1000-3402-763e31542bbd] HandleHttpResponse[id=
> bd0f5d7d-015e-1000-3402-763e31542bbd] failed to process due to
> java.lang.OutOfMemoryError: Java heap space; rolling back session: {}
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,800 ERROR [pool-10-thread-1] org.apache.nifi.NiFi
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,801 ERROR [pool-36-thread-1] org.apache.nifi.NiFi An
> Unknown Error Occurred in Thread Thread[pool-36-thread-1,5,main]:
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,801 ERROR [qtp1908618024-378] org.apache.nifi.NiFi An
> Unknown Error Occurred in Thread Thread[qtp1908618024-378,5,main]:
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,801 ERROR [pool-36-thread-1] org.apache.nifi.NiFi
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,801 ERROR [qtp1908618024-378] org.apache.nifi.NiFi
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,805 ERROR [Provenance Maintenance Thread-2]
> org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Provenance
> Maintenance Thread-2,5,main]: java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,805 ERROR [Provenance Maintenance Thread-2]
> org.apache.nifi.NiFi
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,805 WARN [qtp1908618024-364] o.e.jetty.util.thread.
> QueuedThreadPool
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,805 ERROR [Timer-Driven Process Thread-5]
> o.a.n.processors.standard.RouteOnContent RouteOnContent[id=bd0f36de-015e-1000-2103-c1d81aaa36dc]
> RouteOnContent[id=bd0f36de-015e-1000-2103-c1d81aaa36dc] failed to process
> due to java.lang.OutOfMemoryError: Java heap space; rolling back session: {}
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,806 WARN [qtp1908618024-364] o.e.jetty.util.thread.QueuedThreadPool
> Unexpected thread death: org.eclipse.jetty.util.thread.
> QueuedThreadPool$2@577ddcc3 in qtp1908618024{STARTED,8<=41<=200,i=23,q=2}
> 2017-09-26 13:28:59,806 ERROR [NiFi Web Server-307] org.apache.nifi.NiFi
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-26 13:28:59,807 WARN [NiFi Web Server-374]
> org.eclipse.jetty.server.HttpChannel /nifi-api/flow/process-groups/
> bd0e9451-015e-1000-c9a7-99594722fe60
> java.lang.OutOfMemoryError: Java heap space
> at java.util.LinkedHashMap.newNode(LinkedHashMap.java:256)
> at java.util.HashMap.putVal(HashMap.java:641)
> at java.util.HashMap.put(HashMap.java:611)
> at sun.util.resources.OpenListResourceBundle.loadLookup(
> OpenListResourceBundle.java:146)
> at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(
> OpenListResourceBundle.java:128)
> at sun.util.resources.OpenListResourceBundle.handleKeySet(
> OpenListResourceBundle.java:96)
> at java.util.ResourceBundle.containsKey(ResourceBundle.java:1807)
> at sun.util.locale.provider.LocaleResources.getTimeZoneNames(
> LocaleResources.java:263)
> at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayNameArray(
> TimeZoneNameProviderImpl.java:124)
> at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayName(
> TimeZoneNameProviderImpl.java:99)
> at sun.util.locale.provider.TimeZoneNameUtility$
> TimeZoneNameGetter.getName(TimeZoneNameUtility.java:240)
> at sun.util.locale.provider.TimeZoneNameUtility$
> TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:198)
> at sun.util.locale.provider.TimeZoneNameUtility$
> TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:184)
> at sun.util.locale.provider.LocaleServiceProviderPool.
> getLocalizedObjectImpl(LocaleServiceProviderPool.java:281)
> at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObject(
> LocaleServiceProviderPool.java:265)
> at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayNamesImpl(
> TimeZoneNameUtility.java:166)
> at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayNames(
> TimeZoneNameUtility.java:107)
> at java.time.format.DateTimeFormatterBuilder$ZoneTextPrinterParser.
> getDisplayName(DateTimeFormatterBuilder.java:3650)
> at java.time.format.DateTimeFormatterBuilder$ZoneTextPrinterParser.format(
> DateTimeFormatterBuilder.java:3689)
> at java.time.format.DateTimeFormatterBuilder$
> CompositePrinterParser.format(DateTimeFormatterBuilder.java:2179)
> at java.time.format.DateTimeFormatter.formatTo(
> DateTimeFormatter.java:1746)
> at java.time.format.DateTimeFormatter.format(DateTimeFormatter.java:1720)
> at org.apache.nifi.web.api.dto.util.TimeAdapter.marshal(
> TimeAdapter.java:43)
> at org.apache.nifi.web.api.dto.util.TimeAdapter.marshal(
> TimeAdapter.java:33)
> at org.codehaus.jackson.xc.XmlAdapterJsonSerializer.serialize(
> XmlAdapterJsonSerializer.java:38)
> at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(
> BeanPropertyWriter.java:446)
> at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(
> BeanSerializerBase.java:150)
> at org.codehaus.jackson.map.ser.BeanSerializer.serialize(
> BeanSerializer.java:112)
> at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(
> BeanPropertyWriter.java:446)
> at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(
> BeanSerializerBase.java:150)
> at org.codehaus.jackson.map.ser.BeanSerializer.serialize(
> BeanSerializer.java:112)
> at org.codehaus.jackson.map.ser.std.CollectionSerializer.
> serializeContents(CollectionSerializer.java:72)
> 2017-09-26 13:28:59,804 ERROR [FileSystemRepository Workers Thread-3]
> org.apache.nifi.engine.FlowEngine A flow controller task execution
> stopped abnormally
> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
> heap space
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1150)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
>
> ----------------------
> *With 2G, I did not get an OOME, but another error:*
> 2017-09-26 11:40:52,823 ERROR [Provenance Repository Rollover Thread-1]
> o.a.n.p.PersistentProvenanceRepository
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
> NativeFSLock@/Users/tlou/nifi-1.3.0/provenance_repository/index-1506411423000/write.lock
> at org.apache.lucene.store.Lock.obtain(Lock.java:89)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
> at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
> at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
> at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1677)
> at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1265)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> ------------------------------------------
>
>
>
> On Mon, Sep 25, 2017 at 6:55 PM, Joe Witt <jo...@gmail.com> wrote:
>
>> Tian,
>>
>> Ok - and was this with the 512MB heap again?  Can you try with a 1GB
>> or 2GB heap and see if we're just looking at our minimum needs being
>> an issue or if we're looking at what sounds like a leak.
>>
>> Thanks
>>
>> On Mon, Sep 25, 2017 at 12:41 PM, Lou Tian <ti...@gmail.com>
>> wrote:
>> > Hi Joe,
>> >
>> > I tested with a simple flow file.
>> > Only 4 processors: HandleHttpRequest, RouteOnContent,
>> HandleHttpResponse and
>> > DebugFlow.
>> > I ran the test 3 times (10 minutes per run, at most 50 users).
>> > It worked fine for the first 2 runs; on the third run, I got the error.
>> >
>> > I copied part of the log file. Please check whether it helps identify
>> > the error.
>> >
>> > 2017-09-25 18:21:45,673 INFO [Provenance Maintenance Thread-2]
>> > o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers
>> > for events starting with ID 131158
>> >
>> > 2017-09-25 18:24:00,921 ERROR [FileSystemRepository Workers Thread-3]
>> > o.a.n.c.repository.FileSystemRepository Failed to handle destructable claims
>> > due to java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,921 ERROR [Flow Service Tasks Thread-1]
>> > org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Flow Service
>> > Tasks Thread-1,5,main]: java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,922 WARN [qtp574205748-107]
>> > o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
>> > org.eclipse.jetty.util.thread.QueuedThreadPool$2@1e3a5886 in
>> > qtp574205748{STARTED,8<=13<=200,i=4,q=0}
>> > 2017-09-25 18:24:00,923 INFO [Provenance Repository Rollover Thread-1]
>> > o.a.n.p.lucene.SimpleIndexManager Index Writer for
>> > ./provenance_repository/index-1506354574000 has been returned to Index
>> > Manager and is no longer in use. Closing Index Writer
>> > 2017-09-25 18:24:00,925 ERROR [qtp574205748-107] org.apache.nifi.NiFi An
>> > Unknown Error Occurred in Thread Thread[qtp574205748-107,5,main]:
>> > java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,929 INFO [pool-10-thread-1]
>> > o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile
>> > Repository
>> > 2017-09-25 18:24:00,929 ERROR [Flow Service Tasks Thread-1]
>> > org.apache.nifi.NiFi
>> > java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,928 ERROR [Listen to Bootstrap]
>> > org.apache.nifi.BootstrapListener Failed to process request from Bootstrap
>> > due to java.lang.OutOfMemoryError: Java heap space
>> > java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,929 WARN [NiFi Web Server-215]
>> > org.eclipse.jetty.server.HttpChannel /nifi-api/flow/controller/bulletins
>> > java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,930 ERROR [pool-30-thread-1] org.apache.nifi.NiFi An
>> > Unknown Error Occurred in Thread Thread[pool-30-thread-1,5,main]:
>> > java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,929 ERROR [Event-Driven Process Thread-3]
>> > org.apache.nifi.engine.FlowEngine A flow controller task execution
>> > stopped abnormally
>> > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>> > Java heap space
>> >    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>> >    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>> >    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>> >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>> >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> >    at java.lang.Thread.run(Thread.java:748)
>> > Caused by: java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,931 ERROR [Scheduler-1985086499]
>> > org.apache.nifi.NiFi An Unknown Error Occurred in Thread
>> > Thread[Scheduler-1985086499,5,main]: java.lang.OutOfMemoryError: Java heap space
>> > 2017-09-25 18:24:00,930 ERROR [Cleanup Archive for default]
>> > org.apache.nifi.engine.FlowEngine A flow controller task execution
>> > stopped abnormally
>> > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>> > Java heap space
>> >    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>> >    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>> >    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>> >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>> >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> >    at java.lang.Thread.run(Thread.java:748)
>> > Caused by: java.lang.OutOfMemoryError: Java heap space
>> >
>> >
>> >
>> >
>> > Kind Regards,
>> > Tian
>> >
>> > On Mon, Sep 25, 2017 at 4:02 PM, Lou Tian <ti...@gmail.com>
>> wrote:
>> >>
>> >> Hi Joe, Thanks for your reply.
>> >> I will try to do those tests. And update you with the results.
>> >>
>> >> On Mon, Sep 25, 2017 at 3:56 PM, Joe Witt <jo...@gmail.com> wrote:
>> >>>
>> >>> Tian
>> >>>
>> >>> The most common sources of memory leaks in custom processors are:
>> >>> 1) Loading large objects (the contents of a flowfile, for example) into
>> >>> memory through byte[], or using libraries that do this without
>> >>> realizing it.  Doing this in parallel makes the problem even more
>> >>> obvious.
>> >>> 2) Caching objects in memory without bounding the cache, or not
>> >>> sizing the JVM heap appropriately for your flow.
>> >>> 3) Pulling lots of flowfiles into a single session, or creating many in
>> >>> a single session.
>> >>>
>> >>> Try moving to a 1GB heap and see if the problem still happens.  Is it
>> >>> as fast?  Does the problem go away?  Try 2GB if needed.  After that,
>> >>> suspect a leak.
>> >>>
>> >>> We don't have a benchmarking unit-test mechanism.
>> >>>
>> >>> Thanks
>> >>>
>> >>> On Mon, Sep 25, 2017 at 9:45 AM, Lou Tian <ti...@gmail.com>
>> wrote:
>> >>> > Hi Joe,
>> >>> >
>> >>> > 1. I will build a simple flow without our customised processor to
>> >>> > test again.
>> >>> >     It is a good test idea. We saw the OOME under HandleHttpRequest
>> >>> > and never thought about the others.
>> >>> >
>> >>> > 2. About our customised processor: we use lots of these customised
>> >>> > processors.
>> >>> >     Properties are dynamic. We fetch the properties with a REST call
>> >>> > and cache them.
>> >>> >     Sorry, I cannot show you the code.
>> >>> >
>> >>> > 3. We had unit tests for the customised processors.
>> >>> >    Is there a way to test for memory leaks in a unit test using
>> >>> > methods provided by NiFi?
>> >>> >
>> >>> > Thanks.
>> >>> >
>> >>> > On Mon, Sep 25, 2017 at 3:28 PM, Joe Witt <jo...@gmail.com>
>> wrote:
>> >>> >>
>> >>> >> Tian,
>> >>> >>
>> >>> >> Ok thanks.  I'd try removing your customized processor from the
>> >>> >> flow entirely and running your tests.  This will give you a sense
>> of
>> >>> >> base nifi and the stock processors.  Once you're comfortable with
>> that
>> >>> >> then add your processor in.
>> >>> >>
>> >>> >> I say this because if your custom processor is using up the heap we
>> >>> >> will see OOME in various places.  That it shows up in the core
>> >>> >> framework code, for example, does not mean that is the cause.
>> >>> >>
>> >>> >> Does your custom processor hold anything in class level variables?
>> >>> >> Does it open a session and keep accumulating flowfiles?  If you can
>> >>> >> talk more about what it is doing or show a link to the code we
>> could
>> >>> >> quickly assess that.
>> >>> >>
>> >>> >> Thanks
>> >>> >>
>> >>> >> On Mon, Sep 25, 2017 at 9:24 AM, Lou Tian <ti...@gmail.com>
>> >>> >> wrote:
>> >>> >> > 1. The HandleHttpRequest processor gets the message.
>> >>> >> > 2. The message routes to other processors based on its attributes.
>> >>> >> > 3. We have our customised processor to process the message.
>> >>> >> > 4. Then the message is redirected to HandleHttpResponse.
>> >>> >> >
>> >>> >> > On Mon, Sep 25, 2017 at 3:20 PM, Joe Witt <jo...@gmail.com>
>> >>> >> > wrote:
>> >>> >> >>
>> >>> >> >> What is the flow doing in between the request/response portion?
>> >>> >> >> Please share more details about the configuration overall.
>> >>> >> >>
>> >>> >> >> Thanks
>> >>> >> >>
>> >>> >> >> On Mon, Sep 25, 2017 at 9:16 AM, Lou Tian <
>> tian.lou.293@gmail.com>
>> >>> >> >> wrote:
>> >>> >> >> > Hi Joe,
>> >>> >> >> >
>> >>> >> >> > java version: 1.8.0_121
>> >>> >> >> > heap size:
>> >>> >> >> > # JVM memory settings
>> >>> >> >> > java.arg.2=-Xms512m
>> >>> >> >> > java.arg.3=-Xmx512m
>> >>> >> >> > nifi version: 1.3.0
>> >>> >> >> >
>> >>> >> >> > Also, we put Nifi in the Docker.
>> >>> >> >> >
>> >>> >> >> > Kind Regards,
>> >>> >> >> > Tian
>> >>> >> >> >
>> >>> >> >> > On Mon, Sep 25, 2017 at 2:39 PM, Joe Witt <joe.witt@gmail.com
>> >
>> >>> >> >> > wrote:
>> >>> >> >> >>
>> >>> >> >> >> Tian,
>> >>> >> >> >>
>> >>> >> >> >> Please provide information on the JRE being used (java
>> -version)
>> >>> >> >> >> and
>> >>> >> >> >> the environment configuration.  How large is your heap?  This
>> >>> >> >> >> can be
>> >>> >> >> >> found in conf/bootstrap.conf.  What version of nifi are you
>> >>> >> >> >> using?
>> >>> >> >> >>
>> >>> >> >> >> Thanks
>> >>> >> >> >>
>> >
>>
>
>
>
> --
> Kind Regards,
>
> Tian Lou
>
>

Re: Memory leak for HandleHttpRequest processor?

Posted by Lou Tian <ti...@gmail.com>.
Hi Joe,

Yes, that one is for the 512M.

1. I did several tests; the results are in the table.
    The error trace is different every time.
    I doubt the error logs are really useful, but I copied some here anyway.

[image: Inline image 1]

2. I isolated the performance test into a small project at
https://github.com/lou78/TestPerformance.git .
    The FlowFile template is also inside the project, in case you need
to reproduce it.
    Currently the test sends 50 requests/second and lasts for 10 minutes.
    You can change it in BasicSimulation.scala.

3. *Question*: I'd like to test whether our flow can process a given number
of messages in a given time.
     Do you have suggestions for doing the flow performance test?

Thanks.
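As an illustration of the kind of closed-loop HTTP load test Gatling runs (not the project's actual simulation), here is a minimal stdlib-Python sketch; the URL and payload below are placeholders, and the demo at the bottom only exercises the latency-summary helper on synthetic numbers:

```python
import math
import threading
import time
import urllib.request

def fire_requests(url, payload, n_requests, latencies):
    """POST `payload` to `url` n_requests times, recording each latency in seconds."""
    for _ in range(n_requests):
        start = time.perf_counter()
        req = urllib.request.Request(url, data=payload, method="POST")
        with urllib.request.urlopen(req, timeout=10) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)

def run_load(url, payload, users=50, requests_per_user=10):
    """Run `users` concurrent client threads against the flow's HTTP endpoint."""
    latencies = []
    threads = [
        threading.Thread(target=fire_requests,
                         args=(url, payload, requests_per_user, latencies))
        for _ in range(users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

def percentile(latencies, p):
    """Nearest-rank percentile (p in 0..100) of a list of latencies."""
    ordered = sorted(latencies)
    idx = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[idx]

# Demo on synthetic latencies; against a running flow you would instead call
# run_load("http://localhost:8011/", b'{"msg": "test"}'), where the port is
# whatever HandleHttpRequest's Listening Port is configured to.
sample = [0.010, 0.012, 0.015, 0.020, 0.200]
print("p50:", percentile(sample, 50), "p95:", percentile(sample, 95))
```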


**********Some Error LOG********

2017-09-26 13:27:18,081 ERROR [Timer-Driven Process Thread-2]
o.a.n.p.standard.HandleHttpResponse
HandleHttpResponse[id=bd0f5d7d-015e-1000-3402-763e31542bbd] Failed to
respond to HTTP request for
StandardFlowFileRecord[uuid=d9383563-d317-40ad-b449-37ea2806e7fe,claim=StandardContentClaim
[resourceClaim=StandardResourceClaim[id=1506425095749-29,
container=default, section=29], offset=781242,
length=418],offset=0,name=15431969122198,size=418] because FlowFile had an
'http.context.identifier' attribute of de0f382b-0bac-4496-a74f-32e6197f378e
but could not find an HTTP Response Object for this identifier
2017-09-26 13:28:58,259 ERROR [pool-10-thread-1] org.apache.nifi.NiFi An
Unknown Error Occurred in Thread Thread[pool-10-thread-1,5,main]:
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:58,261 ERROR [NiFi Web Server-307] org.apache.nifi.NiFi An
Unknown Error Occurred in Thread Thread[NiFi Web Server-307,5,main]:
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,800 WARN [qtp1908618024-320]
org.eclipse.jetty.server.HttpChannel /nifi/
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,800 WARN [qtp1908618024-378]
o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
org.eclipse.jetty.util.thread.QueuedThreadPool$2@577ddcc3 in
qtp1908618024{STARTED,8<=41<=200,i=22,q=0}
2017-09-26 13:28:59,800 ERROR [Timer-Driven Process Thread-2]
o.a.n.p.standard.HandleHttpResponse
HandleHttpResponse[id=bd0f5d7d-015e-1000-3402-763e31542bbd]
HandleHttpResponse[id=bd0f5d7d-015e-1000-3402-763e31542bbd] failed to
process due to java.lang.OutOfMemoryError: Java heap space; rolling back
session: {}
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,800 ERROR [pool-10-thread-1] org.apache.nifi.NiFi
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,801 ERROR [pool-36-thread-1] org.apache.nifi.NiFi An
Unknown Error Occurred in Thread Thread[pool-36-thread-1,5,main]:
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,801 ERROR [qtp1908618024-378] org.apache.nifi.NiFi An
Unknown Error Occurred in Thread Thread[qtp1908618024-378,5,main]:
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,801 ERROR [pool-36-thread-1] org.apache.nifi.NiFi
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,801 ERROR [qtp1908618024-378] org.apache.nifi.NiFi
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,805 ERROR [Provenance Maintenance Thread-2]
org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Provenance
Maintenance Thread-2,5,main]: java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,805 ERROR [Provenance Maintenance Thread-2]
org.apache.nifi.NiFi
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,805 WARN [qtp1908618024-364]
o.e.jetty.util.thread.QueuedThreadPool
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,805 ERROR [Timer-Driven Process Thread-5]
o.a.n.processors.standard.RouteOnContent
RouteOnContent[id=bd0f36de-015e-1000-2103-c1d81aaa36dc]
RouteOnContent[id=bd0f36de-015e-1000-2103-c1d81aaa36dc] failed to process
due to java.lang.OutOfMemoryError: Java heap space; rolling back session: {}
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,806 WARN [qtp1908618024-364]
o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
org.eclipse.jetty.util.thread.QueuedThreadPool$2@577ddcc3 in
qtp1908618024{STARTED,8<=41<=200,i=23,q=2}
2017-09-26 13:28:59,806 ERROR [NiFi Web Server-307] org.apache.nifi.NiFi
java.lang.OutOfMemoryError: Java heap space
2017-09-26 13:28:59,807 WARN [NiFi Web Server-374]
org.eclipse.jetty.server.HttpChannel
/nifi-api/flow/process-groups/bd0e9451-015e-1000-c9a7-99594722fe60
java.lang.OutOfMemoryError: Java heap space
at java.util.LinkedHashMap.newNode(LinkedHashMap.java:256)
at java.util.HashMap.putVal(HashMap.java:641)
at java.util.HashMap.put(HashMap.java:611)
at sun.util.resources.OpenListResourceBundle.loadLookup(OpenListResourceBundle.java:146)
at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(OpenListResourceBundle.java:128)
at sun.util.resources.OpenListResourceBundle.handleKeySet(OpenListResourceBundle.java:96)
at java.util.ResourceBundle.containsKey(ResourceBundle.java:1807)
at sun.util.locale.provider.LocaleResources.getTimeZoneNames(LocaleResources.java:263)
at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayNameArray(TimeZoneNameProviderImpl.java:124)
at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayName(TimeZoneNameProviderImpl.java:99)
at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getName(TimeZoneNameUtility.java:240)
at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:198)
at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:184)
at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:281)
at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:265)
at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayNamesImpl(TimeZoneNameUtility.java:166)
at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:107)
at java.time.format.DateTimeFormatterBuilder$ZoneTextPrinterParser.getDisplayName(DateTimeFormatterBuilder.java:3650)
at java.time.format.DateTimeFormatterBuilder$ZoneTextPrinterParser.format(DateTimeFormatterBuilder.java:3689)
at java.time.format.DateTimeFormatterBuilder$CompositePrinterParser.format(DateTimeFormatterBuilder.java:2179)
at java.time.format.DateTimeFormatter.formatTo(DateTimeFormatter.java:1746)
at java.time.format.DateTimeFormatter.format(DateTimeFormatter.java:1720)
at org.apache.nifi.web.api.dto.util.TimeAdapter.marshal(TimeAdapter.java:43)
at org.apache.nifi.web.api.dto.util.TimeAdapter.marshal(TimeAdapter.java:33)
at org.codehaus.jackson.xc.XmlAdapterJsonSerializer.serialize(XmlAdapterJsonSerializer.java:38)
at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:446)
at org.codehaus.jackson.map.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:150)
at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:112)
at org.codehaus.jackson.map.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:72)
2017-09-26 13:28:59,804 ERROR [FileSystemRepository Workers Thread-3]
org.apache.nifi.engine.FlowEngine A flow controller task execution stopped
abnormally
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
heap space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

----------------------
*With a 2 GB heap, I no longer get the OOME, but a different error:*
2017-09-26 11:40:52,823 ERROR [Provenance Repository Rollover Thread-1]
o.a.n.p.PersistentProvenanceRepository
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
NativeFSLock@/Users/tlou/nifi-1.3.0/provenance_repository/index-1506411423000/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1677)
at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1265)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
------------------------------------------
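For reference, the heap settings changed between runs live in conf/bootstrap.conf. A sketch of the 2 GB configuration, plus standard HotSpot flags to capture a heap dump for analyzing the suspected leak; the heap-dump flags and their argument indexes (13/14) are illustrative additions, not something already in the thread, and the indexes just need to be unused in your file:

```properties
# conf/bootstrap.conf -- JVM memory settings (2 GB run; the earlier runs used 512m)
java.arg.2=-Xms2g
java.arg.3=-Xmx2g

# Write a heap dump when an OutOfMemoryError occurs, for analysis in a
# profiler such as Eclipse MAT (argument indexes are illustrative)
java.arg.13=-XX:+HeapDumpOnOutOfMemoryError
java.arg.14=-XX:HeapDumpPath=./heap_dumps
```

Comparing dominator trees from dumps taken after the second and third Gatling runs should show which classes are accumulating.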



On Mon, Sep 25, 2017 at 6:55 PM, Joe Witt <jo...@gmail.com> wrote:

> Tian,
>
> Ok - and was this with the 512MB heap again?  Can you try with a 1GB
> or 2GB heap and see if we're just looking at our minimum needs being
> an issue or if we're looking at what sounds like a leak.
>
> Thanks
>
> On Mon, Sep 25, 2017 at 12:41 PM, Lou Tian <ti...@gmail.com> wrote:
> > Hi Joe,
> >
> > I tested with a simple flow file.
> > Only 4 processors: HandleHttpRequest, RouteOnContent, HandleHttpResponse
> and
> > DebugFlow.
> > I ran the test 3 times (10 minutes per run, with at most 50 users).
> > It worked fine for the first 2 runs, and on the third run I got the error.
> >
> > I copied part of the log file. Please check if it is helpful to identify
> the
> > error.
> >
> > 2017-09-25 18:21:45,673 INFO [Provenance Maintenance Thread-2]
> > o.a.n.p.PersistentProvenanceRepository Created new Provenance Event
> Writers
> > for events starting with ID 131158
> >
> > 2017-09-25 18:24:00,921 ERROR [FileSystemRepository Workers Thread-3]
> > o.a.n.c.repository.FileSystemRepository Failed to handle destructable
> claims
> > due to java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,921 ERROR [Flow Service Tasks Thread-1]
> > org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Flow
> Service
> > Tasks Thread-1,5,main]: java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,922 WARN [qtp574205748-107]
> > o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
> > org.eclipse.jetty.util.thread.QueuedThreadPool$2@1e3a5886 in
> > qtp574205748{STARTED,8<=13<=200,i=4,q=0}
> > 2017-09-25 18:24:00,923 INFO [Provenance Repository Rollover Thread-1]
> > o.a.n.p.lucene.SimpleIndexManager Index Writer for
> > ./provenance_repository/index-1506354574000 has been returned to Index
> > Manager and is no longer in use. Closing Index Writer
> > 2017-09-25 18:24:00,925 ERROR [qtp574205748-107] org.apache.nifi.NiFi An
> > Unknown Error Occurred in Thread Thread[qtp574205748-107,5,main]:
> > java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,929 INFO [pool-10-thread-1]
> > o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile
> > Repository
> > 2017-09-25 18:24:00,929 ERROR [Flow Service Tasks Thread-1]
> > org.apache.nifi.NiFi
> > java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,928 ERROR [Listen to Bootstrap]
> > org.apache.nifi.BootstrapListener Failed to process request from
> Bootstrap
> > due to java.lang.OutOfMemoryError: Java heap space
> > java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,929 WARN [NiFi Web Server-215]
> > org.eclipse.jetty.server.HttpChannel /nifi-api/flow/controller/bulletins
> > java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,930 ERROR [pool-30-thread-1] org.apache.nifi.NiFi An
> > Unknown Error Occurred in Thread Thread[pool-30-thread-1,5,main]:
> > java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,929 ERROR [Event-Driven Process Thread-3]
> > org.apache.nifi.engine.FlowEngine A flow controller task execution
> stopped
> > abnormally
> > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> Java
> > heap space
> >    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> >    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> >    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
> >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >    at java.lang.Thread.run(Thread.java:748)
> > Caused by: java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,931 ERROR [Scheduler-1985086499]
> org.apache.nifi.NiFi An
> > Unknown Error Occurred in Thread Thread[Scheduler-1985086499,5,main]:
> > java.lang.OutOfMemoryError: Java heap space
> > 2017-09-25 18:24:00,930 ERROR [Cleanup Archive for default]
> > org.apache.nifi.engine.FlowEngine A flow controller task execution
> stopped
> > abnormally
> > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> Java
> > heap space
> >    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> >    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> >    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
> >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >    at java.lang.Thread.run(Thread.java:748)
> > Caused by: java.lang.OutOfMemoryError: Java heap space
> >
> >
> >
> >
> > Kind Regards,
> > Tian
> >
> > On Mon, Sep 25, 2017 at 4:02 PM, Lou Tian <ti...@gmail.com>
> wrote:
> >>
> >> Hi Joe, Thanks for your reply.
> >> I will try to do those tests. And update you with the results.
> >>
> >> On Mon, Sep 25, 2017 at 3:56 PM, Joe Witt <jo...@gmail.com> wrote:
> >>>
> >>> Tian
> >>>
> >>> The most common sources of memory leaks in custom processors
> >>> 1) Loading large objects (contents of the flowfile, for example) into
> >>> memory through byte[] or doing so using libraries that do this and not
> >>> realizing it.  Doing this in parallel makes the problem even more
> >>> obvious.
> >>> 2) Caching objects in memory without bounds, or not sizing the JVM
> >>> heap appropriately for your flow.
> >>> 3) Pulling lots of flowfiles into a single session, or creating many
> >>> in a single session.
> >>>
> >>> Try moving to a 1GB heap and see if the problem still happens.  Is it
> >>> as fast?  Does the problem go away?  Try 2GB if needed.  After that,
> >>> suspect a leak.
> >>>
> >>> We don't have a benchmarking unit-test sort of mechanism.
> >>>
> >>> Thanks
> >>>
> >>> On Mon, Sep 25, 2017 at 9:45 AM, Lou Tian <ti...@gmail.com>
> wrote:
> >>> > Hi Joe,
> >>> >
> >>> > 1. I will build a simple flow without our customised processor to
> test
> >>> > again.
> >>> >     It is a good test idea. We saw the OOME is under the
> >>> > HandleHttpRequest,
> >>> > we never thought about others.
> >>> >
> >>> > 2. About our customised processor, we use lots of these customised
> >>> > processors.
> >>> >     Properties are dynamic. We fetch the properties by a REST call
> >>> > and cache them.
> >>> >     Sorry, I cannot show you the code.
> >>> >
> >>> > 3. We had the unit test for the customised processors.
> >>> >    Is there a way to test for a memory leak in a unit test using
> >>> > methods provided by nifi?
> >>> >
> >>> > Thanks.
> >>> >
> >>> > On Mon, Sep 25, 2017 at 3:28 PM, Joe Witt <jo...@gmail.com>
> wrote:
> >>> >>
> >>> >> Tian,
> >>> >>
> >>> >> Ok thanks.  I'd try removing your customized processor from the
> >>> >> flow entirely and running your tests.  This will give you a sense of
> >>> >> base nifi and the stock processors.  Once you're comfortable with
> that
> >>> >> then add your processor in.
> >>> >>
> >>> >> I say this because if your custom processor is using up the heap we
> >>> >> will see OOME in various places.  That it shows up in the core
> >>> >> framework code, for example, does not mean that is the cause.
> >>> >>
> >>> >> Does your custom processor hold anything in class level variables?
> >>> >> Does it open a session and keep accumulating flowfiles?  If you can
> >>> >> talk more about what it is doing or show a link to the code we could
> >>> >> quickly assess that.
> >>> >>
> >>> >> Thanks
> >>> >>
> >>> >> On Mon, Sep 25, 2017 at 9:24 AM, Lou Tian <ti...@gmail.com>
> >>> >> wrote:
> >>> >> > 1. The HandleHttpRequest processor gets the message.
> >>> >> > 2. The message routes to other processors based on the attribute.
> >>> >> > 3. We have our customised processor to process the message.
> >>> >> > 4. Then the message is redirected to the HandleHttpResponse.
> >>> >> >
> >>> >> > On Mon, Sep 25, 2017 at 3:20 PM, Joe Witt <jo...@gmail.com>
> >>> >> > wrote:
> >>> >> >>
> >>> >> >> What is the flow doing in between the request/response portion?
> >>> >> >> Please share more details about the configuration overall.
> >>> >> >>
> >>> >> >> Thanks
> >>> >> >>
> >>> >> >> On Mon, Sep 25, 2017 at 9:16 AM, Lou Tian <
> tian.lou.293@gmail.com>
> >>> >> >> wrote:
> >>> >> >> > Hi Joe,
> >>> >> >> >
> >>> >> >> > java version: 1.8.0_121
> >>> >> >> > heap size:
> >>> >> >> > # JVM memory settings
> >>> >> >> > java.arg.2=-Xms512m
> >>> >> >> > java.arg.3=-Xmx512m
> >>> >> >> > nifi version: 1.3.0
> >>> >> >> >
> >>> >> >> > Also, we put Nifi in the Docker.
> >>> >> >> >
> >>> >> >> > Kind Regards,
> >>> >> >> > Tian
> >>> >> >> >
> >>> >> >> > On Mon, Sep 25, 2017 at 2:39 PM, Joe Witt <jo...@gmail.com>
> >>> >> >> > wrote:
> >>> >> >> >>
> >>> >> >> >> Tian,
> >>> >> >> >>
> >>> >> >> >> Please provide information on the JRE being used (java
> -version)
> >>> >> >> >> and
> >>> >> >> >> the environment configuration.  How large is your heap?  This
> >>> >> >> >> can be
> >>> >> >> >> found in conf/bootstrap.conf.  What version of nifi are you
> >>> >> >> >> using?
> >>> >> >> >>
> >>> >> >> >> Thanks
> >>> >> >> >>
> >>> >> >> >> On Mon, Sep 25, 2017 at 8:29 AM, Lou Tian
> >>> >> >> >> <ti...@gmail.com>
> >>> >> >> >> wrote:
> >>> >> >> >> > Hi,
> >>> >> >> >> >
> >>> >> >> >> > We are doing performance test for our NIFI flow with
> Gatling.
> >>> >> >> >> > But
> >>> >> >> >> > after
> >>> >> >> >> > several run, the NIFI always has the OutOfMemory error. I
> did
> >>> >> >> >> > not
> >>> >> >> >> > find
> >>> >> >> >> > similar questions in the mailing list, if you already
> answered
> >>> >> >> >> > similar
> >>> >> >> >> > questions please let me know.
> >>> >> >> >> >
> >>> >> >> >> > Problem description:
> >>> >> >> >> > We have the Nifi flow. The normal flow works fine. To
> evaluate
> >>> >> >> >> > whether
> >>> >> >> >> > our
> >>> >> >> >> > flow can handle the load, we decided to do the performance
> >>> >> >> >> > test
> >>> >> >> >> > with
> >>> >> >> >> > Gatling.
> >>> >> >> >> >
> >>> >> >> >> > 1) We add the two processors HandleHttpRequest at the start
> of
> >>> >> >> >> > the
> >>> >> >> >> > flow
> >>> >> >> >> > and
> >>> >> >> >> > HandleHttpResponse at the end of the flow. So our nifi is
> like
> >>> >> >> >> > a
> >>> >> >> >> > webservice
> >>> >> >> >> > and Gatling will evaluate the response time.  2) Then  we
> >>> >> >> >> > continuously
> >>> >> >> >> > push
> >>> >> >> >> > messages to HandleHttpRequest processor.
> >>> >> >> >> >
> >>> >> >> >> > Problem:
> >>> >> >> >> > Nifi can only handle two runs. Then the third time, it
> failed
> >>> >> >> >> > and
> >>> >> >> >> > we
> >>> >> >> >> > have to
> >>> >> >> >> > restart the NIFI. I copied some error log here.
> >>> >> >> >> >
> >>> >> >> >> >>  o.a.n.p.standard.HandleHttpRequest
> HandleHttpRequest[id=**]
> >>> >> >> >> >> HandleHttpRequest[id=**] failed to process session due to
> >>> >> >> >> >> java.lang.OutOfMemoryError: Java heap space: {}
> >>> >> >> >> >> o.a.n.p.standard.HandleHttpRequest
> HandleHttpRequest[id=**]
> >>> >> >> >> >> HandleHttpRequest[id=**] failed to process session due to
> >>> >> >> >> >> java.lang.OutOfMemoryError: Java heap space: {}
> >>> >> >> >> >> java.lang.OutOfMemoryError: Java heap space
> >>> >> >> >> >> at java.util.HashMap.values(HashMap.java:958)
> >>> >> >> >> >> at org.apache.nifi.controller.repository.StandardProcessSession.resetWriteClaims(StandardProcessSession.java:2720)
> >>> >> >> >> >> at org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:213)
> >>> >> >> >> >> at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:318)
> >>> >> >> >> >> at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
> >>> >> >> >> >> at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
> >>> >> >> >> >> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> >>> >> >> >> >> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> >>> >> >> >> >> at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
> >>> >> >> >> >> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >>> >> >> >> >> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> >>> >> >> >> >> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> >>> >> >> >> >> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> >>> >> >> >> >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >>> >> >> >> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >>> >> >> >> >> at java.lang.Thread.run(Thread.java:748)
> >>> >> >> >> >
> >>> >> >> >> >
> >>> >> >> >> > So our final questions:
> >>> >> >> >> > 1. Do you think it is the HandleHttpRequest processor's
> >>> >> >> >> > problem? Or is there something wrong in our configuration?
> >>> >> >> >> > Anything we can do to avoid such a problem?
> >>> >> >> >> > 2. If it's the processor, do you plan to fix it in a coming
> >>> >> >> >> > version?
> >>> >> >> >> >
> >>> >> >> >> > Thank you so much for your reply.
> >>> >> >> >> >
> >>> >> >> >> > Kind Regards,
> >>> >> >> >> > Tian
> >>> >> >> >> >
> >>> >> >> >
> >>> >> >> >
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > --
> >>> >> >> > Kind Regards,
> >>> >> >> >
> >>> >> >> > Tian Lou
> >>> >> >> >
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> > --
> >>> >> > Kind Regards,
> >>> >> >
> >>> >> > Tian Lou
> >>> >> >
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Kind Regards,
> >>> >
> >>> > Tian Lou
> >>> >
> >>
> >>
> >>
> >>
> >> --
> >> Kind Regards,
> >>
> >> Tian Lou
> >>
> >
> >
> >
> > --
> > Kind Regards,
> >
> > Tian Lou
> >
>



-- 
Kind Regards,

Tian Lou

Re: Memory leak for HandleHttpRequest processor?

Posted by Joe Witt <jo...@gmail.com>.
Tian,

Ok - and was this with the 512MB heap again?  Can you try with a 1GB
or 2GB heap and see if we're just looking at our minimum needs being
an issue or if we're looking at what sounds like a leak.

Thanks

>>> >
>>> > Tian Lou
>>> >
>>
>>
>>
>>
>> --
>> Kind Regards,
>>
>> Tian Lou
>>
>
>
>
> --
> Kind Regards,
>
> Tian Lou
>

Re: Memory leak for HandleHttpRequest processor?

Posted by Lou Tian <ti...@gmail.com>.
Hi Joe,

I tested with a simple flow of only four processors: HandleHttpRequest,
RouteOnContent, HandleHttpResponse, and DebugFlow.
I ran the test 3 times (10 minutes per run, at most 50 concurrent users).
It worked fine for the first 2 runs, but on the third run I got the error.
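
For context, the setup can be sketched in miniature with only the JDK: a local
HTTP endpoint standing in for the HandleHttpRequest/HandleHttpResponse pair,
and a loop pushing messages at it and counting successful responses. This is
an illustrative sketch, not Gatling and not the actual flow; the class name
and request body are made up.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Miniature stand-in for the load test: a local HTTP server plays the role
// of the HandleHttpRequest/HandleHttpResponse flow, and the loop plays the
// role of the load generator.
public class MiniLoadTest {

    public static int fire(int requests) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
            server.createContext("/", exchange -> {
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            try {
                HttpClient client = HttpClient.newHttpClient();
                URI uri = URI.create("http://localhost:"
                        + server.getAddress().getPort() + "/");
                int ok = 0;
                for (int i = 0; i < requests; i++) {
                    HttpRequest req = HttpRequest.newBuilder(uri)
                            .POST(HttpRequest.BodyPublishers.ofString("{\"msg\":" + i + "}"))
                            .build();
                    HttpResponse<String> resp =
                            client.send(req, HttpResponse.BodyHandlers.ofString());
                    if (resp.statusCode() == 200) {
                        ok++;
                    }
                }
                return ok;
            } finally {
                server.stop(0);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```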

I copied part of the log file below. Please check whether it helps identify
the error.

2017-09-25 18:21:45,673 INFO [Provenance Maintenance Thread-2]
o.a.n.p.PersistentProvenanceRepository Created new Provenance Event
Writers for events starting with ID 131158

2017-09-25 18:24:00,921 ERROR [FileSystemRepository Workers Thread-3]
o.a.n.c.repository.FileSystemRepository Failed to handle destructable
claims due to java.lang.OutOfMemoryError: Java heap space
2017-09-25 18:24:00,921 ERROR [Flow Service Tasks Thread-1]
org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Flow
Service Tasks Thread-1,5,main]: java.lang.OutOfMemoryError: Java heap
space
2017-09-25 18:24:00,922 WARN [qtp574205748-107]
o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
org.eclipse.jetty.util.thread.QueuedThreadPool$2@1e3a5886 in
qtp574205748{STARTED,8<=13<=200,i=4,q=0}
2017-09-25 18:24:00,923 INFO [Provenance Repository Rollover Thread-1]
o.a.n.p.lucene.SimpleIndexManager Index Writer for
./provenance_repository/index-1506354574000 has been returned to Index
Manager and is no longer in use. Closing Index Writer
2017-09-25 18:24:00,925 ERROR [qtp574205748-107] org.apache.nifi.NiFi
An Unknown Error Occurred in Thread Thread[qtp574205748-107,5,main]:
java.lang.OutOfMemoryError: Java heap space
2017-09-25 18:24:00,929 INFO [pool-10-thread-1]
o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of
FlowFile Repository
2017-09-25 18:24:00,929 ERROR [Flow Service Tasks Thread-1]
org.apache.nifi.NiFi
java.lang.OutOfMemoryError: Java heap space
2017-09-25 18:24:00,928 ERROR [Listen to Bootstrap]
org.apache.nifi.BootstrapListener Failed to process request from
Bootstrap due to java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
2017-09-25 18:24:00,929 WARN [NiFi Web Server-215]
org.eclipse.jetty.server.HttpChannel
/nifi-api/flow/controller/bulletins
java.lang.OutOfMemoryError: Java heap space
2017-09-25 18:24:00,930 ERROR [pool-30-thread-1] org.apache.nifi.NiFi
An Unknown Error Occurred in Thread Thread[pool-30-thread-1,5,main]:
java.lang.OutOfMemoryError: Java heap space
2017-09-25 18:24:00,929 ERROR [Event-Driven Process Thread-3]
org.apache.nifi.engine.FlowEngine A flow controller task execution
stopped abnormally
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
Java heap space
   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
   at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
2017-09-25 18:24:00,931 ERROR [Scheduler-1985086499]
org.apache.nifi.NiFi An Unknown Error Occurred in Thread
Thread[Scheduler-1985086499,5,main]: java.lang.OutOfMemoryError: Java
heap space
2017-09-25 18:24:00,930 ERROR [Cleanup Archive for default]
org.apache.nifi.engine.FlowEngine A flow controller task execution
stopped abnormally
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
Java heap space
   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
   at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space




Kind Regards,
Tian

On Mon, Sep 25, 2017 at 4:02 PM, Lou Tian <ti...@gmail.com> wrote:

> Hi Joe, Thanks for your reply.
> I will try to do those tests. And update you with the results.
>
> On Mon, Sep 25, 2017 at 3:56 PM, Joe Witt <jo...@gmail.com> wrote:
>
>> Tian
>>
>> The most common sources of memory leaks in custom processors
>> 1) Loading large objects (contents of the flowfile, for example) into
>> memory through byte[] or doing so using libraries that do this and not
>> realizing it.  Doing this in parallel makes the problem even more
>> obvious.
>> 2) Caching objects in memory and not providing bounds on that or not
>> sizing the JVM Heap appropriate to your flow.
>> 3) Pull in lots of flowfiles to a single session or creating many in a
>> single session.
>>
>> Try moving to a 1GB heap and see if the problem still happens.  Is it
>> as fast?  Does it not happen.  Try 2GB if needed.  After that suspect
>> a leak.
>>
>> We dont have a benchmarking unit test sort of mechanism.
>>
>> Thanks
>>
>> On Mon, Sep 25, 2017 at 9:45 AM, Lou Tian <ti...@gmail.com> wrote:
>> > Hi Joe,
>> >
>> > 1. I will build a simple flow without our customised processor to test
>> > again.
>> >     It is a good test idea. We saw the OOME is under the HandleHttpRequest,
>> > we never thought about others.
>> >
>> > 2. About our customised processor, we use lots of these customised
>> > processors.
>> >     Properties are dynamic. We fetch the properties by a rest call and
>> > cached it.
>> >     Sorry, I cannot show you the code.
>> >
>> > 3. We had the unit test for the customised processors.
>> >    Is there a way to test the memory leak in unit test using some given
>> > methods from nifi?
>> >
>> > Thanks.
>> >
>> > On Mon, Sep 25, 2017 at 3:28 PM, Joe Witt <jo...@gmail.com> wrote:
>> >>
>> >> Tian,
>> >>
>> >> Ok thanks.  I'd try to removing your customized processor from the
>> >> flow entirely and running your tests.  This will give you a sense of
>> >> base nifi and the stock processors.  Once you're comfortable with that
>> >> then add your processor in.
>> >>
>> >> I say this because if your custom processor is using up the heap we
>> >> will see OOME in various places.  That it shows up in the core
>> >> framework code, for example, does not mean that is the cause.
>> >>
>> >> Does your custom processor hold anything in class level variables?
>> >> Does it open a session and keep accumulating flowfiles?  If you can
>> >> talk more about what it is doing or show a link to the code we could
>> >> quickly assess that.
>> >>
>> >> Thanks
>> >>
>> >> On Mon, Sep 25, 2017 at 9:24 AM, Lou Tian <ti...@gmail.com> wrote:
>> >> > 1. The HandleHttpRequest Processor get the message.
>> >> > 2. The message route to other processors based on the attribute.
>> >> > 3. We have our customised processor to process the message.
>> >> > 4. Then message would be redirected to the HandleHttpResponse.
>> >> >
>> >> >> On Mon, Sep 25, 2017 at 3:20 PM, Joe Witt <jo...@gmail.com> wrote:
>> >> >>
>> >> >> What is the flow doing in between the request/response portion?
>> >> >> Please share more details about the configuration overall.
>> >> >>
>> >> >> Thanks
>> >> >>
>> >> >> On Mon, Sep 25, 2017 at 9:16 AM, Lou Tian <ti...@gmail.com>
>> >> >> wrote:
>> >> >> > Hi Joe,
>> >> >> >
>> >> >> > java version: 1.8.0_121
>> >> >> > heap size:
>> >> >> > # JVM memory settings
>> >> >> > java.arg.2=-Xms512m
>> >> >> > java.arg.3=-Xmx512m
>> >> >> > nifi version: 1.3.0
>> >> >> >
>> >> >> > Also, we put Nifi in the Docker.
>> >> >> >
>> >> >> > Kind Regrads,
>> >> >> > Tian
>> >> >> >
>> >> >> >> On Mon, Sep 25, 2017 at 2:39 PM, Joe Witt <jo...@gmail.com> wrote:
>> >> >> >>
>> >> >> >> Tian,
>> >> >> >>
>> >> >> >> Please provide information on the JRE being used (java -version) and
>> >> >> >> the environment configuration.  How large is your heap?  This can be
>> >> >> >> found in conf/bootstrap.conf.  What version of nifi are you using?
>> >> >> >>
>> >> >> >> Thanks
>> >> >> >>


-- 
Kind Regards,

Tian Lou

Re: Memory leak for HandleHttpRequest processor?

Posted by Lou Tian <ti...@gmail.com>.
Hi Joe, thanks for your reply.
I will try those tests and update you with the results.




-- 
Kind Regards,

Tian Lou

Re: Memory leak for HandleHttpRequest processor?

Posted by Joe Witt <jo...@gmail.com>.
Tian

The most common sources of memory leaks in custom processors are:
1) Loading large objects (the contents of a flowfile, for example) into
memory through byte[], or doing so through libraries that buffer this way
without realizing it.  Doing it in parallel makes the problem even more
obvious.
2) Caching objects in memory without bounding the cache, or not sizing
the JVM heap appropriately for your flow.
3) Pulling lots of flowfiles into a single session, or creating many in a
single session.
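
As an illustration of point 2 (a generic sketch, not NiFi code and not from
this thread): an unbounded Map used as a processor-level cache grows with
every distinct key, while a bounded LRU map caps what the cache can retain.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded LRU cache: once maxEntries is exceeded, the least-recently-used
// entry is evicted, so the cache can never pin more than maxEntries values
// on the heap. An unbounded HashMap in the same role grows without limit.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, i.e. LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict once the bound is exceeded
    }
}
```

A class-level cache with a size bound like this (or an external cache library
with an explicit maximum) avoids the unbounded-growth variant of problem 2.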

Try moving to a 1GB heap and see whether the problem still happens and
whether throughput stays the same.  Try 2GB if needed.  If the error still
occurs after that, suspect a leak.
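
Concretely, the heap is set in conf/bootstrap.conf; the instance in this
thread was running with 512m, and bumping it to 1GB would look like this
(values illustrative):

```
# JVM memory settings
java.arg.2=-Xms1024m
java.arg.3=-Xmx1024m
```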

We don't have a benchmarking unit-test mechanism.
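
One crude, framework-independent workaround is to run the logic under test
many times and compare used heap after forced GCs. This is an illustrative
sketch with made-up names, not a NiFi facility, and GC-based measurements
are approximate, so any threshold you assert against must be generous.

```java
// Crude, GC-based heap-growth probe. System.gc() is only a hint to the JVM,
// so results are approximate; use it for order-of-magnitude checks only.
public class HeapGrowthCheck {

    // Force a few GCs and report the heap currently in use.
    public static long usedHeapAfterGc() {
        for (int i = 0; i < 3; i++) {
            System.gc();
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Run the workload many times and return how much used heap grew.
    public static long growthOver(Runnable work, int iterations) {
        long before = usedHeapAfterGc();
        for (int i = 0; i < iterations; i++) {
            work.run();
        }
        long after = usedHeapAfterGc();
        return after - before;
    }
}
```

A unit test could then assert that growthOver(...) for the per-flowfile work
stays under some generous bound after many iterations.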

Thanks

On Mon, Sep 25, 2017 at 9:45 AM, Lou Tian <ti...@gmail.com> wrote:
> Hi Joe,
>
> 1. I will build a simple flow without our customised processor to test
> again.
>     It is a good test idea. We saw the OOME is under the HandleHttpRequest,
> we never thought about others.
>
> 2. About our customised processor, we use lots of these customised
> processors.
>     Properties are dynamic. We fetch the properties by a rest call and
> cached it.
>     Sorry, I cannot show you the code.
>
> 3. We had the unit test for the customised processors.
>    Is there a way to test the memory leak in unit test using some given
> methods from nifi?
>
> Thanks.
>
> On Mon, Sep 25, 2017 at 3:28 PM, Joe Witt <jo...@gmail.com> wrote:
>>
>> Tian,
>>
>> Ok thanks.  I'd try to removing your customized processor from the
>> flow entirely and running your tests.  This will give you a sense of
>> base nifi and the stock processors.  Once you're comfortable with that
>> then add your processor in.
>>
>> I say this because if your custom processor is using up the heap we
>> will see OOME in various places.  That it shows up in the core
>> framework code, for example, does not mean that is the cause.
>>
>> Does your custom processor hold anything in class level variables?
>> Does it open a session and keep accumulating flowfiles?  If you can
>> talk more about what it is doing or show a link to the code we could
>> quickly assess that.
>>
>> Thanks
>>
>> On Mon, Sep 25, 2017 at 9:24 AM, Lou Tian <ti...@gmail.com> wrote:
>> > 1. The HandleHttpRequest Processor get the message.
>> > 2. The message route to other processors based on the attribute.
>> > 3. We have our customised processor to process the message.
>> > 4. Then message would be redirected to the HandleHttpResponse.
>> >
>> > On Mon, Sep 25, 2017 at 3:20 PM, Joe Witt <jo...@gmail.com> wrote:
>> >>
>> >> What is the flow doing in between the request/response portion?
>> >> Please share more details about the configuration overall.
>> >>
>> >> Thanks
>> >>
>> >> On Mon, Sep 25, 2017 at 9:16 AM, Lou Tian <ti...@gmail.com>
>> >> wrote:
>> >> > Hi Joe,
>> >> >
>> >> > java version: 1.8.0_121
>> >> > heap size:
>> >> > # JVM memory settings
>> >> > java.arg.2=-Xms512m
>> >> > java.arg.3=-Xmx512m
>> >> > nifi version: 1.3.0
>> >> >
>> >> > Also, we put Nifi in the Docker.
>> >> >
>> >> > Kind Regrads,
>> >> > Tian
>> >> >
>> >> > On Mon, Sep 25, 2017 at 2:39 PM, Joe Witt <jo...@gmail.com> wrote:
>> >> >>
>> >> >> Tian,
>> >> >>
>> >> >> Please provide information on the JRE being used (java -version) and
>> >> >> the environment configuration.  How large is your heap?  This can be
>> >> >> found in conf/bootstrap.conf.  What version of nifi are you using?
>> >> >>
>> >> >> Thanks
>> >> >>
>> >> >> On Mon, Sep 25, 2017 at 8:29 AM, Lou Tian <ti...@gmail.com>
>> >> >> wrote:
>> >> >> > Hi,
>> >> >> >
>> >> >> > We are doing performance test for our NIFI flow with Gatling. But
>> >> >> > after
>> >> >> > several run, the NIFI always has the OutOfMemory error. I did not
>> >> >> > find
>> >> >> > similar questions in the mailing list, if you already answered
>> >> >> > similar
>> >> >> > questions please let me know.
>> >> >> >
>> >> >> > Problem description:
>> >> >> > We have the Nifi flow. The normal flow works fine. To evaluate
>> >> >> > whether
>> >> >> > our
>> >> >> > flow can handle the load, we decided to do the performance test
>> >> >> > with
>> >> >> > Gatling.
>> >> >> >
>> >> >> > 1) We add the two processors HandleHttpRequest at the start of the
>> >> >> > flow
>> >> >> > and
>> >> >> > HandleHttpResponse at the end of the flow. So our nifi is like a
>> >> >> > webservice
>> >> >> > and Gatling will evaluate the response time.  2) Then  we
>> >> >> > continuously
>> >> >> > push
>> >> >> > messages to HandleHttpRequest processor.
>> >> >> >
>> >> >> > Problem:
>> >> >> > Nifi can only handle two runs. Then the third time, it failed and
>> >> >> > we
>> >> >> > have to
>> >> >> > restart the NIFI. I copied some error log here.
>> >> >> >
>> >> >> >>  o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
>> >> >> >> HandleHttpRequest[id=**] failed to process session due to
>> >> >> >> java.lang.OutOfMemoryError: Java heap space: {}
>> >> >> >> o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
>> >> >> >> HandleHttpRequest[id=**] failed to process session due to
>> >> >> >> java.lang.OutOfMemoryError: Java heap space: {}
>> >> >> >> java.lang.OutOfMemoryError: Java heap space
>> >> >> >> at java.util.HashMap.values(HashMap.java:958)
>> >> >> >> at
>> >> >> >> org.apache.nifi.controller.repository.StandardProcessSession.resetWriteClaims(StandardProcessSession.java:2720)
>> >> >> >> at org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:213)
>> >> >> >> at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:318)
>> >> >> >> at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>> >> >> >> at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
>> >> >> >> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>> >> >> >> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>> >> >> >> at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>> >> >> >> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> >> >> >> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>> >> >> >> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>> >> >> >> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>> >> >> >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> >> >> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> >> >> >> at java.lang.Thread.run(Thread.java:748)
>> >> >> >
>> >> >> >
>> >> >> > So our final questions:
>> >> >> > 1. Do you think it is the HandleHttpRequest processors problem? Or
>> >> >> > there
>> >> >> > is
>> >> >> > something wrong in our configuration. Anything we can do to avoid
>> >> >> > such
>> >> >> > problem?
>> >> >> > 2. If it's the processor, will you plan to fix it in the coming
>> >> >> > version?
>> >> >> >
>> >> >> > Thank you so much for your reply.
>> >> >> >
>> >> >> > Kind Regards,
>> >> >> > Tian
>> >> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Kind Regards,
>> >> >
>> >> > Tian Lou
>> >> >

Re: Memory leak for HandleHttpRequest processor?

Posted by Lou Tian <ti...@gmail.com>.
Hi Joe,

1. I will build a simple flow without our customised processors and test
again. That is a good idea; since the OOME appeared under
HandleHttpRequest, we never suspected anything else.

2. About our customised processors: we use a lot of them.
   Their properties are dynamic; we fetch them with a REST call and
cache them. Sorry, I cannot show you the code.

3. We have unit tests for the customised processors.
   Is there a way to test for memory leaks in a unit test using methods
provided by NiFi?

Thanks.
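[Editor's note: a crude leak check can be scripted in a plain unit test even
without NiFi-specific support. The sketch below is illustrative only:
processOnce() is a hypothetical stand-in for driving a processor (in a real
test it would enqueue a FlowFile and call nifi-mock's TestRunner), and here
it deliberately simulates a processor that retains every payload.]

```java
import java.util.ArrayList;
import java.util.List;

public class CrudeLeakCheck {

    // Hypothetical stand-in for one invocation of a processor under test.
    // This version simulates a leak by retaining every payload it sees.
    static final List<byte[]> retainedByProcessor = new ArrayList<>();

    static void processOnce() {
        byte[] payload = new byte[16 * 1024];
        retainedByProcessor.add(payload); // the simulated leak
    }

    // Best-effort used-heap reading after requesting a GC.
    static long usedHeapBytes() {
        System.gc();
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        for (int i = 0; i < 2000; i++) {
            processOnce();
        }
        long growthMb = (usedHeapBytes() - before) / (1024 * 1024);
        // ~32 MB is still reachable, so used heap must have grown noticeably.
        System.out.println("heap growth after 2000 invocations: " + growthMb + " MB");
    }
}
```

A real leak test would assert that heap growth stays under some bound after
many invocations of the processor, accepting that System.gc() is only a hint.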




-- 
Kind Regards,

Tian Lou

Re: Memory leak for HandleHttpRequest processor?

Posted by Joe Witt <jo...@gmail.com>.
Tian,

Ok thanks.  I'd try removing your customized processor from the
flow entirely and running your tests.  This will give you a sense of
base NiFi and the stock processors.  Once you're comfortable with that,
add your processor back in.

I say this because if your custom processor is using up the heap we
will see OOME in various places.  That it shows up in the core
framework code, for example, does not mean that is the cause.

Does your custom processor hold anything in class level variables?
Does it open a session and keep accumulating flowfiles?  If you can
talk more about what it is doing or show a link to the code we could
quickly assess that.

Thanks
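[Editor's note: the class-level accumulation Joe asks about can be shown in
plain Java, without the NiFi API. The class and method names below are made
up for illustration; onMessage() stands in for per-FlowFile work.]

```java
import java.util.ArrayList;
import java.util.List;

// The anti-pattern: state in a class-level field survives every invocation,
// so memory use only ever grows.
class LeakyHandler {
    private final List<byte[]> buffered = new ArrayList<>();

    void onMessage(byte[] payload) {
        buffered.add(payload); // never cleared -> unbounded growth
    }

    int retainedCount() {
        return buffered.size();
    }
}

// The safe shape: only local state, eligible for GC when the method returns.
class SafeHandler {
    void onMessage(byte[] payload) {
        byte[] working = payload.clone();
        // ... transform and transfer, keeping nothing in fields ...
    }
}

public class AccumulationDemo {
    public static void main(String[] args) {
        LeakyHandler leaky = new LeakyHandler();
        for (int i = 0; i < 10_000; i++) {
            leaky.onMessage(new byte[1024]);
        }
        System.out.println(leaky.retainedCount()); // prints 10000
    }
}
```

The same applies to NiFi sessions: a processor that keeps pulling FlowFiles
into one session without committing holds them all on the heap at once.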


Re: Memory leak for HandleHttpRequest processor?

Posted by Lou Tian <ti...@gmail.com>.
1. The HandleHttpRequest processor receives the message.
2. The message is routed to other processors based on an attribute.
3. Our customised processors process the message.
4. The message is then redirected to the HandleHttpResponse processor.




-- 
Kind Regards,

Tian Lou

Re: Memory leak for HandleHttpRequest processor?

Posted by Joe Witt <jo...@gmail.com>.
What is the flow doing in between the request/response portion?
Please share more details about the configuration overall.

Thanks


Re: Memory leak for HandleHttpRequest processor?

Posted by Lou Tian <ti...@gmail.com>.
Hi Joe,

java version: 1.8.0_121
heap size:
# JVM memory settings
java.arg.2=-Xms512m
java.arg.3=-Xmx512m
nifi version: 1.3.0

Also, we run NiFi in Docker.

Kind Regards,
Tian
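[Editor's note: the two bootstrap.conf keys quoted above are the JVM heap
settings, and 512 MB is easy to exhaust under sustained load. An
illustrative larger setting follows; the values are placeholders, not a
recommendation from this thread.]

```properties
# conf/bootstrap.conf -- illustrative only
java.arg.2=-Xms2g
java.arg.3=-Xmx2g
```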




-- 
Kind Regards,

Tian Lou

Re: Memory leak for HandleHttpRequest processor?

Posted by Joe Witt <jo...@gmail.com>.
Tian,

Please provide information on the JRE being used (java -version) and
the environment configuration.  How large is your heap?  This can be
found in conf/bootstrap.conf.  What version of nifi are you using?

Thanks
