Posted to users@nifi.apache.org by "Madhire, Naveen" <Na...@capitalone.com> on 2015/11/10 23:28:37 UTC

Memory Issues on Split Text

Hi,

I am reading a 2 GB file from the local file system and putting the data into a Kafka topic.

Since GetFile creates only one flow file per file, I am using the SplitText processor to split the file into one flow file per line before inserting the data into the Kafka topic.
I am seeing a lot of “GC overhead limit exceeded” errors on the SplitText processor. I am running NiFi on a single Linux server with 16 GB of memory.
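
For what it's worth, I haven't touched the JVM settings, so NiFi is
running with whatever ships in conf/bootstrap.conf; if the defaults are
unchanged, the heap arguments would be something like:

    java.arg.2=-Xms512m
    java.arg.3=-Xmx512m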

Is this the right approach for reading the file and putting it into Kafka,
or is there a better one?

Thanks,
Naveen



Re: Memory Issues on Split Text

Posted by "Madhire, Naveen" <Na...@capitalone.com>.
Hey Joe, 

I am just testing a simple flow of reading the file from the local file
system and inserting it into a Kafka topic.

The flow is

GetFile -> SplitText (20,000-line split) -> SplitText (1-line split)
-> PutKafka


In total, 220K events were processed in 1 min 44 sec (roughly 2,100
events/sec), which I think is pretty good so far.

I've got NiFi running on a single m4.4xlarge EC2 instance, and I've not
changed any other NiFi settings. Is there anything else that needs to be
modified for the flow file, content, or provenance repos?
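
For context, the repositories are all on the defaults from
conf/nifi.properties, i.e. all three on the same disk; if I am reading
it right, something like:

    nifi.flowfile.repository.directory=./flowfile_repository
    nifi.content.repository.directory.default=./content_repository
    nifi.provenance.repository.directory.default=./provenance_repository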


Thanks,
Naveen

On 11/11/15, 12:31 PM, "Joe Witt" <jo...@gmail.com> wrote:

>Naveen,
>
>For throughput, can you state what the desired events/sec/node would be
>for you, and can you describe how the flowfile, content, and provenance
>repos are set up on the machine NiFi is running on?
>
>Thanks
>Joe


Re: Memory Issues on Split Text

Posted by Joe Witt <jo...@gmail.com>.
Naveen,

For throughput, can you state what the desired events/sec/node would be
for you, and can you describe how the flowfile, content, and provenance
repos are set up on the machine NiFi is running on?

Thanks
Joe

On Wed, Nov 11, 2015 at 1:21 PM, Madhire, Naveen
<Na...@capitalone.com> wrote:
> Thanks, Mark. The workaround of using an intermediate SplitText to split a
> few lines at a time works well; as you said, the throughput is not quite
> there, but I think it serves our purpose for now.

Re: Memory Issues on Split Text

Posted by "Madhire, Naveen" <Na...@capitalone.com>.
Thanks, Mark. The workaround of using an intermediate SplitText to split a few lines at a time works well; as you said, the throughput is not quite there, but I think it serves our purpose for now.




Re: Memory Issues on Split Text

Posted by Mark Payne <ma...@hotmail.com>.
Naveen,

There is a ticket [1] that will make this work more cleanly, so that we can use SplitText to split a large
file into millions of FlowFiles. Right now, as you noted, you will end up running out of memory. There are
a few possible solutions that you can use.

If you need to split each line into a separate FlowFile, the easiest way is to use two SplitText processors.
The first would be configured with a Line Split Count of, say, 10,000. Then, the "splits" relationship is routed
to a second SplitText processor with the Line Split Count set to 1. This prevents the processor from holding
millions of FlowFiles in memory. The only downside here is that if you create a FlowFile for every single
message, your throughput will not be quite as good.
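
Concretely, the two processors would be configured something like this
("Line Split Count" is the actual property name; the rest is just for
illustration):

    SplitText #1: Line Split Count = 10000   -> route "splits" to SplitText #2
    SplitText #2: Line Split Count = 1       -> route "splits" to PutKafka

For a file like yours, the first processor emits one FlowFile per 10,000
lines instead of one per line, and the second only ever holds 10,000
single-line FlowFiles in memory at a time.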

The next approach is to just send the entire 2 GB FlowFile to PutKafka and set the Message Delimiter to "\n".
This will send each line in the FlowFile to Kafka as a separate message. The downside here is that if you have
sent, say, 1 million messages to Kafka and then NiFi is restarted, it doesn't know that those 1 million messages
have been sent, so you will end up sending all of the data again and duplicating a lot of the messages.
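
With that approach the flow collapses to just GetFile -> PutKafka, with
PutKafka configured roughly like this (the broker and topic values here
are placeholders):

    Known Brokers     = localhost:9092
    Topic Name        = my-topic
    Message Delimiter = \n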

The third approach is a hybrid of the two. You can use SplitText to split the FlowFile into 10,000-line chunks.
Then, instead of sending to another SplitText, you can send the "splits" relationship to PutKafka with a Message
Delimiter of "\n". This way, you still get great throughput by not splitting each FlowFile into millions of
FlowFiles, but you avoid duplicating millions of messages (you'll duplicate at most 10,000 messages in this example).
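
To put rough numbers on it: if the 2 GB file averages, say, 200 bytes per
line, that is about 10 million lines, or about 1,000 splits of 10,000
lines each. A restart mid-transfer then re-sends at most the splits that
were in flight (on the order of 10,000 messages) rather than potentially
all 10 million.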

So you can use any of these approaches. You just have to consider the pros and cons of each and decide which
trade-offs you want to make.

Thanks
-Mark


[1] https://issues.apache.org/jira/browse/NIFI-1008




