Posted to dev@ant.apache.org by Antoine Levy Lambert <an...@gmx.de> on 2015/04/08 03:22:38 UTC

Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Hi,

I wonder whether we should also upgrade Ivy to use the latest HTTP client library?

Regards,

Antoine

On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <ji...@apache.org> wrote:

> 
>    [ https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468 ] 
> 
> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
> ------------------------------------------------------------
> 
> I would be happy to provide you with a project that will reproduce the issue. I can and will do that. 
> 
> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
> 
> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
> 
> 
> was (Author: qphase):
> I would be happy to provide you with a project that will reproduce the issue. I can and will do that.
> 
> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
> 
> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
> 
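For readers following along, here is a minimal, hypothetical Java sketch (not Ivy's actual code; the class and method names are invented) contrasting the two copy shapes Loren describes: a convenience copy that accumulates the payload in a growing in-memory buffer, and a copy that streams straight to the destination.

    // Hypothetical illustration only, not Ivy source.
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class CopyStrategies {

        // Convenience-style copy: the whole payload accumulates in memory, and
        // ByteArrayOutputStream roughly doubles (and re-copies) its backing array
        // each time it fills, so several copies of a large upload can be live at once.
        static byte[] copyIntoMemory(InputStream in) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
            return buffer.toByteArray(); // yet another full copy of the payload
        }

        // Streaming copy: memory use stays at one small chunk no matter how large
        // the payload is, because bytes go straight to the destination stream.
        static void copyDirectly(InputStream in, OutputStream out) throws IOException {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                out.write(chunk, 0, n);
            }
            out.flush();
        }
    }

Only the second shape keeps memory flat for 100-200MB artifacts, which is the point of the edited comment above.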


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org


Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Posted by Antoine Levy Lambert <an...@gmx.de>.
Hello,

Thanks to everyone involved for your input on this discussion.

So I will not attempt to make the use of httpclient compulsory.

I have just changed a file in the ant-site repository, ivy/sources/choose-distrib.html [1].

Afterwards, I tried to run the build file located here: [2]

I am getting an error:

bash-3.2$ ant generate-site
Buildfile: /Users/antoine/dev/asf/ant-site/ivy/build.xml

generate-site:
[xooki:generate] processing 22 source files...
processing 19 source files...
 [generate] processing /Users/antoine/dev/asf/ant-site/ivy/sources/choose-distrib.html...
 [generate] script error in file /Users/antoine/dev/asf/ant-site/xooki/xooki.js : sun.org.mozilla.javascript.internal.EcmaError: TypeError: Cannot read property "id" from null (/Users/antoine/dev/asf/ant-site/xooki/xooki.js#1041) in /Users/antoine/dev/asf/ant-site/xooki/xooki.js at line number 1041
 [generate] Result: 10
    [print] processing /Users/antoine/dev/asf/ant-site/ivy/sources/history/latest-milestone/index.html...
    [print] index.html is not a valid xooki source. ignored.

BUILD SUCCESSFUL
Total time: 7 seconds

I notice that choose-distrib is not regenerated.

Also, this choose-distrib.html had long lines; is that due to a constraint of xooki?

Regards,

Antoine

[1] http://svn.apache.org/viewvc/ant/site/ivy/sources/choose-distrib.html?view=log
[2] http://svn.apache.org/viewvc/ant/site/ivy/build.xml?view=co&revision=1635330&content-type=text%2Fplain


On Apr 9, 2015, at 11:20 AM, Loren Kratzke <LK...@blueorigin.com> wrote:

> The short-term fix would be documentation. Say it in clear language right next to the download link:
> 
>    "If you publish large artifacts then you must download Ivy+deps. 
>    Install commons httpclient, codec, and logging jars into ant/lib next to ivy jar."
> 
> Note that you need all three jars, not just httpclient. That detail is not documented anywhere that I know of.
> 
> That is what can be done now. Going forward, the options are as follows:
> 
>    1. Keep everything the same, consider the documentation as the solution.
>    2. Require httpclient jars to be installed.
>    3. Find a workaround for the buffering/authentication issues of HttpURLConnection.
>    4. Include necessary httpclient classes inside ivy.jar. 
> 
> Several options are available; each has its own merits.
> 
> L.K.
> 
> -----Original Message-----
> From: Maarten Coene [mailto:maarten_coene@yahoo.com.INVALID] 
> Sent: Thursday, April 09, 2015 7:51 AM
> To: Ant Developers List
> Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish
> 
> I'm not a fan of this proposal; I like that Ivy doesn't have any dependencies when using the standard resolvers.
> Perhaps it could be added to the documentation that if you use the URL resolver for large uploads you'll have to add httpclient to the classpath?
> 
> 
> Maarten
> 
> 
> 
> 
> ----- Original message -----
> From: Antoine Levy Lambert <an...@gmx.de>
> To: Ant Developers List <de...@ant.apache.org>
> Cc: 
> Sent: Thursday, April 9, 2015, 3:50
> Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish
> 
> Also, I wonder whether we should not make the use of httpclient with Ivy compulsory, since Loren says that the JDK's HttpURLConnection always copies the full file into a byte array when authentication is performed.
> 
> That would make the code simpler.
> 
> Regards,
> 
> Antoine
> 
> On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert <an...@gmx.de> wrote:
> 
>> Hi,
>> 
>> I wonder whether we should also upgrade Ivy to use the latest HTTP client library?
>> 
>> Regards,
>> 
>> Antoine
>> 
>> On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <ji...@apache.org> wrote:
>> 
>>> 
>>>  [ https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468 ]
>>> 
>>> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
>>> ------------------------------------------------------------
>>> 
>>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that. 
>>> 
>>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>>> 
>>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>>> 
>>> 
>>> was (Author: qphase):
>>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that.
>>> 
>>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>>> 
>>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>>> 
>> 
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
>> For additional commands, e-mail: dev-help@ant.apache.org
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org For additional commands, e-mail: dev-help@ant.apache.org
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
> For additional commands, e-mail: dev-help@ant.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org


RE: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Posted by Loren Kratzke <LK...@blueorigin.com>.
The short-term fix would be documentation. Say it in clear language right next to the download link:

    "If you publish large artifacts then you must download Ivy+deps. 
    Install commons httpclient, codec, and logging jars into ant/lib next to ivy jar."

Note that you need all three jars, not just httpclient. That detail is not documented anywhere that I know of.

That is what can be done now. Going forward, the options are as follows:

    1. Keep everything the same, consider the documentation as the solution.
    2. Require httpclient jars to be installed.
    3. Find a workaround for the buffering/authentication issues of HttpURLConnection.
    4. Include necessary httpclient classes inside ivy.jar. 

Several options are available; each has its own merits.
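As a rough illustration of option 3 above, here is a hedged sketch (assuming a plain PUT upload; the class and argument names are invented for this example) of how java.net.HttpURLConnection can be told to stream a request body instead of buffering it. The documented trade-off is that streaming mode disables the JDK's automatic retry for authentication challenges and redirects, so credentials would have to be supplied up front or not at all.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StreamingUpload {
        public static void main(String[] args) throws Exception {
            File artifact = new File(args[0]);   // e.g. the jar being published (hypothetical)
            URL target = new URL(args[1]);       // repository URL (hypothetical)

            HttpURLConnection conn = (HttpURLConnection) target.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            // Without a streaming mode the JDK buffers the entire body in memory so it
            // can replay it (for example after an authentication challenge). With it,
            // bytes go out as they are written, but the body can no longer be resent
            // transparently.
            conn.setFixedLengthStreamingMode(artifact.length()); // long overload needs Java 7+

            try (InputStream in = new FileInputStream(artifact);
                 OutputStream out = conn.getOutputStream()) {
                byte[] chunk = new byte[8192];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    out.write(chunk, 0, n);
                }
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }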

L.K.

-----Original Message-----
From: Maarten Coene [mailto:maarten_coene@yahoo.com.INVALID] 
Sent: Thursday, April 09, 2015 7:51 AM
To: Ant Developers List
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

I'm not a fan of this proposal; I like that Ivy doesn't have any dependencies when using the standard resolvers.
Perhaps it could be added to the documentation that if you use the URL resolver for large uploads you'll have to add httpclient to the classpath?


Maarten




----- Original message -----
From: Antoine Levy Lambert <an...@gmx.de>
To: Ant Developers List <de...@ant.apache.org>
Cc: 
Sent: Thursday, April 9, 2015, 3:50
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Also, I wonder whether we should not make the use of httpclient with Ivy compulsory, since Loren says that the JDK's HttpURLConnection always copies the full file into a byte array when authentication is performed.

That would make the code simpler.

Regards,

Antoine

On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert <an...@gmx.de> wrote:

> Hi,
> 
> I wonder whether we should also upgrade Ivy to use the latest HTTP client library?
> 
> Regards,
> 
> Antoine
> 
> On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <ji...@apache.org> wrote:
> 
>> 
>>   [ https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468 ]
>> 
>> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
>> ------------------------------------------------------------
>> 
>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that. 
>> 
>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>> 
>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>> 
>> 
>> was (Author: qphase):
>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that.
>> 
>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>> 
>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
> For additional commands, e-mail: dev-help@ant.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org For additional commands, e-mail: dev-help@ant.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org


Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Posted by Nicolas Lalevée <ni...@hibnet.org>.
> Le 9 avr. 2015 à 16:51, Maarten Coene <ma...@yahoo.com.INVALID> a écrit :
> 
> I'm not a fan of this proposal; I like that Ivy doesn't have any dependencies when using the standard resolvers.
> Perhaps it could be added to the documentation that if you use the URL resolver for large uploads you'll have to add httpclient to the classpath?

+1
And considering we are packaging Ivy for Eclipse, we would somehow have to get httpclient installed there as well.

Nicolas

> 
> 
> Maarten
> 
> 
> 
> 
> ----- Original message -----
> From: Antoine Levy Lambert <an...@gmx.de>
> To: Ant Developers List <de...@ant.apache.org>
> Cc: 
> Sent: Thursday, April 9, 2015, 3:50
> Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish
> 
> Also, I wonder whether we should not make the use of httpclient with Ivy compulsory, since Loren says that the JDK's HttpURLConnection always copies the full file into a byte array when authentication is performed.
> 
> That would make the code simpler.
> 
> Regards,
> 
> Antoine
> 
> On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert <an...@gmx.de> wrote:
> 
>> Hi,
>> 
>> I wonder whether we should also upgrade Ivy to use the latest HTTP client library?
>> 
>> Regards,
>> 
>> Antoine
>> 
>> On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <ji...@apache.org> wrote:
>> 
>>> 
>>>  [ https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468 ] 
>>> 
>>> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
>>> ------------------------------------------------------------
>>> 
>>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that. 
>>> 
>>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>>> 
>>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>>> 
>>> 
>>> was (Author: qphase):
>>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that.
>>> 
>>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>>> 
>>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>>> 
>> 
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
>> For additional commands, e-mail: dev-help@ant.apache.org
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
> For additional commands, e-mail: dev-help@ant.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org


Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Posted by Maarten Coene <ma...@yahoo.com.INVALID>.
I'm not a fan of this proposal; I like that Ivy doesn't have any dependencies when using the standard resolvers.
Perhaps it could be added to the documentation that if you use the URL resolver for large uploads you'll have to add httpclient to the classpath?


Maarten




----- Original message -----
From: Antoine Levy Lambert <an...@gmx.de>
To: Ant Developers List <de...@ant.apache.org>
Cc: 
Sent: Thursday, April 9, 2015, 3:50
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Also, I wonder whether we should not make the use of httpclient with Ivy compulsory, since Loren says that the JDK's HttpURLConnection always copies the full file into a byte array when authentication is performed.

That would make the code simpler.

Regards,

Antoine

On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert <an...@gmx.de> wrote:

> Hi,
> 
> I wonder whether we should also upgrade Ivy to use the latest HTTP client library?
> 
> Regards,
> 
> Antoine
> 
> On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <ji...@apache.org> wrote:
> 
>> 
>>   [ https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468 ] 
>> 
>> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
>> ------------------------------------------------------------
>> 
>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that. 
>> 
>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>> 
>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>> 
>> 
>> was (Author: qphase):
>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that.
>> 
>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>> 
>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
> For additional commands, e-mail: dev-help@ant.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org


Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Posted by Antoine Levy Lambert <an...@gmx.de>.
Also, I wonder whether we should not make the use of httpclient with Ivy compulsory, since Loren says that the JDK's HttpURLConnection always copies the full file into a byte array when authentication is performed.

That would make the code simpler.
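To make the alternative concrete: Apache commons-httpclient 3.x can stream the file from disk and, with preemptive authentication, never needs to hold or replay the body in memory. A minimal sketch, with the credentials and content type invented purely for illustration:

    import java.io.File;
    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.UsernamePasswordCredentials;
    import org.apache.commons.httpclient.auth.AuthScope;
    import org.apache.commons.httpclient.methods.FileRequestEntity;
    import org.apache.commons.httpclient.methods.PutMethod;

    public class HttpClientUpload {
        public static void main(String[] args) throws Exception {
            File artifact = new File(args[0]);  // artifact to publish (hypothetical)
            String url = args[1];               // repository URL (hypothetical)

            HttpClient client = new HttpClient();
            client.getState().setCredentials(AuthScope.ANY,
                    new UsernamePasswordCredentials("user", "secret")); // placeholder credentials
            // Preemptive auth sends credentials with the first request, so the client
            // never has to replay (and therefore never has to buffer) the body.
            client.getParams().setAuthenticationPreemptive(true);

            PutMethod put = new PutMethod(url);
            // FileRequestEntity streams the file from disk chunk by chunk.
            put.setRequestEntity(new FileRequestEntity(artifact, "application/octet-stream"));
            try {
                int status = client.executeMethod(put);
                System.out.println("HTTP " + status);
            } finally {
                put.releaseConnection();
            }
        }
    }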

Regards,

Antoine
On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert <an...@gmx.de> wrote:

> Hi,
> 
> I wonder whether we should also upgrade Ivy to use the latest HTTP client library?
> 
> Regards,
> 
> Antoine
> 
> On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <ji...@apache.org> wrote:
> 
>> 
>>   [ https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468 ] 
>> 
>> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
>> ------------------------------------------------------------
>> 
>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that. 
>> 
>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>> 
>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>> 
>> 
>> was (Author: qphase):
>> I would be happy to provide you with a project that will reproduce the issue. I can and will do that.
>> 
>> Generally speaking, from a high level, the utility classes are calling convenience methods and writing to streams that ultimately buffer the data being written. There is buffering, then more buffering, and even more buffering, until you have multiple copies of the entire content of the stream stored in oversized buffers (because they double in size when they fill up). The twist is that the JVM hits a limit no matter how much RAM you allocate. Once the buffers total more than about 1GB (which is what happens with a 100-200MB upload), the JVM refuses to allocate more buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any of this data to begin with; it is just a side effect of using high-level copy methods. There is no memory ballooning at all when the content is written directly to the network.
>> 
>> I will provide a test project and note the breakpoints where you can debug and watch the process walk all the way down the aisle to an OOME. I will have this for you ASAP.
>> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
> For additional commands, e-mail: dev-help@ant.apache.org