Posted to c-dev@axis.apache.org by Manjula Peiris <ma...@wso2.com> on 2008/02/13 05:00:18 UTC

Re: Sending large attachments with Axis2/C with httpd

Hi devs,

While implementing an array-of-buffers solution for the $subject I tried
another alternative. Please see the comment on
https://issues.apache.org/jira/browse/AXIS2C-862

I have also attached the modified mime_parser.c file.
I don't think the array-of-buffers approach can achieve lower memory
usage than that. I have implemented it as well, but it still needs some
fixes, because the logic is a bit complex and hard to debug.
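
For anyone who wants to comment on the array-of-buffers idea without
reading the patch, the core of it is roughly the following. This is only
a sketch to show the shape of the data structure; the struct and
function names are made up for illustration and are not the ones in the
attached mime_parser.c.

#include <string.h>
#include <axutil_env.h>

/* Sketch of the array-of-buffers idea: keep each block read from the
 * stream as-is and record it in a small index, instead of reallocating
 * one contiguous (previous + new) buffer for every block. */
typedef struct mime_block
{
    axis2_char_t *data;      /* block exactly as read from the stream */
    size_t size;             /* number of bytes in the block          */
} mime_block_t;

typedef struct mime_block_list
{
    mime_block_t *blocks;    /* small index; grows by doubling        */
    size_t count;
    size_t capacity;
} mime_block_list_t;

/* Store a newly read block. Only the small index array is ever
 * reallocated, never the block data itself. */
static axis2_status_t
mime_block_list_add(
    mime_block_list_t *list,
    const axutil_env_t *env,
    axis2_char_t *data,
    size_t size)
{
    if (list->count == list->capacity)
    {
        size_t new_capacity = list->capacity ? list->capacity * 2 : 16;
        mime_block_t *new_blocks = AXIS2_MALLOC(env->allocator,
            sizeof(mime_block_t) * new_capacity);
        if (!new_blocks)
        {
            return AXIS2_FAILURE;
        }
        if (list->count)
        {
            memcpy(new_blocks, list->blocks,
                sizeof(mime_block_t) * list->count);
            AXIS2_FREE(env->allocator, list->blocks);
        }
        list->blocks = new_blocks;
        list->capacity = new_capacity;
    }
    list->blocks[list->count].data = data;
    list->blocks[list->count].size = size;
    list->count++;
    return AXIS2_SUCCESS;
}

The trade-off is that the boundary search and the final
envelope/attachment extraction have to walk this list instead of one
flat buffer, which is exactly where the logic gets complex and hard to
debug.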

The other thing is that our current MTOM implementation does not work
properly when chunking is enabled. See
https://issues.apache.org/jira/browse/AXIS2C-982

If we can solve the chunked case, the logic attached to the Jira is fine
for me.

But I also think we need to revisit creating our own apr_allocator. If
we can fix Sandesha2/C for that and use the solution attached to the
Jira, it would be fine.
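
Just so we are all talking about the same pattern, the per-request
allocator idea is roughly the following, using plain APR calls (the real
integration into mod_axis2 and the axutil_allocator wrapper is of course
more involved than this sketch):

#include <apr_pools.h>
#include <apr_allocator.h>

/* Sketch of a per-request pool backed by its own apr_allocator, so that
 * destroying the pool at the end of the request really returns the
 * memory instead of leaving it cached in httpd's request pool. */
static int
process_request_with_own_pool(void)
{
    apr_allocator_t *allocator = NULL;
    apr_pool_t *pool = NULL;

    if (apr_allocator_create(&allocator) != APR_SUCCESS)
    {
        return -1;
    }
    if (apr_pool_create_ex(&pool, NULL, NULL, allocator) != APR_SUCCESS)
    {
        apr_allocator_destroy(allocator);
        return -1;
    }
    /* Let the pool own the allocator so apr_pool_destroy() frees both. */
    apr_allocator_owner_set(allocator, pool);

    /* ... build the axutil environment on top of this pool and process
     * the request here ... */

    /* This destroy is what breaks Sandesha2/C today: its threads can
     * outlive the request and still use this allocator. */
    apr_pool_destroy(pool);
    return 0;
}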

Your thoughts are highly appreciated.   

Thanks,
-Manjula.



On Wed, 2008-01-23 at 14:28 +0530, Manjula Peiris wrote:
> Hi devs,
> 
> While working on https://issues.apache.org/jira/browse/AXIS2C-862,
> I found that with httpd it takes around 800MB of memory to process a
> 40MB attachment. The reason is that when processing a request with
> optimized binary data, the entire SOAP message is parsed by the
> mime_parse_parse() method to separate the SOAP envelope and the MIME
> parts. Since the incoming stream is read block by block, after
> processing a block the parser reallocates the whole buffer
> (previous + new) for the processing of the next block. With the simple
> axis server this is not a problem, because it uses free() to release
> the memory. But with httpd, since we are allocating memory from the
> pool and releasing the pool is done by httpd, the memory usage keeps
> growing, and after 3 or 4 requests the system crashes due to limited
> memory.
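
Just to put numbers on that (assuming, for illustration, that the stream
is read in 1MB blocks; I am not sure of the exact block size): the
parser then allocates 1MB + 2MB + ... + 40MB, i.e. about (40 * 41 / 2)MB
= 820MB, from the request pool before the envelope and attachments are
separated, and none of it can be returned until httpd destroys the pool.
That matches the ~800MB I see for a 40MB attachment.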
> 
> Actually the main problem here is that httpd does not release the
> request pool even after processing the request. So I create my own
> pool using my own allocator and destroy both after processing the
> request. This causes problems with Sandesha2/C, because freeing the
> allocator (here allocator means the apr_allocator, not the
> axutil_allocator) breaks Sandesha2/C, which creates its own threads,
> and those threads still use the allocator.
> 
> Even the above solution does not prevent using a large amount of
> memory to process large attachments.
> 
> So I think we need to change the mime_parser to process each block
> separately and concatenate the results. The problem with this approach
> is that the search string (e.g. the MIME boundary) may not be fully
> contained in one buffer. I am trying to come up with an algorithm for
> this problem.
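
The usual trick, as far as I can see, is to keep the last
(boundary length - 1) bytes of the previous block and search them
together with the start of the new block, before searching the new
block on its own. A rough, untested sketch (not the code in the
attached mime_parser.c):

#include <string.h>
#include <stddef.h>

/* Naive search for 'needle' in 'haystack'; returns the offset or -1. */
static long
buf_find(const char *haystack, size_t hay_len,
         const char *needle, size_t needle_len)
{
    size_t i;
    if (needle_len == 0 || hay_len < needle_len)
    {
        return -1;
    }
    for (i = 0; i + needle_len <= hay_len; i++)
    {
        if (memcmp(haystack + i, needle, needle_len) == 0)
        {
            return (long) i;
        }
    }
    return -1;
}

/* Search for the MIME boundary in the newly read block, also covering
 * the case where the boundary starts in the previous block. 'tail'
 * holds the last (boundary_len - 1) bytes of the previous block.
 * Returns the boundary offset relative to the start of 'tail' followed
 * by 'block', or -1 if it is not found. */
static long
find_boundary_across_blocks(
    const char *tail, size_t tail_len,
    const char *block, size_t block_len,
    const char *boundary, size_t boundary_len)
{
    char window[512];   /* assumes tail_len + boundary_len - 1 <= 512 */
    long pos;

    if (boundary_len == 0)
    {
        return -1;
    }

    /* 1. The seam: tail plus the first (boundary_len - 1) bytes of the
     *    new block. Any match found here starts in the previous block. */
    if (tail_len > 0)
    {
        size_t head = block_len < boundary_len - 1 ?
            block_len : boundary_len - 1;
        if (tail_len + head <= sizeof(window))
        {
            memcpy(window, tail, tail_len);
            memcpy(window + tail_len, block, head);
            pos = buf_find(window, tail_len + head, boundary, boundary_len);
            if (pos >= 0)
            {
                return pos;
            }
        }
    }

    /* 2. The new block itself. */
    pos = buf_find(block, block_len, boundary, boundary_len);
    if (pos >= 0)
    {
        return (long) tail_len + pos;
    }
    return -1;
}

With the array-of-buffers approach the same idea applies; the "tail"
would simply be the end of the previous entry in the block list.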
> 
> Any thoughts?
> 
> Thanks,
> -Manjula.
> 

