Posted to dev@tika.apache.org by "Tim Allison (JIRA)" <ji...@apache.org> on 2019/01/03 18:30:00 UTC
[jira] [Commented] (TIKA-2802) Out of memory issues when extracting large files (pst)
[ https://issues.apache.org/jira/browse/TIKA-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733340#comment-16733340 ]
Tim Allison commented on TIKA-2802:
-----------------------------------
I don't know how generalizable this is, but it looks like the parser is holding a large xlsx sheet in memory... {{<worksheet...}}. That part _might_ be a problem with xerces.
I notice with xerces2 (at least) that the XMLReader held by the SAXParser is not released/cleared when we call {{reset()}} on the SAXParser. If we add some nullifications to our reset (as below), that clears out the reader:
{noformat}
XMLReader reader = saxParser.getXMLReader();
reader.setContentHandler(null);
reader.setDTDHandler(null);
reader.setEntityResolver(null);
reader.setErrorHandler(null);
{noformat}
Let me try some things with xerces, and I'll let you know what I find.
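To see the proposed workaround in context, here is a minimal, self-contained sketch: parse a small document, call {{reset()}}, then null out the handlers on the underlying XMLReader. This assumes the JDK's bundled Xerces implementation, whose XMLReader accepts null handler arguments (the SAX javadoc technically permits a NullPointerException for null, so other implementations may differ):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

public class SaxResetSketch {
    public static void main(String[] args) throws Exception {
        SAXParser saxParser = SAXParserFactory.newInstance().newSAXParser();

        // Parse a small document; the parser's XMLReader now holds
        // references to the handler (and, with Xerces, internal buffers).
        saxParser.parse(
                new ByteArrayInputStream(
                        "<root><child/></root>".getBytes(StandardCharsets.UTF_8)),
                new DefaultHandler());

        // reset() restores the parser's configuration, but (with xerces2,
        // at least) it does not clear the XMLReader's handler references.
        saxParser.reset();

        // Proposed workaround: explicitly null the handlers so the reader
        // drops its references and the retained memory becomes collectable.
        XMLReader reader = saxParser.getXMLReader();
        reader.setContentHandler(null);
        reader.setDTDHandler(null);
        reader.setEntityResolver(null);
        reader.setErrorHandler(null);

        System.out.println("contentHandler after clearing: "
                + reader.getContentHandler());
    }
}
```

With the JDK's default parser this prints a null content handler after clearing, confirming the reader no longer pins the handler chain between parses.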
> Out of memory issues when extracting large files (pst)
> ------------------------------------------------------
>
> Key: TIKA-2802
> URL: https://issues.apache.org/jira/browse/TIKA-2802
> Project: Tika
> Issue Type: Bug
> Components: parser
> Affects Versions: 1.20, 1.19.1
> Environment: Reproduced on Windows 2012 R2 and Ubuntu 18.04.
> Java: jdk1.8.0_151
>
> Reporter: Caleb Ott
> Priority: Critical
> Attachments: Selection_111.png
>
>
> I have an application that extracts text from multiple files on a file share. I've been running into issues with the application running out of memory (~26g dedicated to the heap).
> In the heap dumps I found an "fDTDDecl" buffer that allocates very large char arrays and never releases that memory. The attached picture shows a heap dump with 4 SAXParsers holding onto a large chunk of memory; the fourth one is expanded to show that it is all held by the "fDTDDecl" field. This dump is from a scaled-down execution (not a 26g heap).
> It looks like that DTD field should never grow that large, so I'm wondering whether this is actually a bug in xerces. I can easily reproduce the issue by extracting text from large .pst files.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)