Posted to dev@tika.apache.org by "Tim Allison (JIRA)" <ji...@apache.org> on 2019/01/07 14:02:00 UTC

[jira] [Comment Edited] (TIKA-2802) Out of memory issues when extracting large files (pst)

    [ https://issues.apache.org/jira/browse/TIKA-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735843#comment-16735843 ] 

Tim Allison edited comment on TIKA-2802 at 1/7/19 2:01 PM:
-----------------------------------------------------------

Does the problem go away if you specify xerces2, e.g.:

{noformat}
-Djavax.xml.parsers.SAXParserFactory=org.apache.xerces.jaxp.SAXParserFactoryImpl
{noformat}
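For anyone verifying which implementation JAXP actually resolved, a minimal sketch (the class name {{FactoryCheck}} is illustrative, not part of Tika):

```java
import javax.xml.parsers.SAXParserFactory;

public class FactoryCheck {
    public static void main(String[] args) {
        // With -Djavax.xml.parsers.SAXParserFactory=org.apache.xerces.jaxp.SAXParserFactoryImpl
        // and xercesImpl on the classpath, this prints the xerces2 factory class;
        // otherwise it prints the JDK's built-in implementation.
        SAXParserFactory factory = SAXParserFactory.newInstance();
        System.out.println(factory.getClass().getName());
    }
}
```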

Second question:
bq. Then I can manually clear or reset the parser after tika is done parsing the file.
How are you clearing/resetting?  Our code _should_ be doing this successfully.  Can we replicate it within Tika?
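For reference on the clearing/resetting question, the standard JAXP mechanism is {{SAXParser.reset()}}. A hedged sketch of the two options (reuse after reset versus a fresh instance); this is illustrative, not Tika's actual pooling code:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

public class ParserReuseSketch {
    public static void main(String[] args) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        SAXParser parser = factory.newSAXParser();

        // ... parse a document here ...

        // reset() restores the parser's original configuration so the instance
        // can be reused, but the JAXP contract does not require it to release
        // internal char buffers (such as the fDTDDecl buffer seen in the heap
        // dumps). Discarding the instance and creating a fresh one is the only
        // guaranteed way to free that memory.
        parser.reset();
        SAXParser fresh = factory.newSAXParser();
        System.out.println(parser != fresh); // prints: true
    }
}
```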


> Out of memory issues when extracting large files (pst)
> ------------------------------------------------------
>
>                 Key: TIKA-2802
>                 URL: https://issues.apache.org/jira/browse/TIKA-2802
>             Project: Tika
>          Issue Type: Bug
>          Components: parser
>    Affects Versions: 1.20, 1.19.1
>         Environment: Reproduced on Windows 2012 R2 and Ubuntu 18.04.
> Java: jdk1.8.0_151
>  
>            Reporter: Caleb Ott
>            Priority: Critical
>         Attachments: Selection_111.png, Selection_117.png
>
>
> I have an application that extracts text from multiple files on a file share. I've been running into issues with the application running out of memory (~26g dedicated to the heap).
> In the heap dumps I found a "fDTDDecl" buffer that allocates very large char arrays and never releases that memory. In the attached picture you can see the heap dump with 4 SAXParsers holding onto a large chunk of memory. The fourth one is expanded to show that it is all being held by the "fDTDDecl" field. This dump is from a scaled-down execution (not a 26g heap).
> It looks like that DTD field should never be that large; I'm wondering whether this is a bug in Xerces instead. I can easily reproduce the issue by attempting to extract text from large .pst files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)