Posted to dev@nutch.apache.org by John X <jo...@neasys.com> on 2006/03/01 08:12:16 UTC

Re: Nutch Parsing PDFs, and general PDF extraction

On Tue, Feb 28, 2006 at 09:55:18AM -0500, Richard Braman wrote:
> Thanks for the help.  I don't know what happened, but it is working now.
> Did any other contributors read what I sent about parsing PDFs?
> I don't think Nutch is capable of this, based on the text stripper code
> in parse-pdf.
>
> http://64.233.179.104/search?q=cache:QOwcLFXNw5oJ:www.irs.gov/pub/irs-pdf/f1040.pdf+irs+1040+pdf&hl=en&gl=us&ct=clnk&cd=1
>
> It's time to implement some real pdf parsing technology.
> Any other takers?

Nutch is about search, and it relies on third-party libraries
to extract text from various MIME types, including application/pdf.
Whether Nutch can correctly extract text from a PDF file largely
depends on the PDF parsing library it uses, currently PDFBox.
It would not be very difficult to switch to another library; however,
it seems hard to find a free/open implementation that can parse every
PDF file in the wild. There is an alternative: use Nutch's parse-ext
plugin with a command-line PDF parser/converter, which can be any
executable.
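
To make that concrete, parse-ext maps a content type to an external
command through its plugin.xml. Below is a rough sketch modeled on the
sample mapping that ships with the plugin; the parameter names are from
memory, the wrapper-script path is invented, and everything should be
verified against your Nutch version. parse-ext pipes the raw content to
the command's stdin and reads extracted text from its stdout, which is
why pdftotext needs its "-" arguments:

    <!-- Sketch of a parse-ext mapping in src/plugin/parse-ext/plugin.xml.
         Parameter names follow the sample mapping shipped with the
         plugin; verify against your Nutch version. -->
    <implementation id="ExtParser-pdf"
                    class="org.apache.nutch.parse.ext.ExtParser">
      <parameter name="contentType" value="application/pdf"/>
      <parameter name="pathSuffix"  value="pdf"/>
      <!-- Hypothetical wrapper script that runs: pdftotext - -  -->
      <parameter name="command"     value="/usr/local/bin/pdftotext-stdin"/>
      <parameter name="timeout"     value="30"/>
    </implementation>

Remember to include parse-ext in the plugin.includes property and to
make sure no other plugin claims application/pdf.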

John

RE: Nutch Parsing PDFs, and general PDF extraction

Posted by Richard Braman <rb...@bramantax.com>.
I am not a PDF guru, but I have amassed quite a bit of information on
the topic.  I have pinged the PDF mavens of the world about the issues
in parsing PDF and have been reading up on the subject to get a better
understanding.  I have contributed all of this to mailing lists, but
coding this is not something I would feel comfortable doing at this
point.  Maybe it would be best for a coordinated Lucene/Nutch/PDFBox
effort to produce some good code to do this.  I am trying to get some
dialog going.

Here is some code I was asked to debug by another interested developer.
It uses PDFBox to extract tabular data from a PDF, split at the
document's bookmarks; the bugs I found are noted in the comments.


	import java.io.FileOutputStream;
	import java.io.OutputStreamWriter;
	import java.io.Writer;

	import org.pdfbox.pdmodel.PDDocument;
	import org.pdfbox.pdmodel.interactive.documentnavigation.outline.PDDocumentOutline;
	import org.pdfbox.pdmodel.interactive.documentnavigation.outline.PDOutlineItem;
	import org.pdfbox.util.PDFTextStripper;

	try
	{
		PDDocument document =
			PDDocument.load( "53 Nostro Ofc Cofc Daily Position_AUS.pdf" );
		try
		{
			// Assumes the document actually has an outline (bookmarks).
			PDDocumentOutline root =
				document.getDocumentCatalog().getDocumentOutline();
			PDOutlineItem start = root.getFirstChild();
			PDOutlineItem end = start.getNextSibling();
			int i = 1;

			// Walk consecutive pairs of top-level bookmarks and write
			// the text between each pair to its own file.
			//
			// Main bug in the original: it used PDFTextStripperByArea,
			// which is meant for region-based extraction (addRegion() /
			// extractRegions()) and ignores bookmarks; the plain
			// PDFTextStripper honors setStartBookmark()/setEndBookmark().
			// Several no-ops (duplicate setWordSeparator() calls, the
			// setShouldSeparateByBeads(shouldSeparateByBeads()) line)
			// were dropped.
			while( end != null )
			{
				System.out.println( "Start: " + start.getTitle() );
				System.out.println( "End:   " + end.getTitle() );

				PDFTextStripper stripper = new PDFTextStripper();
				stripper.setLineSeparator( "\n" );
				stripper.setWordSeparator( "  " );
				stripper.setPageSeparator( "\n\n\n\n" );
				stripper.setStartBookmark( start );
				stripper.setEndBookmark( end );

				// Close each file; the original leaked every writer
				// except the last one.
				Writer output = new OutputStreamWriter(
					new FileOutputStream( "simple" + i + ".txt" ) );
				try
				{
					stripper.writeText( document, output );
				}
				finally
				{
					output.close();
				}

				i++;
				start = end;
				end = end.getNextSibling();
			}

			// The last top-level bookmark has no next sibling; if it
			// has children, use its last child as the end marker
			// (a null end bookmark means "to the end of the document").
			PDOutlineItem child = start.getFirstChild();
			PDOutlineItem lastChild = null;
			while( child != null )
			{
				lastChild = child;
				child = child.getNextSibling();
			}
			System.out.println( "Start: " + start.getTitle() );

			PDFTextStripper stripper = new PDFTextStripper();
			stripper.setLineSeparator( "\n" );
			stripper.setWordSeparator( "  " );
			stripper.setPageSeparator( "\n\n\n\n" );
			stripper.setStartBookmark( start );
			stripper.setEndBookmark( lastChild );

			Writer output = new OutputStreamWriter(
				new FileOutputStream( "simple" + i + ".txt" ) );
			try
			{
				stripper.writeText( document, output );
			}
			finally
			{
				output.close();
			}
		}
		finally
		{
			document.close();
		}
	}
	catch( Exception ex )
	{
		ex.printStackTrace();
	}



Re: Nutch Parsing PDFs, and general PDF extraction

Posted by Jérôme Charron <je...@gmail.com>.
> This is something Google does very well, and something Nutch must match
> to compete.

Richard, it seems you are a real PDF guru, so any code contribution to Nutch
is welcome.
;-)

Regards

Jérôme

--
http://motrech.free.fr/
http://www.frutch.org/

RE: Nutch Parsing PDFs, and general PDF extraction

Posted by Richard Braman <rb...@bramantax.com>.
Hi Ben

> But the cost of converting PDF to text is already resource
> intensive, and some users may not want to pay the additional cost to
> analyze each page.

Agreed. For Nutch it could be a simple config parameter to turn that
on or off. PDF parsing is already optional; maybe there could be
alternative parsing strategies when parsing is turned on, to choose one
of the parsing methods (simple, complex1, complex2, etc); see the
sketch below.
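
Something like this in nutch-site.xml, purely hypothetical (the
property name parser.pdf.strategy is invented here just to make the
idea concrete; no such setting exists today):

    <!-- Hypothetical property, invented for illustration. -->
    <property>
      <name>parser.pdf.strategy</name>
      <value>simple</value>
      <description>
        PDF text extraction strategy: simple (plain stripper.getText()),
        complex1 (column/table aware), complex2 (tagged-PDF aware).
      </description>
    </property>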

> While PDFs are unstructured, most documents give pretty good
> results with the default text extraction.  Usually the extracted
> text is already in reading order.

Except when the text is laid out in columns; then extraction goes
haywire. For example, parsing tax instructions always fails, and that
content is always laid out in columns.  Many newspaper articles have
the same problem.
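
One knob worth trying on column layouts, for what it is worth:
PDFTextStripper.setSortByPosition(true) orders the output by position
on the page instead of by content-stream order. It does not detect
column boundaries, so it will not solve the problem, but it can make
the failure mode more predictable. A minimal sketch:

    import org.pdfbox.pdmodel.PDDocument;
    import org.pdfbox.util.PDFTextStripper;

    public class SortByPositionDemo
    {
        public static void main( String[] args ) throws Exception
        {
            // args[0]: path to a multi-column PDF, e.g. tax instructions.
            PDDocument document = PDDocument.load( args[0] );
            try
            {
                PDFTextStripper stripper = new PDFTextStripper();
                // Emit text in page-position order rather than the order
                // it appears in the content stream.
                stripper.setSortByPosition( true );
                System.out.println( stripper.getText( document ) );
            }
            finally
            {
                document.close();
            }
        }
    }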

> An extremely small percentage of PDFs actually include tagged
> information.

Agreed, but that may change with Section 508, at least for government
documents, which are still the largest volume of PDFs on the net.
Is this hard to support with PDFBox?

> Overall, the easiest thing to do would be to implement good
> PDF->HTML conversion capabilities in PDFBox; then Nutch just uses
> the resulting HTML for indexing and for preview mode.  Until that is
> done there is not much the Nutch developers can do.

Agreed. I want the Nutch developers to know what is going on, because
I do think this functionality is important for Nutch's future. Maybe
they have some insights into parsing methods, as many of these
developers are experts with ontologies.

Ben, maybe we should move this to the PDFBox dev list, where anyone
who is interested (Nutch developer or not) can get in on it.  I would
think Nutch should assign this to someone on their team, given the
importance of the functionality.

Rich




RE: Nutch Parsing PDFs, and general PDF extraction

Posted by Ben Litchfield <be...@csh.rit.edu>.
To chime in and give my comments.

It is true that better search engine results could be obtained by
first analyzing each PDF page and converting it to some other
structure (XML/HTML) before the indexing process.  But the cost of
converting PDF to text is already resource intensive, and some users
may not want to pay the additional cost to analyze each page.

While PDFs are unstructured, most documents give pretty good results with
the default text extraction.  Usually the extracted text is already in
reading order.

An extremely small percentage of PDFs actually include tagged
information.

Converting a PDF to HTML is something that needs to be implemented in
PDFBox; then it is trivial for Nutch to include it.

Overall, the easiest thing to do would be to implement good PDF->HTML
conversion capabilities in PDFBox; then Nutch just uses the resulting
HTML for indexing and for preview mode.  Until that is done there is
not much the Nutch developers can do.

Ben



RE: Nutch Parsing PDFs, and general PDF extraction

Posted by Richard Braman <rb...@bramantax.com>.
It is possible to come up with better parsing algorithms than simply
calling stripper.getText(), which is what Nutch does right now.  I am
not recommending switching from PDFBox.  I think what matters most is
that the algorithm does the best job possible of preserving the flow
of text.  If the text does not flow correctly, search results may be
altered, which is why, if Nutch is about search, it must be able to
parse PDF correctly.  Ben Litchfield, the developer of PDFBox, has
noted that he has developed some better parsing technology, and hopes
to share it with us soon.

Another thing to consider: if the PDF is "tagged", then it carries an
XML-like markup that describes the flow of text, which was designed to
be used for accessibility under Section 508.  I think Ben also noted
that PDFBox does not support PDF tags.
http://www.planetpdf.com/enterprise/article.asp?ContentID=6067

A better parsing strategy may involve the following pseudocode (a
rough Java sketch follows):

Determine whether the PDF contains tagged content.

	If so,
		parse the tagged content so that the returned text
		flows correctly.

	If not,

		Determine whether the PDF contains bounding boxes that
		indicate content is laid out in tabular format.

		If not,
			parse with stripper.getText().

		If so,
			implement an algorithm to extract text from the
			PDF preserving the flow of text.
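
To make the dispatch concrete, here is a rough Java sketch. Only the
stripper.getText() branch exists in PDFBox today; parseTaggedContent(),
looksTabular(), and extractPreservingLayout() are hypothetical
placeholders, and the /MarkInfo check drops to the COS dictionary level
(method names from memory) because PDFBox has no tagged-PDF API:

    import org.pdfbox.cos.COSDictionary;
    import org.pdfbox.pdmodel.PDDocument;
    import org.pdfbox.util.PDFTextStripper;

    public class PdfExtractionDispatch
    {
        public static String extractText( PDDocument document )
            throws Exception
        {
            if ( isTagged( document ) )
            {
                return parseTaggedContent( document );      // hypothetical
            }
            if ( looksTabular( document ) )
            {
                return extractPreservingLayout( document ); // hypothetical
            }
            // What Nutch's parse-pdf effectively does today.
            return new PDFTextStripper().getText( document );
        }

        // A tagged PDF sets /Marked true in the catalog's /MarkInfo
        // dictionary; checked at the COS level since there is no
        // PD-level API for it.  Method names from memory; verify
        // against your PDFBox version.
        private static boolean isTagged( PDDocument document )
        {
            COSDictionary catalog =
                document.getDocumentCatalog().getCOSDictionary();
            COSDictionary markInfo = (COSDictionary)
                catalog.getDictionaryObject( "MarkInfo" );
            return markInfo != null
                && markInfo.getBoolean( "Marked", false );
        }

        // Placeholders for the strategies that do not exist yet.
        private static String parseTaggedContent( PDDocument document )
        {
            throw new UnsupportedOperationException( "tagged-PDF parsing TBD" );
        }

        private static boolean looksTabular( PDDocument document )
        {
            return false; // bounding-box/table detection TBD
        }

        private static String extractPreservingLayout( PDDocument document )
        {
            throw new UnsupportedOperationException( "layout-preserving extraction TBD" );
        }
    }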


An additional feature may include saving the PDF as HTML as Nutch
crawls the web.


Examples of such algorithms may be found at:
www.tamirhassan.com/final.pdf
http://www.chilisoftware.net/Private/Christian/ideas_for_extracting_data_from_unstructured_documents.pdf


This is something Google does very well, and something Nutch must match
to compete.
