Posted to commits@tika.apache.org by ta...@apache.org on 2018/07/27 14:16:34 UTC

[tika] branch branch_1x updated (8f61126 -> d811a3a)

This is an automated email from the ASF dual-hosted git repository.

tallison pushed a change to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git.


    from 8f61126  TIKA-2692 -- general upgrades in prep for 1.19
     new 6afdf19  Depend on Parso for SAS7BDAT support
     new 4c5bbae  Add parso to the OSGi bundle
     new fa5f282  Test Columnar files - SAS7BDAT and CSV (other spreadsheet+DB formats still required)
     new 2d19fe0  TIKA-2462 Initial parser for SAS7BDAT files powered by Parso (now ASLv2). Still to do: Metadata, Unit Tests, Consistency with similar format tests
     new f3508f2  XHTML improvements
     new 284965e  Some SAS7BDAT metadata and unit testing
     new 39e1194  More SAS7BDAT metadata
     new c31d40f  SAS7BDAT html tests
     new 02bef03  Clean up imports
     new aaa78a3  Stub a unit test for TIKA-2641
     new b6399c6  Handle .epub files using .htm rather than .html extensions for the embedded contents (TIKA-1288)
     new 95a247c  Add a test .sas7bdat file with labels, and generate the columnar/tabular test file in a few more formats
     new 7f68ebb  Add a time column to the test columnar files
     new b92f752  CSV assert as best we can (no dedicated parser), start on XLS and SAS7BDAT consistency tests
     new 65af2d9  Check header contents, check data rows count, add XLSX test
     new 3f2b7a5  Remaining values to check
     new 5d3dd69  Ensure that empty cells are still output
     new 507f59f  Not all formats know about %s, dates not completely consistent either...
     new 81caa71  Use patterns to handle the date format variations
     new d871b1f  Add disabled, currently failing ODS test
     new de53df9  Mime magic for DPX and ACES, thanks to Andreas Meier (TIKA-2628 and TIKA-2629)
     new 6880127  TIKA-2479 Option to request missing rows where possible in Excel-like formats
     new dcfbe5a  TIKA-2479 Output missing left/mid cells in XLSX and XLSB, and optionally also missing rows
     new b336360  Updated Columnar output from SAS with better formats
     new 65cf9f2  Formatted columns in the columnar test Excel files
     new 8ea6b22  TIKA-2479 Update XLS missing cell/row handling to match XLSX and XLSB, add unit test for missing rows, and enable the Columnar tests for the Excel formats
     new 060bfa5  Move some fixes that didn't make it into 1.18 into 1.19
     new 08a767a  Changelog update
     new 3da39b8  Add the other jackcess jar to the bundle
     new d811a3a  Move some fixes that didn't make it into 1.18 into 1.19, clean up

The 30 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt                                        |  23 +-
 tika-bundle/pom.xml                                |   3 +
 .../java/org/apache/tika/metadata/Database.java    |   5 +-
 .../org/apache/tika/mime/tika-mimetypes.xml        |  19 ++
 tika-parsers/pom.xml                               |   5 +
 .../org/apache/tika/parser/epub/EpubParser.java    |   3 +-
 .../tika/parser/microsoft/ExcelExtractor.java      |  26 +-
 .../tika/parser/microsoft/OfficeParserConfig.java  |  16 +-
 .../ooxml/XSSFBExcelExtractorDecorator.java        |   2 +-
 .../ooxml/XSSFExcelExtractorDecorator.java         |  35 ++-
 .../org/apache/tika/parser/sas/SAS7BDATParser.java | 151 +++++++++++
 .../services/org.apache.tika.parser.Parser         |   1 +
 .../org/apache/tika/parser/TabularFormatsTest.java | 283 +++++++++++++++++++++
 .../tika/parser/microsoft/ExcelParserTest.java     |  25 +-
 .../apache/tika/parser/sas/SAS7BDATParserTest.java | 148 +++++++++++
 .../resources/test-documents/test-columnar.csv     |  12 +
 .../resources/test-documents/test-columnar.ods     | Bin 0 -> 12854 bytes
 .../resources/test-documents/test-columnar.sas.xml | 113 ++++++++
 .../{testACCESS.mdb => test-columnar.sas7bdat}     | Bin 110592 -> 131072 bytes
 .../resources/test-documents/test-columnar.xls     | Bin 0 -> 32768 bytes
 .../resources/test-documents/test-columnar.xlsb    | Bin 0 -> 9691 bytes
 .../resources/test-documents/test-columnar.xlsx    | Bin 0 -> 10556 bytes
 .../resources/test-documents/test-columnar.xpt     | Bin 0 -> 4720 bytes
 .../src/test/resources/test-documents/testSAS2.sas |  70 +++++
 24 files changed, 906 insertions(+), 34 deletions(-)
 create mode 100644 tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
 create mode 100644 tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
 create mode 100644 tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
 create mode 100644 tika-parsers/src/test/resources/test-documents/test-columnar.csv
 create mode 100644 tika-parsers/src/test/resources/test-documents/test-columnar.ods
 create mode 100644 tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml
 copy tika-parsers/src/test/resources/test-documents/{testACCESS.mdb => test-columnar.sas7bdat} (73%)
 create mode 100644 tika-parsers/src/test/resources/test-documents/test-columnar.xls
 create mode 100644 tika-parsers/src/test/resources/test-documents/test-columnar.xlsb
 create mode 100644 tika-parsers/src/test/resources/test-documents/test-columnar.xlsx
 create mode 100644 tika-parsers/src/test/resources/test-documents/test-columnar.xpt
 create mode 100644 tika-parsers/src/test/resources/test-documents/testSAS2.sas


[tika] 09/30: Clean up imports

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 02bef032e654c3d2e23de0f51e037b61cd130c50
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 3 21:25:38 2018 +0100

    Clean up imports
---
 .../src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java  | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
index 37be73b..2657ac2 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -17,9 +17,7 @@
 package org.apache.tika.parser.sas;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
 
-import java.io.IOException;
 import java.io.InputStream;
 import java.util.Arrays;
 
@@ -35,10 +33,8 @@ import org.apache.tika.parser.ParseContext;
 import org.apache.tika.parser.Parser;
 import org.apache.tika.parser.executable.MachineMetadata;
 import org.apache.tika.sax.BodyContentHandler;
-import org.apache.tika.sax.WriteOutContentHandler;
 import org.junit.Test;
 import org.xml.sax.ContentHandler;
-import org.xml.sax.helpers.DefaultHandler;
 
 public class SAS7BDATParserTest extends TikaTest {
     private Parser parser = new SAS7BDATParser();


[tika] 06/30: Some SAS7BDAT metadata and unit testing

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 284965e789ec86364c31d471f16d6732c5e4e41d
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Fri Apr 27 17:34:52 2018 +0100

    Some SAS7BDAT metadata and unit testing
---
 .../org/apache/tika/parser/sas/SAS7BDATParser.java | 56 +++++++++++++++++++++-
 .../apache/tika/parser/sas/SAS7BDATParserTest.java | 35 ++++++++++++--
 2 files changed, 85 insertions(+), 6 deletions(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
index 4944c12..5992e15 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
@@ -33,6 +33,7 @@ import org.xml.sax.SAXException;
 
 import com.epam.parso.Column;
 import com.epam.parso.DataWriterUtil;
+import com.epam.parso.SasFileProperties;
 import com.epam.parso.SasFileReader;
 import com.epam.parso.impl.SasFileReaderImpl;
 
@@ -63,10 +64,61 @@ public class SAS7BDATParser extends AbstractParser {
         xhtml.startDocument();
         
         SasFileReader sas = new SasFileReaderImpl(stream);
+        SasFileProperties props = sas.getSasFileProperties();
 
-        // TODO Metadata
+        // Record the interesting parts of the file's metadata
+        metadata.set(TikaCoreProperties.TITLE, props.getName());
+        metadata.set(TikaCoreProperties.CREATED, props.getDateCreated());
+        metadata.set(TikaCoreProperties.MODIFIED, props.getDateModified());
 
-        // Output as a table
+        // TODO What about these?
+/*
+u64 - false
+compressionMethod - null
+endianness - 1
+encoding - windows-1252
+sessionEncoding - null
+fileType - DATA
+sasRelease - 9.0101M3
+serverType - XP_PRO
+osName - 
+osType - 
+headerLength - 1024
+pageLength - 8192
+pageCount - 1
+rowLength - 96
+rowCount - 31
+mixPageRowCount - 69
+columnsCount - 5
+*/
+
+        // TODO Should we output more Column info as metadata?
+/*
+5 Columns defined:
+ 1 - A
+  Label: A
+  Format: $58.
+  Size 58 of java.lang.String
+ 2 - B
+  Label: B
+  Format: 
+  Size 8 of java.lang.Number
+ 3 - C
+  Label: C
+  Format: DATE8.
+  Size 8 of java.lang.Number
+ 4 - D
+  Label: D
+  Format: DATETIME17.
+  Size 8 of java.lang.Number
+ 5 - E
+  Label: E
+  Format: 
+  Size 8 of java.lang.Number
+*/
+
+        // Output file contents as a table
+        xhtml.element("h1", props.getName());
         xhtml.startElement("table");
         xhtml.newline();
         
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
index 9f57c95..2f29a13 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -48,8 +48,20 @@ public class SAS7BDATParserTest extends TikaTest {
         }
 
         assertEquals("application/x-sas-data", metadata.get(Metadata.CONTENT_TYPE));
-        // TODO Test the metadata
-        // TODO Test the contents
+        assertEquals("TESTING", metadata.get(TikaCoreProperties.TITLE));
+
+        // Mon Jan 30 07:31:47 GMT 2017
+        assertEquals("2017-01-30T07:31:47Z", metadata.get(TikaCoreProperties.CREATED));
+        assertEquals("2017-01-30T07:31:47Z", metadata.get(TikaCoreProperties.MODIFIED));
+        
+        // TODO Test the rest of the metadata
+        
+        String content = handler.toString();
+        assertContains("TESTING", content);
+        assertContains("\t3\t", content);
+        assertContains("\t10\t", content);
+        assertContains("\tThis is row", content);
+        assertContains(" of ", content);
     }
     
     @Test
@@ -64,9 +76,24 @@ public class SAS7BDATParserTest extends TikaTest {
         }
 
         assertEquals("application/x-sas-data", metadata.get(Metadata.CONTENT_TYPE));
-        // TODO Test the metadata
-        // TODO Test the contents
+        assertEquals("SHEET1", metadata.get(TikaCoreProperties.TITLE));
+
+        // Fri Mar 06 19:10:19 GMT 2015
+        assertEquals("2015-03-06T19:10:19Z", metadata.get(TikaCoreProperties.CREATED));
+        assertEquals("2015-03-06T19:10:19Z", metadata.get(TikaCoreProperties.MODIFIED));
+        
+        // TODO Test the rest of the metadata
+        
+        String content = handler.toString();
+        assertContains("SHEET1", content);
+        assertContains("A\tB\tC", content);
+        assertContains("Num=0\t", content);
+        assertContains("Num=404242\t", content);
+        assertContains("\t0\t", content);
+        assertContains("\t404242\t", content);
+        assertContains("\t08Feb1904\t", content);
     }
 
     // TODO HTML contents unit test
+    // TODO Column names vs labels, with a different test file
 }
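
The properties dumped in the TODO block above map fairly directly onto existing Tika metadata keys, which is roughly where the later metadata commits in this series end up. The following is only a rough sketch of such a mapping, not the committed code; the SasFileProperties getter names are inferred from the property names listed above and should be treated as assumptions rather than the confirmed Parso API:

    import org.apache.tika.metadata.Database;
    import org.apache.tika.metadata.HttpHeaders;
    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.metadata.OfficeOpenXMLExtended;
    import org.apache.tika.metadata.PagedText;

    import com.epam.parso.Column;
    import com.epam.parso.SasFileProperties;
    import com.epam.parso.SasFileReader;

    class SasMetadataSketch {
        // Getter names below are assumptions based on the dumped property names
        // (pageCount, columnsCount, rowCount, encoding, serverType, sasRelease).
        static void copyProperties(SasFileReader sas, Metadata metadata) {
            SasFileProperties props = sas.getSasFileProperties();
            metadata.set(PagedText.N_PAGES, (int) props.getPageCount());
            metadata.set(Database.COLUMN_COUNT, String.valueOf(props.getColumnsCount()));
            metadata.set(Database.ROW_COUNT, String.valueOf(props.getRowCount()));
            metadata.set(HttpHeaders.CONTENT_ENCODING, props.getEncoding());
            metadata.set(OfficeOpenXMLExtended.APPLICATION, props.getServerType());
            metadata.set(OfficeOpenXMLExtended.APP_VERSION, props.getSasRelease());
            // Record each column name as a repeated (multi-valued) property
            for (Column c : sas.getColumns()) {
                metadata.add(Database.COLUMN_NAME, c.getName());
            }
        }
    }

The SAS7BDATParserTest assertions added later in this email series check essentially these keys (column count, row count, encoding, application and version, column names).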


[tika] 28/30: Changelog update

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 08a767ab7490d8fde8bb238d4c60b005096d9e38
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Fri May 18 15:17:56 2018 +0100

    Changelog update
---
 CHANGES.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/CHANGES.txt b/CHANGES.txt
index ae7627d..822139e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -101,6 +101,11 @@ Release 1.18 - 4/20/2018
      images to be built from source (TIKA-1518).
 
 
+   * For sparse XLSX and XLSB files, always output missing cells to
+     the left of filled ones (matching XLS), and optionally output
+     missing rows on all 3 formats if requested via the
+     OfficeParserContext (TIKA-2479)
+
 Release 1.17 - 12/8/2017
 
   ***NOTE: THIS IS THE LAST VERSION OF TIKA THAT WILL RUN


[tika] 22/30: TIKA-2479 Option to request missing rows where possible in Excel-like formats

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 68801273967bba2ed5fee90adbd06c9c19f27abc
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 17 22:15:34 2018 +0100

    TIKA-2479 Option to request missing rows where possible in Excel-like formats
---
 .../apache/tika/parser/microsoft/OfficeParserConfig.java | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/OfficeParserConfig.java b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/OfficeParserConfig.java
index 34b865e..5d34b2e 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/OfficeParserConfig.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/OfficeParserConfig.java
@@ -29,6 +29,7 @@ public class OfficeParserConfig implements Serializable {
     private boolean includeMoveFromContent = false;
     private boolean includeShapeBasedContent = true;
     private boolean includeHeadersAndFooters = true;
+    private boolean includeMissingRows = false;
     private boolean concatenatePhoneticRuns = true;
 
     private boolean useSAXDocxExtractor = false;
@@ -188,10 +189,23 @@ public class OfficeParserConfig implements Serializable {
         this.extractAllAlternativesFromMSG = extractAllAlternativesFromMSG;
     }
 
-
     public boolean getExtractAllAlternativesFromMSG() {
         return extractAllAlternativesFromMSG;
     }
+
+    /**
+     * For table-like formats, and tables within other formats, should
+     *  missing rows in sparse tables be output where detected?
+     * The default is to only output rows defined within the file, which
+     *  avoid lots of blank lines, but means layout isn't preserved.
+     */
+    public void setIncludeMissingRows(boolean includeMissingRows) {
+        this.includeMissingRows = includeMissingRows;
+    }
+
+    public boolean getIncludeMissingRows() {
+        return includeMissingRows;
+    }
 }
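
A minimal usage sketch for the new option follows; it is not part of this commit, and the flag only takes effect once the extractors honour it in the later TIKA-2479 commits of this series. The testEXCEL.xls resource is the sparse sheet that ExcelParserTest exercises further down this email:

    import java.io.InputStream;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.parser.microsoft.OfficeParser;
    import org.apache.tika.parser.microsoft.OfficeParserConfig;
    import org.apache.tika.sax.BodyContentHandler;

    public class MissingRowsExample {
        public static void main(String[] args) throws Exception {
            OfficeParserConfig config = new OfficeParserConfig();
            config.setIncludeMissingRows(true);   // default is false

            // The config is passed to the parser via the ParseContext
            ParseContext context = new ParseContext();
            context.set(OfficeParserConfig.class, config);

            BodyContentHandler handler = new BodyContentHandler();
            Metadata metadata = new Metadata();
            try (InputStream input = MissingRowsExample.class.getResourceAsStream(
                    "/test-documents/testEXCEL.xls")) {
                new OfficeParser().parse(input, handler, metadata, context);
            }

            // Missing rows now show up as extra empty lines in the extracted text
            System.out.println(handler.toString());
        }
    }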
 
 


[tika] 04/30: TIKA-2462 Initial parser for SAS7BDAT files powered by Parso (now ASLv2). Still to do: Metadata, Unit Tests, Consistency with similar format tests

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 2d19fe0ad26f68c6cc2caeaed713cd28179dede7
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu Apr 26 23:43:16 2018 +0100

    TIKA-2462 Initial parser for SAS7BDAT files powered by Parso (now ASLv2). Still to do: Metadata, Unit Tests, Consistency with similar format tests
---
 .../org/apache/tika/parser/sas/SAS7BDATParser.java | 100 +++++++++++++++++++++
 .../services/org.apache.tika.parser.Parser         |   1 +
 2 files changed, 101 insertions(+)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
new file mode 100644
index 0000000..1cef3cd
--- /dev/null
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.tika.parser.sas;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Collections;
+import java.util.Set;
+
+import org.apache.tika.exception.TikaException;
+import org.apache.tika.metadata.Metadata;
+import org.apache.tika.metadata.TikaCoreProperties;
+import org.apache.tika.mime.MediaType;
+import org.apache.tika.parser.AbstractParser;
+import org.apache.tika.parser.ParseContext;
+import org.apache.tika.sax.XHTMLContentHandler;
+import org.xml.sax.ContentHandler;
+import org.xml.sax.SAXException;
+
+import com.epam.parso.Column;
+import com.epam.parso.DataWriterUtil;
+import com.epam.parso.SasFileReader;
+import com.epam.parso.impl.SasFileReaderImpl;
+
+/**
+ * Processes the SAS7BDAT data columnar database file used by SAS and 
+ *  other similar languages.
+ */
+public class SAS7BDATParser extends AbstractParser {
+    private static final long serialVersionUID = -2775485539937983150L;
+    
+    private static final MediaType TYPE_SAS7BDAT =
+            MediaType.application("x-sas-data");
+    private static final Set<MediaType> SUPPORTED_TYPES =
+            Collections.singleton(TYPE_SAS7BDAT);
+
+    @Override
+    public Set<MediaType> getSupportedTypes(ParseContext context) {
+        return SUPPORTED_TYPES;
+    }
+
+    @Override
+    public void parse(InputStream stream, ContentHandler handler,
+            Metadata metadata, ParseContext context)
+            throws IOException, SAXException, TikaException {
+        metadata.set(Metadata.CONTENT_TYPE, TYPE_SAS7BDAT.toString());
+
+        XHTMLContentHandler xhtml = new XHTMLContentHandler(handler, metadata);
+        xhtml.startDocument();
+        
+        SasFileReader sas = new SasFileReaderImpl(stream);
+        
+        // TODO Metadata
+
+        // Output as a table
+        xhtml.startElement("table");
+        xhtml.newline();
+        
+        // Do the column headings
+        xhtml.startElement("tr");
+        for (Column c : sas.getColumns()) {
+            xhtml.startElement("th", "title", c.getName());
+            xhtml.characters(c.getLabel());
+            xhtml.endElement("th");
+        }
+        xhtml.endElement("tr");
+        xhtml.newline();
+        
+        // Process each row in turn
+        Object[] row = null;
+        while ((row = sas.readNext()) != null) {
+            xhtml.startElement("tr");
+            for (String val : DataWriterUtil.getRowValues(sas.getColumns(), row)) {
+                xhtml.startElement("td");
+                xhtml.characters(val);
+                xhtml.endElement("td");
+            }
+            xhtml.endElement("tr");
+            xhtml.newline();
+        }
+
+        // Finish
+        xhtml.endElement("table");
+        xhtml.newline();
+    }
+}
diff --git a/tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser b/tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser
index aa8725e..a33a578 100644
--- a/tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser
+++ b/tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser
@@ -58,6 +58,7 @@ org.apache.tika.parser.pkg.CompressorParser
 org.apache.tika.parser.pkg.PackageParser
 org.apache.tika.parser.pkg.RarParser
 org.apache.tika.parser.rtf.RTFParser
+org.apache.tika.parser.sas.SAS7BDATParser
 org.apache.tika.parser.txt.TXTParser
 org.apache.tika.parser.video.FLVParser
 org.apache.tika.parser.wordperfect.QuattroProParser
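
As a quick illustration of how the newly registered parser can be driven, here is a minimal sketch (not part of the commit) that assumes a local .sas7bdat resource such as the testSAS.sas7bdat file used by the unit tests added later in this series:

    import java.io.InputStream;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.parser.sas.SAS7BDATParser;
    import org.apache.tika.sax.BodyContentHandler;

    public class SAS7BDATExample {
        public static void main(String[] args) throws Exception {
            SAS7BDATParser parser = new SAS7BDATParser();
            BodyContentHandler handler = new BodyContentHandler();
            Metadata metadata = new Metadata();

            // The later unit tests in this series also reach this parser via
            // AutoDetectParser, once the mime magic for the format is in place.
            try (InputStream stream = SAS7BDATExample.class.getResourceAsStream(
                    "/test-documents/testSAS.sas7bdat")) {
                parser.parse(stream, handler, metadata, new ParseContext());
            }

            // Table rows come back as plain text through the BodyContentHandler
            System.out.println(metadata.get(Metadata.CONTENT_TYPE));
            System.out.println(handler.toString());
        }
    }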


[tika] 11/30: Handle .epub files using .htm rather than .html extensions for the embedded contents (TIKA-1288)

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit b6399c65a70b768c41febbc228c1cdcdd8ed04b4
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Wed May 9 10:23:09 2018 +0100

    Handle .epub files using .htm rather than .html extensions for the embedded contents (TIKA-1288)
    
    # Conflicts:
    #	CHANGES.txt
---
 tika-parsers/src/main/java/org/apache/tika/parser/epub/EpubParser.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/epub/EpubParser.java b/tika-parsers/src/main/java/org/apache/tika/parser/epub/EpubParser.java
index c4f72de..775b319 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/epub/EpubParser.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/epub/EpubParser.java
@@ -105,7 +105,8 @@ public class EpubParser extends AbstractParser {
                 meta.parse(zip, new DefaultHandler(), metadata, context);
             } else if (entry.getName().endsWith(".opf")) {
                 meta.parse(zip, new DefaultHandler(), metadata, context);
-            } else if (entry.getName().endsWith(".html") || 
+            } else if (entry.getName().endsWith(".htm") || 
+                           entry.getName().endsWith(".html") || 
             		   entry.getName().endsWith(".xhtml")) {
                 content.parse(zip, childHandler, metadata, context);
             }


[tika] 30/30: Move some fixes that didn't make it into 1.18 into 1.19, clean up

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit d811a3a098bd8b9b20e6ab9083f86adc931b91e2
Author: TALLISON <ta...@apache.org>
AuthorDate: Fri Jul 27 09:51:28 2018 -0400

    Move some fixes that didn't make it into 1.18 into 1.19, clean up
---
 CHANGES.txt | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index 822139e..ae7627d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -101,11 +101,6 @@ Release 1.18 - 4/20/2018
      images to be built from source (TIKA-1518).
 
 
-   * For sparse XLSX and XLSB files, always output missing cells to
-     the left of filled ones (matching XLS), and optionally output
-     missing rows on all 3 formats if requested via the
-     OfficeParserContext (TIKA-2479)
-
 Release 1.17 - 12/8/2017
 
   ***NOTE: THIS IS THE LAST VERSION OF TIKA THAT WILL RUN


[tika] 25/30: Formatted columns in the columnar test Excel files

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 65cf9f2e16e3662b886d10a44e0fb005d5827b73
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Fri May 18 15:13:43 2018 +0100

    Formatted columns in the columnar test Excel files
---
 .../test/resources/test-documents/test-columnar.xls | Bin 66048 -> 32768 bytes
 .../resources/test-documents/test-columnar.xlsb     | Bin 0 -> 9691 bytes
 .../resources/test-documents/test-columnar.xlsx     | Bin 6603 -> 10556 bytes
 .../src/test/resources/test-documents/testSAS2.sas  |   3 +++
 4 files changed, 3 insertions(+)

diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xls b/tika-parsers/src/test/resources/test-documents/test-columnar.xls
index cc45372..3f1009c 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.xls and b/tika-parsers/src/test/resources/test-documents/test-columnar.xls differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xlsb b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsb
new file mode 100644
index 0000000..0ce5139
Binary files /dev/null and b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsb differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx
index 22483f1..f1f4dc4 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx and b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx differ
diff --git a/tika-parsers/src/test/resources/test-documents/testSAS2.sas b/tika-parsers/src/test/resources/test-documents/testSAS2.sas
index 96a9121..df52b1a 100644
--- a/tika-parsers/src/test/resources/test-documents/testSAS2.sas
+++ b/tika-parsers/src/test/resources/test-documents/testSAS2.sas
@@ -57,6 +57,9 @@ proc export data=testing label
 putnames=yes;
 run;
 
+/* Due to SAS Limitations, you will need to manually */
+/* style the % and Date/Datetime columns in Excel */
+/* You will also need to save-as XLSB to generate that */
 proc export data=testing label 
   outfile="&outpath./testing.xls"
   dbms=XLS;


[tika] 12/30: Add a test .sas7bdat file with labels, and generate the columnar/tabular test file in a few more formats

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 95a247c7c670b823afec4002127abb31eb8019b4
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Wed May 9 18:19:34 2018 +0100

    Add a test .sas7bdat file with labels, and generate the columnar/tabular test file in a few more formats
---
 .../apache/tika/parser/sas/SAS7BDATParserTest.java |  51 +++++++----
 .../resources/test-documents/test-columnar.sas.xml | 102 +++++++++++++++++++++
 .../test-documents/test-columnar.sas7bdat          | Bin 9216 -> 17408 bytes
 .../resources/test-documents/test-columnar.xpt     | Bin 0 -> 4560 bytes
 .../src/test/resources/test-documents/testSAS2.sas |  48 ++++++++++
 5 files changed, 182 insertions(+), 19 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
index 2657ac2..3bb3e01 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -82,36 +82,36 @@ public class SAS7BDATParserTest extends TikaTest {
         Metadata metadata = new Metadata();
 
         try (InputStream stream = SAS7BDATParserTest.class.getResourceAsStream(
-                "/test-documents/test-columnar.sas7bdat")) {
+                "/test-documents/test-columnar.sas7bdat")) {            
             parser.parse(stream, handler, metadata, new ParseContext());
         }
 
         assertEquals("application/x-sas-data", metadata.get(Metadata.CONTENT_TYPE));
-        assertEquals("SHEET1", metadata.get(TikaCoreProperties.TITLE));
+        assertEquals("TESTING", metadata.get(TikaCoreProperties.TITLE));
 
-        // Fri Mar 06 19:10:19 GMT 2015
-        assertEquals("2015-03-06T19:10:19Z", metadata.get(TikaCoreProperties.CREATED));
-        assertEquals("2015-03-06T19:10:19Z", metadata.get(TikaCoreProperties.MODIFIED));
+        assertEquals("2018-05-09T16:42:04Z", metadata.get(TikaCoreProperties.CREATED));
+        assertEquals("2018-05-09T16:42:04Z", metadata.get(TikaCoreProperties.MODIFIED));
         
         assertEquals("1", metadata.get(PagedText.N_PAGES));
-        assertEquals("5", metadata.get(Database.COLUMN_COUNT));
-        assertEquals("31", metadata.get(Database.ROW_COUNT));
+        assertEquals("7", metadata.get(Database.COLUMN_COUNT));
+        assertEquals("11", metadata.get(Database.ROW_COUNT));
         assertEquals("windows-1252", metadata.get(HttpHeaders.CONTENT_ENCODING));
-        assertEquals("XP_PRO", metadata.get(OfficeOpenXMLExtended.APPLICATION));
-        assertEquals("9.0101M3", metadata.get(OfficeOpenXMLExtended.APP_VERSION));
+        assertEquals("W32_7PRO", metadata.get(OfficeOpenXMLExtended.APPLICATION));
+        assertEquals("9.0301M2", metadata.get(OfficeOpenXMLExtended.APP_VERSION));
         assertEquals("32", metadata.get(MachineMetadata.ARCHITECTURE_BITS));
         assertEquals("Little", metadata.get(MachineMetadata.ENDIAN));
-        assertEquals(Arrays.asList("A","B","C","D","E"),
+        assertEquals(Arrays.asList("Record Number","Square of the Record Number",
+                                   "Description of the Row","Percent Done",
+                                   "Percent Increment","date","datetime"),
                      Arrays.asList(metadata.getValues(Database.COLUMN_NAME)));
         
         String content = handler.toString();
-        assertContains("SHEET1", content);
-        assertContains("A\tB\tC", content);
-        assertContains("Num=0\t", content);
-        assertContains("Num=404242\t", content);
-        assertContains("\t0\t", content);
-        assertContains("\t404242\t", content);
-        assertContains("\t08Feb1904\t", content);
+        assertContains("TESTING", content);
+        assertContains("0\t0\tThis", content);
+        assertContains("2\t4\tThis", content);
+        assertContains("4\t16\tThis", content);
+        assertContains("\t01-01-1960\t", content);
+        assertContains("\t01Jan1960:00:00", content);
     }
 
     @Test
@@ -129,7 +129,20 @@ public class SAS7BDATParserTest extends TikaTest {
         assertContains("<td>This is row", xml);
         assertContains("10</td>", xml);
     }
+    
+    @Test
+    public void testHTML2() throws Exception {
+        XMLResult result = getXML("test-columnar.sas7bdat");
+        String xml = result.xml;
 
-    // TODO Column names vs labels, with a different test file
-    // TODO Columnar consistency test
+        // Check the title came through
+        assertContains("<h1>TESTING</h1>", xml);
+        // Check the headings
+        assertContains("<th title=\"recnum\">Record Number</th>", xml);
+        assertContains("<th title=\"square\">Square of the Record Number</th>", xml);
+        assertContains("<th title=\"date\">date</th>", xml);
+        // Check formatting of dates
+        assertContains("<td>01-01-1960</td>", xml);
+        assertContains("<td>01Jan1960:00:00:10.00</td>", xml);
+    }
 }
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml b/tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml
new file mode 100644
index 0000000..ae12fc5
--- /dev/null
+++ b/tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml
@@ -0,0 +1,102 @@
+<?xml version="1.0" encoding="windows-1252" ?>
+<TABLE>
+   <TESTXML>
+      <recnum>0</recnum>
+      <square>0</square>
+      <desc>This is row            0 of           10</desc>
+      <pctdone>0</pctdone>
+      <pctincr missing="M" />
+      <date>0</date>
+      <datetime>1960-01-01T00:00:01</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>1</recnum>
+      <square>1</square>
+      <desc>This is row            1 of           10</desc>
+      <pctdone>0.1</pctdone>
+      <pctincr>0</pctincr>
+      <date>1</date>
+      <datetime>1960-01-01T00:00:10</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>2</recnum>
+      <square>4</square>
+      <desc>This is row            2 of           10</desc>
+      <pctdone>0.2</pctdone>
+      <pctincr>0.5</pctincr>
+      <date>16</date>
+      <datetime>1960-01-01T00:01:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>3</recnum>
+      <square>9</square>
+      <desc>This is row            3 of           10</desc>
+      <pctdone>0.3</pctdone>
+      <pctincr>0.6666666667</pctincr>
+      <date>81</date>
+      <datetime>1960-01-01T00:16:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>4</recnum>
+      <square>16</square>
+      <desc>This is row            4 of           10</desc>
+      <pctdone>0.4</pctdone>
+      <pctincr>0.75</pctincr>
+      <date>256</date>
+      <datetime>1960-01-01T02:46:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>5</recnum>
+      <square>25</square>
+      <desc>This is row            5 of           10</desc>
+      <pctdone>0.5</pctdone>
+      <pctincr>0.8</pctincr>
+      <date>625</date>
+      <datetime>1960-01-02T03:46:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>6</recnum>
+      <square>36</square>
+      <desc>This is row            6 of           10</desc>
+      <pctdone>0.6</pctdone>
+      <pctincr>0.8333333333</pctincr>
+      <date>1296</date>
+      <datetime>1960-01-12T13:46:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>7</recnum>
+      <square>49</square>
+      <desc>This is row            7 of           10</desc>
+      <pctdone>0.7</pctdone>
+      <pctincr>0.8571428571</pctincr>
+      <date>2401</date>
+      <datetime>1960-04-25T17:46:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>8</recnum>
+      <square>64</square>
+      <desc>This is row            8 of           10</desc>
+      <pctdone>0.8</pctdone>
+      <pctincr>0.875</pctincr>
+      <date>4096</date>
+      <datetime>1963-03-03T09:46:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>9</recnum>
+      <square>81</square>
+      <desc>This is row            9 of           10</desc>
+      <pctdone>0.9</pctdone>
+      <pctincr>0.8888888889</pctincr>
+      <date>6561</date>
+      <datetime>1991-09-09T01:46:40</datetime>
+   </TESTXML>
+   <TESTXML>
+      <recnum>10</recnum>
+      <square>100</square>
+      <desc>This is row           10 of           10</desc>
+      <pctdone>1</pctdone>
+      <pctincr>0.9</pctincr>
+      <date>10000</date>
+      <datetime>2276-11-19T17:46:40</datetime>
+   </TESTXML>
+</TABLE>
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat
index 250b3b8..553c45c 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat and b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xpt b/tika-parsers/src/test/resources/test-documents/test-columnar.xpt
new file mode 100644
index 0000000..d908228
Binary files /dev/null and b/tika-parsers/src/test/resources/test-documents/test-columnar.xpt differ
diff --git a/tika-parsers/src/test/resources/test-documents/testSAS2.sas b/tika-parsers/src/test/resources/test-documents/testSAS2.sas
new file mode 100644
index 0000000..bc8c1fe
--- /dev/null
+++ b/tika-parsers/src/test/resources/test-documents/testSAS2.sas
@@ -0,0 +1,48 @@
+data testing;
+begin=0;
+end=10;
+msg="This is row %x of %y";
+do i = begin to end by 1;
+drop msg begin end i;
+recnum=i;
+square=i*i;
+desc=tranwrd(tranwrd(msg,"%x",i),"%y",end);
+format pctdone percent8.0;
+format pctincr percent7.1;
+pctdone=divide(i,end);
+pctincr=divide(i-1,i);
+format date ddmmyyd10.;
+format datetime datetime.;
+date=i**4;
+datetime=10**i;
+output;
+end;
+label recnum="Record Number"
+      square="Square of the Record Number"
+	  desc="Description of the Row"
+	  pctdone="Percent Done"
+	  pctincr="Percent Increment";
+run;
+
+libname out          '/home/tika/testing/sas';
+libname outxpt XPORT '/home/tika/testing/sas/testing.xpt';
+libname outv6 v6     '/home/tika/testing/sas';
+libname outxml xmlv2 '/home/tika/testing/sas';
+
+data out.testing;
+set testing;
+run;
+data outv6.testv6;
+set testing;
+run;
+data outxml.testxml;
+set testing;
+run;
+proc copy in=out out=outxpt;
+select testing;
+run;
+
+
+proc print data=testing;
+run;
+


[tika] 26/30: TIKA-2479 Update XLS missing cell/row handling to match XLSX and XLSB, add unit test for missing rows, and enable the Columnar tests for the Excel formats

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 8ea6b22f58feb73e3399fc82811cbd738a6f3cd1
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Fri May 18 15:15:32 2018 +0100

    TIKA-2479 Update XLS missing cell/row handling to match XLSX and XLSB, add unit test for missing rows, and enable the Columnar tests for the Excel formats
---
 .../tika/parser/microsoft/ExcelExtractor.java      | 26 ++++++------
 .../org/apache/tika/parser/TabularFormatsTest.java | 47 ++++++++++------------
 .../tika/parser/microsoft/ExcelParserTest.java     | 25 +++++++++++-
 3 files changed, 60 insertions(+), 38 deletions(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ExcelExtractor.java b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ExcelExtractor.java
index 0dc33ee..ff5971a 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ExcelExtractor.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ExcelExtractor.java
@@ -16,7 +16,7 @@
  */
 package org.apache.tika.parser.microsoft;
 
-import java.awt.*;
+import java.awt.Point;
 import java.io.IOException;
 import java.text.NumberFormat;
 import java.util.ArrayList;
@@ -42,7 +42,6 @@ import org.apache.poi.hssf.record.CountryRecord;
 import org.apache.poi.hssf.record.DateWindow1904Record;
 import org.apache.poi.hssf.record.DrawingGroupRecord;
 import org.apache.poi.hssf.record.EOFRecord;
-import org.apache.poi.hssf.record.ExtSSTRecord;
 import org.apache.poi.hssf.record.ExtendedFormatRecord;
 import org.apache.poi.hssf.record.FooterRecord;
 import org.apache.poi.hssf.record.FormatRecord;
@@ -281,7 +280,6 @@ public class ExcelExtractor extends AbstractPOIFSExtractor {
 
         public void processFile(DirectoryNode root, boolean listenForAllRecords)
                 throws IOException, SAXException, TikaException {
-
             // Set up listener and register the records we want to process
             HSSFRequest hssfRequest = new HSSFRequest();
             if (listenForAllRecords) {
@@ -494,15 +492,14 @@ public class ExcelExtractor extends AbstractPOIFSExtractor {
                         HeaderRecord headerRecord = (HeaderRecord) record;
                         addTextCell(record, headerRecord.getText());
                     }
-                	break;
+                    break;
                 	
                 case FooterRecord.sid:
                     if (extractor.officeParserConfig.getIncludeHeadersAndFooters()) {
                         FooterRecord footerRecord = (FooterRecord) record;
                         addTextCell(record, footerRecord.getText());
                     }
-                	break;
-
+                    break;
             }
 
             previousSid = record.getSid();
@@ -599,12 +596,17 @@ public class ExcelExtractor extends AbstractPOIFSExtractor {
             handler.startElement("tr");
             handler.startElement("td");
             for (Map.Entry<Point, Cell> entry : currentSheet.entrySet()) {
-                while (currentRow < entry.getKey().y) {
-                    handler.endElement("td");
-                    handler.endElement("tr");
-                    handler.startElement("tr");
-                    handler.startElement("td");
-                    currentRow++;
+                if (currentRow != entry.getKey().y) {
+                    // We've moved onto a new row, possibly skipping some
+                    do {
+                        handler.endElement("td");
+                        handler.endElement("tr");
+                        handler.startElement("tr");
+                        handler.startElement("td");
+                        currentRow++;
+                    } while (officeParserConfig.getIncludeMissingRows() &&
+                             currentRow < entry.getKey().y);
+                    currentRow = entry.getKey().y;
                     currentColumn = 0;
                 }
 
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 41139e2..4a52118 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -64,8 +64,8 @@ public class TabularFormatsTest extends TikaTest {
                 "87.5%","88.9%","90.0%"
         },
         new Pattern[] {
-                Pattern.compile("01-(01|JAN|Jan)-(60|1960)"),
-                Pattern.compile("02-01-1960"),
+                Pattern.compile("0?1-01-1960"),
+                Pattern.compile("0?2-01-1960"),
                 Pattern.compile("17-01-1960"),
                 Pattern.compile("22-03-1960"),
                 Pattern.compile("13-09-1960"),
@@ -77,17 +77,17 @@ public class TabularFormatsTest extends TikaTest {
                 Pattern.compile("19-05-1987"),
         },
         new Pattern[] {
-             Pattern.compile("01(JAN|Jan)(60|1960):00:00:01(.00)?"),
-             Pattern.compile("01(JAN|Jan)(60|1960):00:00:10(.00)?"),
-             Pattern.compile("01(JAN|Jan)(60|1960):00:01:40(.00)?"),
-             Pattern.compile("01(JAN|Jan)(60|1960):00:16:40(.00)?"),
-             Pattern.compile("01(JAN|Jan)(60|1960):02:46:40(.00)?"),
-             Pattern.compile("02(JAN|Jan)(60|1960):03:46:40(.00)?"),
-             Pattern.compile("12(JAN|Jan)(60|1960):13:46:40(.00)?"),
-             Pattern.compile("25(APR|Apr)(60|1960):17:46:40(.00)?"),
-             Pattern.compile("03(MAR|Mar)(63|1963):09:46:40(.00)?"),
-             Pattern.compile("09(SEP|Sep)(91|1991):01:46:40(.00)?"),
-             Pattern.compile("19(NOV|Nov)(76|2276):17:46:40(.00)?")
+             Pattern.compile("01(JAN|Jan)(60|1960)[:\\s]00:00:01(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960)[:\\s]00:00:10(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960)[:\\s]00:01:40(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960)[:\\s]00:16:40(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960)[:\\s]02:46:40(.00)?"),
+             Pattern.compile("02(JAN|Jan)(60|1960)[:\\s]03:46:40(.00)?"),
+             Pattern.compile("12(JAN|Jan)(60|1960)[:\\s]13:46:40(.00)?"),
+             Pattern.compile("25(APR|Apr)(60|1960)[:\\s]17:46:40(.00)?"),
+             Pattern.compile("03(MAR|Mar)(63|1963)[:\\s]09:46:40(.00)?"),
+             Pattern.compile("09(SEP|Sep)(91|1991)[:\\s]01:46:40(.00)?"),
+             Pattern.compile("19(NOV|Nov)(76|2276)[:\\s]17:46:40(.00)?")
         },
         new Pattern[] {
              Pattern.compile("0?0:00:01(.\\d\\d)?"),
@@ -226,25 +226,22 @@ public class TabularFormatsTest extends TikaTest {
         XMLResult result = getXML("test-columnar.xls");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        // TODO Correctly handle empty cells then enable this test
-        //assertContents(xml, true, false);
+        assertContents(xml, true, false);
     }
     @Test
     public void testXLSX() throws Exception {
         XMLResult result = getXML("test-columnar.xlsx");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        // TODO Fix formatting in export then enable this test
-        //assertContents(xml, true, false);
+        assertContents(xml, true, false);
+    }
+    @Test
+    public void testXLSB() throws Exception {
+        XMLResult result = getXML("test-columnar.xlsb");
+        String xml = result.xml;
+        assertHeaders(xml, false, true, false);
+        assertContents(xml, true, false);
     }
-    // Get a test XLSB file, then enable this unit test
-//    @Test
-//    public void testXLSB() throws Exception {
-//        XMLResult result = getXML("test-columnar.xlsb");
-//        String xml = result.xml;
-//        assertHeaders(xml, false, true, false);
-//        assertContents(xml, true, false);
-//    }
 
     // TODO Fix the ODS test - currently failing with
     // org.xml.sax.SAXException: Namespace http://www.w3.org/1999/xhtml not declared
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/microsoft/ExcelParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/microsoft/ExcelParserTest.java
index 75c972b..818230f 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/microsoft/ExcelParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/microsoft/ExcelParserTest.java
@@ -20,7 +20,6 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
-import java.io.File;
 import java.io.InputStream;
 import java.text.DecimalFormatSymbols;
 import java.util.List;
@@ -81,6 +80,30 @@ public class ExcelParserTest extends TikaTest {
             assertNotContained("9.0", content);
             assertContains("196", content);
             assertNotContained("196.0", content);
+
+
+            // Won't include missing rows by default
+            assertContains("Numbers and their Squares\n\t\tNumber", content);
+            assertContains("\tSquare\n\t\t1", content);
+        }
+
+        // Request with missing rows
+        try (InputStream input = ExcelParserTest.class.getResourceAsStream(
+                "/test-documents/testEXCEL.xls")) {
+            OfficeParserConfig config = new OfficeParserConfig();
+            config.setIncludeMissingRows(true);
+
+            Metadata metadata = new Metadata();
+            ContentHandler handler = new BodyContentHandler();
+            ParseContext context = new ParseContext();
+            context.set(Locale.class, Locale.US);
+            context.set(OfficeParserConfig.class, config);
+            new OfficeParser().parse(input, handler, metadata, context);
+
+            // Will now have the missing rows, each with a single empty cell
+            String content = handler.toString();
+            assertContains("Numbers and their Squares\n\t\n\t\n\t\tNumber", content);
+            assertContains("\tSquare\n\t\n\t\t1", content);
         }
     }
 


[tika] 02/30: Add parso to the OSGi bundle

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 4c5bbaece63d4f60b4c4920a105f5cca3de6be58
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu Apr 26 22:47:17 2018 +0100

    Add parso to the OSGi bundle
---
 tika-bundle/pom.xml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tika-bundle/pom.xml b/tika-bundle/pom.xml
index 584d8e8..691d436 100644
--- a/tika-bundle/pom.xml
+++ b/tika-bundle/pom.xml
@@ -199,6 +199,7 @@
               netcdf4|
               grib|
               cdm|
+              parso|
               httpservices|
               jcip-annotations|
               jmatio|
@@ -233,6 +234,7 @@
               com.github.openjson;resolution:=optional,
               com.google.protobuf;resolution:=optional,
               com.ibm.icu.text;resolution:=optional,
+              com.parso;resolution:=optional,
               com.sleepycat.je;resolution:=optional,
               com.sun.javadoc;resolution:=optional,
               com.sun.xml.bind.marshaller;resolution:=optional,


[tika] 05/30: XHTML improvements

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit f3508f2ff4cb827be315de95dacf3e59ab23c7c8
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Fri Apr 27 00:06:21 2018 +0100

    XHTML improvements
---
 .../org/apache/tika/parser/sas/SAS7BDATParser.java |  8 +--
 .../apache/tika/parser/sas/SAS7BDATParserTest.java | 72 ++++++++++++++++++++++
 2 files changed, 75 insertions(+), 5 deletions(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
index 1cef3cd..4944c12 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
@@ -63,7 +63,7 @@ public class SAS7BDATParser extends AbstractParser {
         xhtml.startDocument();
         
         SasFileReader sas = new SasFileReaderImpl(stream);
-        
+
         // TODO Metadata
 
         // Output as a table
@@ -85,9 +85,7 @@ public class SAS7BDATParser extends AbstractParser {
         while ((row = sas.readNext()) != null) {
             xhtml.startElement("tr");
             for (String val : DataWriterUtil.getRowValues(sas.getColumns(), row)) {
-                xhtml.startElement("td");
-                xhtml.characters(val);
-                xhtml.endElement("td");
+                xhtml.element("td", val);
             }
             xhtml.endElement("tr");
             xhtml.newline();
@@ -95,6 +93,6 @@ public class SAS7BDATParser extends AbstractParser {
 
         // Finish
         xhtml.endElement("table");
-        xhtml.newline();
+        xhtml.endDocument();
     }
 }
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
new file mode 100644
index 0000000..9f57c95
--- /dev/null
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.tika.parser.sas;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+import org.apache.tika.TikaTest;
+import org.apache.tika.metadata.Metadata;
+import org.apache.tika.metadata.TikaCoreProperties;
+import org.apache.tika.parser.AutoDetectParser;
+import org.apache.tika.parser.ParseContext;
+import org.apache.tika.parser.Parser;
+import org.apache.tika.sax.BodyContentHandler;
+import org.apache.tika.sax.WriteOutContentHandler;
+import org.junit.Test;
+import org.xml.sax.ContentHandler;
+import org.xml.sax.helpers.DefaultHandler;
+
+public class SAS7BDATParserTest extends TikaTest {
+    private Parser parser = new SAS7BDATParser();
+    
+    @Test
+    public void testSimpleFile() throws Exception {
+        ContentHandler handler = new BodyContentHandler();
+        Metadata metadata = new Metadata();
+
+        try (InputStream stream = SAS7BDATParserTest.class.getResourceAsStream(
+                "/test-documents/testSAS.sas7bdat")) {
+            parser.parse(stream, handler, metadata, new ParseContext());
+        }
+
+        assertEquals("application/x-sas-data", metadata.get(Metadata.CONTENT_TYPE));
+        // TODO Test the metadata
+        // TODO Test the contents
+    }
+    
+    @Test
+    public void testMultiColumns() throws Exception {
+        Parser parser = new AutoDetectParser(); // Should auto-detect!
+        ContentHandler handler = new BodyContentHandler();
+        Metadata metadata = new Metadata();
+
+        try (InputStream stream = SAS7BDATParserTest.class.getResourceAsStream(
+                "/test-documents/test-columnar.sas7bdat")) {
+            parser.parse(stream, handler, metadata, new ParseContext());
+        }
+
+        assertEquals("application/x-sas-data", metadata.get(Metadata.CONTENT_TYPE));
+        // TODO Test the metadata
+        // TODO Test the contents
+    }
+
+    // TODO HTML contents unit test
+}


[tika] 10/30: Stub a unit test for TIKA-2641

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit aaa78a3d665d8c120e8eadbc26f3d86958042c05
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 3 21:56:07 2018 +0100

    Stub a unit test for TIKA-2641
---
 .../org/apache/tika/parser/TabularFormatsTest.java | 71 ++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
new file mode 100644
index 0000000..61fcca2
--- /dev/null
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.tika.parser;
+
+
+import org.apache.tika.TikaTest;
+import org.junit.Test;
+
+/**
+ * Ensure that our various Table-based formats produce consistent,
+ *  broadly similar output.
+ * This is mostly focused on the XHTML output
+ */
+public class TabularFormatsTest extends TikaTest {
+    protected static final String[] headers = new String[] {
+        "String (Num=)","Number","Date","Datetime","Number"
+    };
+    /**
+     * Expected values, by <em>column</em>
+     */
+    protected static final String[][] table = new String[][] {
+        // TODO All values
+        new String[] {
+                "Num=0"
+        },
+        new String[] {
+                "0.0"
+        },
+        new String[] {
+                "1899-12-30"
+        },
+        new String[] {
+                "1900-01-01 11:00:00"
+        },
+        new String[] {
+                ""
+        }
+    };
+
+    protected void assertHeaders(String xml, boolean isTH) {
+        // TODO Check for the first row, then TR or TH
+    }
+    protected void assertContents(String xml, boolean hasHeader) {
+        // TODO Check the rows
+    }
+
+    @Test
+    public void testCSV() throws Exception {
+        XMLResult result = getXML("test-columnar.csv");
+        String xml = result.xml;
+
+        assertHeaders(xml, false);
+        assertContents(xml, true);
+    }
+    // TODO SAS7BDAT
+    // TODO Other formats
+}


[tika] 18/30: Not all formats know about %s, dates not completely consistent either...

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 507f59ff2df6e3bcded201700c284d37a3b4cc62
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 16:33:45 2018 +0100

    Not all formats know about %s, dates not completely consistent either...
---
 .../org/apache/tika/parser/TabularFormatsTest.java | 33 ++++++++++++++++++----
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 7330f6a..80a7f56 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -20,6 +20,8 @@ package org.apache.tika.parser;
 import static org.junit.Assert.assertEquals;
 
 import java.util.Arrays;
+import java.util.List;
+import java.util.Locale;
 
 import org.apache.tika.TikaTest;
 import org.junit.Test;
@@ -56,7 +58,7 @@ public class TabularFormatsTest extends TikaTest {
                 "60%","70%","80%","90%","100%"
         },
         new String[] {
-                "M","0.0%","50.0%","66.7%",
+                "","0.0%","50.0%","66.7%",
                 "75.0%","80.0%","83.3%","85.7%",
                 "87.5%","88.9%","90.0%"
         },
@@ -100,6 +102,15 @@ public class TabularFormatsTest extends TikaTest {
             table[2][i] = "This is row " + i + " of 10";
         }
     }
+    // Which columns hold percentages? Not all parsers
+    //  correctly format these...
+    protected static final List<Integer> percentageColumns = 
+            Arrays.asList(new Integer[] { 3, 4 });
+    // Which columns hold dates? Some parsers output
+    //  bits of the month in lower case, some all upper, eg JAN vs Jan
+    protected static final List<Integer> dateColumns = 
+            Arrays.asList(new Integer[] { 5, 6 });
+    // TODO Handle 60 vs 1960
     
     protected static String[] toCells(String row, boolean isTH) {
         // Split into cells, ignoring stuff before first cell
@@ -152,7 +163,7 @@ public class TabularFormatsTest extends TikaTest {
             }
         }
     }
-    protected void assertContents(String xml, boolean hasHeader) {
+    protected void assertContents(String xml, boolean hasHeader, boolean doesPercents) {
         // Ignore anything before the first <tr>
         // Ignore the header row if there is one
         int ignores = 1;
@@ -178,8 +189,14 @@ public class TabularFormatsTest extends TikaTest {
                          table.length, cells.length);
 
             for (int cn=0; cn<table.length; cn++) {
+                String val = cells[cn];
+
+                // If the parser doesn't know about % formats,
+                //  skip the cell if the column in a % one
+                if (!doesPercents && percentageColumns.contains(cn)) continue;
+                if (dateColumns.contains(cn)) val = val.toUpperCase(Locale.ROOT);
+
                 // Ignore cell attributes
-                String val = cells.length > (cn-1) ? cells[cn] : "";
                 if (! val.isEmpty()) val = val.split(">")[1];
                 // Check
                 assertEquals("Wrong text in row " + (rn+1) + " and column " + (cn+1),
@@ -193,21 +210,25 @@ public class TabularFormatsTest extends TikaTest {
         XMLResult result = getXML("test-columnar.sas7bdat");
         String xml = result.xml;
         assertHeaders(xml, true, true, true);
-        //assertContents(xml, true);
+        // TODO Wait for https://github.com/epam/parso/issues/28 to be fixed
+        //  then check the % formats again
+//        assertContents(xml, true, false);
     }
     @Test
     public void testXLS() throws Exception {
         XMLResult result = getXML("test-columnar.xls");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        //assertContents(xml, true);
+        // TODO Correctly handle empty cells then test
+        //assertContents(xml, true, false);
     }
     @Test
     public void testXLSX() throws Exception {
         XMLResult result = getXML("test-columnar.xlsx");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        //assertContents(xml, true);
+        // TODO Correctly handle empty cells then test
+        //assertContents(xml, true, false);
     }
     // TODO Test ODS
     


[tika] 17/30: Ensure that empty cells are still output

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 5d3dd69442605d4cec45a60cf46545f0baf8186a
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 16:26:22 2018 +0100

    Ensure that empty cells are still output
---
 .../src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java    | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
index 121d958..8b28644 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
@@ -134,7 +134,11 @@ public class SAS7BDATParser extends AbstractParser {
         while ((row = sas.readNext()) != null) {
             xhtml.startElement("tr");
             for (String val : DataWriterUtil.getRowValues(sas.getColumns(), row)) {
-                xhtml.element("td", val);
+                // Use explicit start/end, rather than element, to 
+                //  ensure that empty cells still get output
+                xhtml.startElement("td");
+                xhtml.characters(val);
+                xhtml.endElement("td");
             }
             xhtml.endElement("tr");
             xhtml.newline();

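A minimal sketch of the effect of that change, using only the standard Tika SAX helpers (XHTMLContentHandler, ToXMLContentHandler); the three-cell row is made-up sample data, not taken from the test files:

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.sax.ToXMLContentHandler;
    import org.apache.tika.sax.XHTMLContentHandler;

    public class EmptyCellSketch {
        public static void main(String[] args) throws Exception {
            ToXMLContentHandler out = new ToXMLContentHandler();
            XHTMLContentHandler xhtml = new XHTMLContentHandler(out, new Metadata());
            xhtml.startDocument();
            xhtml.startElement("table");
            xhtml.startElement("tr");
            for (String val : new String[] { "1", "", "3" }) {
                // Explicit start/end, as in the commit above: the middle (empty)
                // value still produces its own td element in the serialized XHTML
                xhtml.startElement("td");
                xhtml.characters(val);
                xhtml.endElement("td");
            }
            xhtml.endElement("tr");
            xhtml.endElement("table");
            xhtml.endDocument();
            System.out.println(out.toString());
        }
    }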

[tika] 03/30: Test Columnar files - SAS7BDAT and CSV (other spreadsheet+DB formats still required)

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit fa5f282bdf2b0c7a57e83254fe4a547c9c4a76bd
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu Apr 26 18:37:14 2018 +0100

    Test Columnar files - SAS7BDAT and CSV (other spreadsheet+DB formats still required)
---
 .../resources/test-documents/test-columnar.csv     |  25 +++++++++++++++++++++
 .../test-documents/test-columnar.sas7bdat          | Bin 0 -> 9216 bytes
 2 files changed, 25 insertions(+)

diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.csv b/tika-parsers/src/test/resources/test-documents/test-columnar.csv
new file mode 100644
index 0000000..8de4097
--- /dev/null
+++ b/tika-parsers/src/test/resources/test-documents/test-columnar.csv
@@ -0,0 +1,25 @@
+"String (Num=)","Number","Date","Datetime","Number"
+Num=0,0.0,1899-12-30,1900-01-01 11:00:00,
+Num=0.1,0.1,1899-12-30,1899-12-30 02:24:00,0.1
+Num=0.25,0.25,1899-12-30,1899-12-30 06:00:00,0.25
+Num=0.5,0.5,1899-12-30,1899-12-30 12:00:00,0.5
+Num=1,1.0,1900-01-01,1900-01-01 00:00:00,
+Num=1.1,1.1,1900-01-01,1900-01-01 02:24:00,1.1
+Num=1.2,1.2,1900-01-01,1900-01-01 04:48:00,1.2
+Num=1.5,1.5,1900-01-01,1900-01-01 12:00:00,1.5
+Num=2,2.0,1900-01-02,1900-01-02 00:00:00,2.0
+Num=2.5,2.5,1900-01-02,1900-01-02 12:00:00,2.5
+Num=3,3.0,1900-01-03,1900-01-03 00:00:00,3.0
+Num=4,4.0,1900-01-04,1900-01-04 00:00:00,4.0
+Num=5,5.0,1900-01-05,1900-01-05 00:00:00,5.0
+Num=10,10.0,1900-01-10,1900-01-10 00:00:00,10.0
+Num=15,15.0,1900-01-15,1900-01-15 00:00:00,15.0
+Num=25,25.0,1900-01-25,1900-01-25 00:00:00,25.0
+Num=50,50.0,1900-02-19,1900-02-19 00:00:00,50.0
+Num=60,60.0,1900-02-28,1900-02-28 00:00:00,60.0
+Num=65,65.0,1900-03-05,1900-03-05 00:00:00,65.0
+Num=100,100.0,1900-04-09,1900-04-09 00:00:00,100.0
+Num=120,120.0,1900-04-29,1900-04-29 00:00:00,120.0
+Num=1500,1500.0,1904-02-08,1904-02-08 00:00:00,1500.0
+Num=20222,20222.0,1955-05-13,1955-05-13 00:00:00,20222.0
+Num=404242,404242.0,3006-10-10,3006-10-10 00:00:00,404242.0
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat
new file mode 100644
index 0000000..250b3b8
Binary files /dev/null and b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat differ

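For anyone wanting to poke at these fixtures outside the JUnit harness, a rough sketch using the stock AutoDetectParser; the resource path is the file added above, and it assumes the SAS7BDAT parser from the earlier commits is on the classpath:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.AutoDetectParser;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.sax.ToXMLContentHandler;

    public class ColumnarSmoke {
        public static void main(String[] args) throws Exception {
            AutoDetectParser parser = new AutoDetectParser();
            ToXMLContentHandler handler = new ToXMLContentHandler();
            Metadata metadata = new Metadata();
            try (InputStream stream = Files.newInputStream(Paths.get(
                    "tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat"))) {
                parser.parse(stream, handler, metadata, new ParseContext());
            }
            // SAS7BDAT comes back as an XHTML table; the CSV has no dedicated
            // parser, so it only yields raw text via the plain-text path
            System.out.println(metadata.get(Metadata.CONTENT_TYPE));
            System.out.println(handler.toString());
        }
    }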

[tika] 20/30: Add disabled, currently failing ODS test

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit d871b1f7f0703a8fe57cfc0a79218c24d16cfad3
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 17:13:24 2018 +0100

    Add disabled, currently failing ODS test
---
 .../java/org/apache/tika/parser/TabularFormatsTest.java |  14 +++++++++++---
 .../src/test/resources/test-documents/test-columnar.ods | Bin 0 -> 12854 bytes
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 119c9cd..ea326bd 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -226,7 +226,7 @@ public class TabularFormatsTest extends TikaTest {
         XMLResult result = getXML("test-columnar.xls");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        // TODO Correctly handle empty cells then test
+        // TODO Correctly handle empty cells then enable this test
         //assertContents(xml, true, false);
     }
     @Test
@@ -234,10 +234,18 @@ public class TabularFormatsTest extends TikaTest {
         XMLResult result = getXML("test-columnar.xlsx");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        // TODO Correctly handle empty cells then test
+        // TODO Correctly handle empty cells then enable this test
         //assertContents(xml, true, false);
     }
-    // TODO Test OpenDocument ODS test
+    // TODO Fix the ODS test - currently failing with
+    // org.xml.sax.SAXException: Namespace http://www.w3.org/1999/xhtml not declared
+//    @Test
+//    public void testODS() throws Exception {
+//        XMLResult result = getXML("test-columnar.ods");
+//        String xml = result.xml;
+//        assertHeaders(xml, false, true, false);
+//        assertContents(xml, true, true);
+//    }
     
     // TODO Test other formats, eg Database formats
 
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.ods b/tika-parsers/src/test/resources/test-documents/test-columnar.ods
new file mode 100644
index 0000000..067ca18
Binary files /dev/null and b/tika-parsers/src/test/resources/test-documents/test-columnar.ods differ


[tika] 08/30: SAS7BDAT html tests

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit c31d40f4478c6a7a83d535c3bb3cc2fe4678b3d6
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 3 18:58:14 2018 +0100

    SAS7BDAT html tests
---
 .../org/apache/tika/parser/sas/SAS7BDATParser.java     |  5 ++++-
 .../org/apache/tika/parser/sas/SAS7BDATParserTest.java | 18 +++++++++++++++++-
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
index 56260ca..121d958 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
@@ -119,8 +119,11 @@ public class SAS7BDATParser extends AbstractParser {
         // Do the column headings
         xhtml.startElement("tr");
         for (Column c : sas.getColumns()) {
+            String label = c.getLabel();
+            if (label == null || label.isEmpty()) label = c.getName();
+
             xhtml.startElement("th", "title", c.getName());
-            xhtml.characters(c.getLabel());
+            xhtml.characters(label);
             xhtml.endElement("th");
         }
         xhtml.endElement("tr");
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
index c2a74a7..37be73b 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -118,6 +118,22 @@ public class SAS7BDATParserTest extends TikaTest {
         assertContains("\t08Feb1904\t", content);
     }
 
-    // TODO HTML contents unit test
+    @Test
+    public void testHTML() throws Exception {
+        XMLResult result = getXML("testSAS.sas7bdat");
+        String xml = result.xml;
+
+        // Check the title came through
+        assertContains("<h1>TESTING</h1>", xml);
+        // Check the headings
+        assertContains("<th title=\"recnum\">recnum</th>", xml);
+        assertContains("<th title=\"label\">label</th>", xml);
+        // Check some rows
+        assertContains("<td>3</td>", xml);
+        assertContains("<td>This is row", xml);
+        assertContains("10</td>", xml);
+    }
+
     // TODO Column names vs labels, with a different test file
+    // TODO Columnar consistency test
 }


[tika] 15/30: Check header contents, check data rows count, add XLSX test

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 65af2d99be50c00fedd6261e11df9f60bd05d7ad
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 15:13:43 2018 +0100

    Check header contents, check data rows count, add XLSX test
---
 .../org/apache/tika/parser/TabularFormatsTest.java | 77 +++++++++++++++++-----
 1 file changed, 61 insertions(+), 16 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 8574d37..023f49d 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -31,7 +31,7 @@ import org.junit.Test;
  */
 public class TabularFormatsTest extends TikaTest {
     protected static final String[] columnNames = new String[] {
-         "recnum","square","desc","pctdone","pctinc",
+         "recnum","square","desc","pctdone","pctincr",
          "date","datetime","time"
     };
     protected static final String[] columnLabels = new String[] {
@@ -49,8 +49,9 @@ public class TabularFormatsTest extends TikaTest {
              "0","1","2","3","4","5","6","7","8","9","10"
         },
         new String[] {
-             "0","1","4" // etc
+             "0","1","4","9","16","25","36","49","64","81","100"
         },
+/*        
         new String[] {  // etc
                 "01-01-1960"
         },
@@ -59,37 +60,72 @@ public class TabularFormatsTest extends TikaTest {
         new String[] {
                 ""
         }
+*/
     };
-
-    protected void assertHeaders(String xml, boolean isTH, boolean hasLabel, boolean hasName) {
-        // Find the first row
-        int splitAt = xml.indexOf("</tr>");
-        String hRow = xml.substring(0, splitAt);
-        splitAt = xml.indexOf("<tr>");
-        hRow = hRow.substring(splitAt+4);
-
+    
+    protected static String[] toCells(String row, boolean isTH) {
         // Split into cells, ignoring stuff before first cell
         String[] cells;
         if (isTH) {
-            cells = hRow.split("<th");
+            cells = row.split("<th");
         } else {
-            cells = hRow.split("<td");
+            cells = row.split("<td");
         }
         cells = Arrays.copyOfRange(cells, 1, cells.length);
         for (int i=0; i<cells.length; i++) {
-            splitAt = cells[i].lastIndexOf("</");
+            int splitAt = cells[i].lastIndexOf("</");
             cells[i] = cells[i].substring(0, splitAt).trim();
         }
+        return cells;
+    }
+
+    protected void assertHeaders(String xml, boolean isTH, boolean hasLabel, boolean hasName) {
+        // Find the first row
+        int splitAt = xml.indexOf("</tr>");
+        String hRow = xml.substring(0, splitAt);
+        splitAt = xml.indexOf("<tr>");
+        hRow = hRow.substring(splitAt+4);
+
+        // Split into cells, ignoring stuff before first cell
+        String[] cells = toCells(hRow, isTH);
 
         // Check we got the right number
         assertEquals("Wrong number of cells in header row " + hRow,
                      columnLabels.length, cells.length);
 
         // Check we got the right stuff
-        // TODO
+        for (int i=0; i<cells.length; i++) {
+            if (hasLabel && hasName) {
+                assertContains("title=\"" + columnNames[i] + "\"", cells[i]); 
+                assertContains(">" + columnLabels[i], cells[i]); 
+            } else if (hasName) {
+                assertContains(">" + columnNames[i], cells[i]); 
+            } else {
+                assertContains(">" + columnLabels[i], cells[i]); 
+            }
+        }
     }
     protected void assertContents(String xml, boolean hasHeader) {
-        // TODO Check the rows
+        // Ignore anything before the first <tr>
+        // Ignore the header row if there is one
+        int ignores = 1;
+        if (hasHeader) ignores++;
+
+        // Split into rows, and discard the row closing (and anything after)
+        String[] rows = xml.split("<tr>");
+        rows = Arrays.copyOfRange(rows, ignores, rows.length);
+        for (int i=0; i<rows.length; i++) {
+            rows[i] = rows[i].split("</tr>")[0].trim();
+        }
+
+        // Check we got the right number of rows
+        for (int cn=0; cn<table.length; cn++) {
+            assertEquals("Wrong number of rows found compared to column " + (cn+1),
+                         table[cn].length, rows.length);
+        }
+
+        // Check each row's values
+        // TODO
     }
 
     @Test
@@ -106,7 +142,16 @@ public class TabularFormatsTest extends TikaTest {
         assertHeaders(xml, false, true, false);
         assertContents(xml, true);
     }
-    // TODO Other formats
+    @Test
+    public void testXLSX() throws Exception {
+        XMLResult result = getXML("test-columnar.xlsx");
+        String xml = result.xml;
+        assertHeaders(xml, false, true, false);
+        assertContents(xml, true);
+    }
+    // TODO Test ODS
+    
+    // TODO Test other formats, eg Database formats
 
     /**
      * Note - we don't have a dedicated CSV parser


[tika] 23/30: TIKA-2479 Output missing left/mid cells in XLSX and XLSB, and optionally also missing rows

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit dcfbe5a2d578696dfba309dec400e977f047cfb2
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 17 23:07:04 2018 +0100

    TIKA-2479 Output missing left/mid cells in XLSX and XLSB, and optionally also missing rows
---
 .../ooxml/XSSFBExcelExtractorDecorator.java        |  2 +-
 .../ooxml/XSSFExcelExtractorDecorator.java         | 35 ++++++++++++++++++----
 .../org/apache/tika/parser/TabularFormatsTest.java | 11 ++++++-
 3 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFBExcelExtractorDecorator.java b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFBExcelExtractorDecorator.java
index 0a511c2..0367afc 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFBExcelExtractorDecorator.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFBExcelExtractorDecorator.java
@@ -118,7 +118,7 @@ public class XSSFBExcelExtractorDecorator extends XSSFExcelExtractorDecorator {
             addDrawingHyperLinks(sheetPart);
             sheetParts.add(sheetPart);
 
-            SheetTextAsHTML sheetExtractor = new SheetTextAsHTML(config.getIncludeHeadersAndFooters(), xhtml);
+            SheetTextAsHTML sheetExtractor = new SheetTextAsHTML(config, xhtml);
             XSSFBCommentsTable comments = iter.getXSSFBSheetComments();
 
             // Start, and output the sheet name
diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFExcelExtractorDecorator.java b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFExcelExtractorDecorator.java
index bf6505b..3141148 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFExcelExtractorDecorator.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/microsoft/ooxml/XSSFExcelExtractorDecorator.java
@@ -25,7 +25,6 @@ import java.util.List;
 import java.util.Locale;
 import java.util.Map;
 
-import org.apache.poi.POIXMLDocument;
 import org.apache.poi.POIXMLTextExtractor;
 import org.apache.poi.hssf.extractor.ExcelExtractor;
 import org.apache.poi.openxml4j.exceptions.InvalidFormatException;
@@ -39,6 +38,7 @@ import org.apache.poi.openxml4j.opc.PackagingURIHelper;
 import org.apache.poi.openxml4j.opc.TargetMode;
 import org.apache.poi.ss.usermodel.DataFormatter;
 import org.apache.poi.ss.usermodel.HeaderFooter;
+import org.apache.poi.ss.util.CellReference;
 import org.apache.poi.xssf.eventusermodel.ReadOnlySharedStringsTable;
 import org.apache.poi.xssf.eventusermodel.XSSFReader;
 import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler;
@@ -57,6 +57,7 @@ import org.apache.tika.metadata.Metadata;
 import org.apache.tika.metadata.TikaCoreProperties;
 import org.apache.tika.metadata.TikaMetadataKeys;
 import org.apache.tika.parser.ParseContext;
+import org.apache.tika.parser.microsoft.OfficeParserConfig;
 import org.apache.tika.parser.microsoft.TikaExcelDataFormatter;
 import org.apache.tika.sax.OfflineContentHandler;
 import org.apache.tika.sax.XHTMLContentHandler;
@@ -146,8 +147,7 @@ public class XSSFExcelExtractorDecorator extends AbstractOOXMLExtractor {
         }
 
         while (iter.hasNext()) {
-
-            SheetTextAsHTML sheetExtractor = new SheetTextAsHTML(config.getIncludeHeadersAndFooters(), xhtml);
+            SheetTextAsHTML sheetExtractor = new SheetTextAsHTML(config, xhtml);
             PackagePart sheetPart = null;
             try (InputStream stream = iter.next()) {
                 sheetPart = iter.getSheetPart();
@@ -396,11 +396,15 @@ public class XSSFExcelExtractorDecorator extends AbstractOOXMLExtractor {
     protected static class SheetTextAsHTML implements SheetContentsHandler {
         private XHTMLContentHandler xhtml;
         private final boolean includeHeadersFooters;
+        private final boolean includeMissingRows;
         protected List<String> headers;
         protected List<String> footers;
+        private int lastSeenRow = -1;
+        private int lastSeenCol = -1;
 
-        protected SheetTextAsHTML(boolean includeHeaderFooters, XHTMLContentHandler xhtml) {
-            this.includeHeadersFooters = includeHeaderFooters;
+        protected SheetTextAsHTML(OfficeParserConfig config, XHTMLContentHandler xhtml) {
+            this.includeHeadersFooters = config.getIncludeHeadersAndFooters();
+            this.includeMissingRows = config.getIncludeMissingRows();
             this.xhtml = xhtml;
             headers = new ArrayList<String>();
             footers = new ArrayList<String>();
@@ -408,7 +412,19 @@ public class XSSFExcelExtractorDecorator extends AbstractOOXMLExtractor {
 
         public void startRow(int rowNum) {
             try {
+                // Missing rows, if desired, with a single empty row
+                if (includeMissingRows && rowNum > (lastSeenRow+1)) {
+                    for (int rn=lastSeenRow+1; rn<rowNum; rn++) {
+                        xhtml.startElement("tr");
+                        xhtml.startElement("td");
+                        xhtml.endElement("td");
+                        xhtml.endElement("tr");
+                    }
+                }
+
+                // Start the new row
                 xhtml.startElement("tr");
+                lastSeenCol = -1;
             } catch (SAXException e) {
             }
         }
@@ -422,6 +438,15 @@ public class XSSFExcelExtractorDecorator extends AbstractOOXMLExtractor {
 
         public void cell(String cellRef, String formattedValue, XSSFComment comment) {
             try {
+                // Handle any missing cells
+                int colNum = (new CellReference(cellRef)).getCol();
+                for (int cn=lastSeenCol+1; cn<colNum; cn++) {
+                    xhtml.startElement("td");
+                    xhtml.endElement("td");
+                }
+                lastSeenCol = colNum;
+
+                // Start this cell
                 xhtml.startElement("td");
 
                 // Main cell contents
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index ea326bd..41139e2 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -234,9 +234,18 @@ public class TabularFormatsTest extends TikaTest {
         XMLResult result = getXML("test-columnar.xlsx");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        // TODO Correctly handle empty cells then enable this test
+        // TODO Fix formatting in export then enable this test
         //assertContents(xml, true, false);
     }
+    // Get a test XLSB file, then enable this unit test
+//    @Test
+//    public void testXLSB() throws Exception {
+//        XMLResult result = getXML("test-columnar.xlsb");
+//        String xml = result.xml;
+//        assertHeaders(xml, false, true, false);
+//        assertContents(xml, true, false);
+//    }
+
     // TODO Fix the ODS test - currently failing with
     // org.xml.sax.SAXException: Namespace http://www.w3.org/1999/xhtml not declared
 //    @Test

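From the consumer side, requesting the new missing-row behaviour goes through an OfficeParserConfig on the ParseContext; a rough sketch, assuming the setter mirrors the getIncludeMissingRows() getter used above (setter name not shown in this diff):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.AutoDetectParser;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.parser.microsoft.OfficeParserConfig;
    import org.apache.tika.sax.ToXMLContentHandler;

    public class MissingRowsSketch {
        public static void main(String[] args) throws Exception {
            OfficeParserConfig config = new OfficeParserConfig();
            config.setIncludeMissingRows(true);   // assumed setter name

            ParseContext context = new ParseContext();
            context.set(OfficeParserConfig.class, config);

            AutoDetectParser parser = new AutoDetectParser();
            ToXMLContentHandler handler = new ToXMLContentHandler();
            try (InputStream stream = Files.newInputStream(Paths.get("test-columnar.xlsx"))) {
                parser.parse(stream, handler, new Metadata(), context);
            }
            // With the option on, each gap between populated rows is emitted as
            // a tr containing a single empty td, per startRow() above
            System.out.println(handler.toString());
        }
    }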

[tika] 27/30: Move some fixes that didn't make it into 1.18 into 1.19

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 060bfa5571293d2aafef62747b0f14a7c71042c2
Author: TALLISON <ta...@apache.org>
AuthorDate: Fri Jul 27 09:49:10 2018 -0400

    Move some fixes that didn't make it into 1.18 into 1.19
---
 CHANGES.txt | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index ddb7aec..ae7627d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -19,6 +19,18 @@ Release 1.19 ???
    * Add the RecursiveParserWrapperHandler to improve the RecursiveParserWrapper
      API slightly (TIKA-2644).
 
+   * Support for SAS7BDAT data files (TIKA-2462)
+
+   * Handle .epub files using .htm rather than .html extensions for the
+     embedded contents (TIKA-1288)
+
+   * Mime magic for ACES Images (TIKA-2628) and DPX Images (TIKA-2629)
+
+   * For sparse XLSX and XLSB files, always output missing cells to
+     the left of filled ones (matching XLS), and optionally output
+     missing rows on all 3 formats if requested via the
+     OfficeParserContext (TIKA-2479)
+
 
 Release 1.18 - 4/20/2018
 
@@ -88,17 +100,6 @@ Release 1.18 - 4/20/2018
    * Added local Docker image build using dockerfile-maven-plugin to allow
      images to be built from source (TIKA-1518).
 
-   * Support for SAS7BDAT data files (TIKA-2462)
-
-   * Handle .epub files using .htm rather than .html extensions for the
-     embedded contents (TIKA-1288)
-
-   * Mime magic for ACES Images (TIKA-2628) and DPX Images (TIKA-2629)
-
-   * For sparse XLSX and XLSB files, always output missing cells to
-     the left of filled ones (matching XLS), and optionally output
-     missing rows on all 3 formats if requested via the
-     OfficeParserContext (TIKA-2479)
 
 Release 1.17 - 12/8/2017
 


[tika] 01/30: Depend on Parso for SAS7BDAT support

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 6afdf198d33beb12553957fcbc3f47830a45a942
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu Apr 26 22:24:10 2018 +0100

    Depend on Parso for SAS7BDAT support
---
 tika-parsers/pom.xml | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tika-parsers/pom.xml b/tika-parsers/pom.xml
index 6b51f30..e9e5b83 100644
--- a/tika-parsers/pom.xml
+++ b/tika-parsers/pom.xml
@@ -176,6 +176,11 @@
       <version>${tukaani.version}</version>
     </dependency>
     <dependency>
+      <groupId>com.epam</groupId>
+      <artifactId>parso</artifactId>
+      <version>${parso.version}</version>
+    </dependency>
+    <dependency>
       <groupId>org.brotli</groupId>
       <artifactId>dec</artifactId>
       <version>${brotli.version}</version>

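The Parso calls the new parser relies on in the later commits boil down to a reader, its column list, and a row-formatting helper; a rough standalone sketch of that usage, with the file path as a placeholder:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import com.epam.parso.Column;
    import com.epam.parso.DataWriterUtil;
    import com.epam.parso.SasFileReader;
    import com.epam.parso.impl.SasFileReaderImpl;

    public class ParsoSketch {
        public static void main(String[] args) throws Exception {
            try (InputStream is = Files.newInputStream(Paths.get("test-columnar.sas7bdat"))) {
                SasFileReader sas = new SasFileReaderImpl(is);
                // Column names and labels, later used for the <th> headings
                for (Column c : sas.getColumns()) {
                    System.out.println(c.getName() + " / " + c.getLabel());
                }
                // Rows, formatted to strings the same way SAS7BDATParser does
                Object[] row;
                while ((row = sas.readNext()) != null) {
                    System.out.println(DataWriterUtil.getRowValues(sas.getColumns(), row));
                }
            }
        }
    }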

[tika] 14/30: CSV assert as best we can (no dedicated parser), start on XLS and SAS7BDAT consistency tests

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit b92f752d358a2942bfda81829eae506cb584f715
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 13:48:03 2018 +0100

    CSV assert as best we can (no dedicated parser), start on XLS and SAS7BDAT consistency tests
---
 .../org/apache/tika/parser/TabularFormatsTest.java | 65 ++++++++++++++++++++--
 1 file changed, 59 insertions(+), 6 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 4dc7336..8574d37 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -17,6 +17,10 @@
 package org.apache.tika.parser;
 
 
+import static org.junit.Assert.assertEquals;
+
+import java.util.Arrays;
+
 import org.apache.tika.TikaTest;
 import org.junit.Test;
 
@@ -57,21 +61,70 @@ public class TabularFormatsTest extends TikaTest {
         }
     };
 
-    protected void assertHeaders(String xml, boolean isTH) {
-        // TODO Check for the first row, then TR or TH
+    protected void assertHeaders(String xml, boolean isTH, boolean hasLabel, boolean hasName) {
+        // Find the first row
+        int splitAt = xml.indexOf("</tr>");
+        String hRow = xml.substring(0, splitAt);
+        splitAt = xml.indexOf("<tr>");
+        hRow = hRow.substring(splitAt+4);
+
+        // Split into cells, ignoring stuff before first cell
+        String[] cells;
+        if (isTH) {
+            cells = hRow.split("<th");
+        } else {
+            cells = hRow.split("<td");
+        }
+        cells = Arrays.copyOfRange(cells, 1, cells.length);
+        for (int i=0; i<cells.length; i++) {
+            splitAt = cells[i].lastIndexOf("</");
+            cells[i] = cells[i].substring(0, splitAt).trim();
+        }
+
+        // Check we got the right number
+        assertEquals("Wrong number of cells in header row " + hRow,
+                     columnLabels.length, cells.length);
+
+        // Check we got the right stuff
+        // TODO
     }
     protected void assertContents(String xml, boolean hasHeader) {
         // TODO Check the rows
     }
 
     @Test
+    public void testSAS7BDAT() throws Exception {
+        XMLResult result = getXML("test-columnar.sas7bdat");
+        String xml = result.xml;
+        assertHeaders(xml, true, true, true);
+        assertContents(xml, true);
+    }
+    @Test
+    public void testXLS() throws Exception {
+        XMLResult result = getXML("test-columnar.xls");
+        String xml = result.xml;
+        assertHeaders(xml, false, true, false);
+        assertContents(xml, true);
+    }
+    // TODO Other formats
+
+    /**
+     * Note - we don't have a dedicated CSV parser
+     * 
+     * This means we don't get proper HTML out...
+     */
+    @Test
     public void testCSV() throws Exception {
         XMLResult result = getXML("test-columnar.csv");
         String xml = result.xml;
 
-        assertHeaders(xml, false);
-        assertContents(xml, true);
+        for (String label : columnLabels) {
+            assertContains(label, xml);
+        }
+        for (String[] vals : table) {
+            for (String val : vals) {
+                assertContains(val, xml);
+            }
+        }
     }
-    // TODO SAS7BDAT
-    // TODO Other formats
 }


[tika] 19/30: Use patterns to handle the date format variations

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 81caa71785d3e76df7bee93e71627c4f90a29323
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 16:59:09 2018 +0100

    Use patterns to handle the date format variations
---
 .../org/apache/tika/parser/TabularFormatsTest.java | 101 ++++++++++++---------
 1 file changed, 56 insertions(+), 45 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 80a7f56..119c9cd 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -18,10 +18,11 @@ package org.apache.tika.parser;
 
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
 import java.util.Arrays;
 import java.util.List;
-import java.util.Locale;
+import java.util.regex.Pattern;
 
 import org.apache.tika.TikaTest;
 import org.junit.Test;
@@ -45,14 +46,14 @@ public class TabularFormatsTest extends TikaTest {
     /**
      * Expected values, by <em>column</em>
      */
-    protected static final String[][] table = new String[][] {
+    protected static final Object[][] table = new Object[][] {
         new String[] {
              "0","1","2","3","4","5","6","7","8","9","10"
         },
         new String[] {
              "0","1","4","9","16","25","36","49","64","81","100"
         },
-        new String[] {}, // Done later
+        new String[] {}, // Generated later
         new String[] {
                 "0%","10%","20%","30%","40%","50%",
                 "60%","70%","80%","90%","100%"
@@ -62,37 +63,44 @@ public class TabularFormatsTest extends TikaTest {
                 "75.0%","80.0%","83.3%","85.7%",
                 "87.5%","88.9%","90.0%"
         },
-        new String[] {
-             "01-01-1960", "02-01-1960", "17-01-1960",
-             "22-03-1960", "13-09-1960", "17-09-1961",
-             "20-07-1963", "29-07-1966", "20-03-1971",
-             "18-12-1977", "19-05-1987"
+        new Pattern[] {
+                Pattern.compile("01-(01|JAN|Jan)-(60|1960)"),
+                Pattern.compile("02-01-1960"),
+                Pattern.compile("17-01-1960"),
+                Pattern.compile("22-03-1960"),
+                Pattern.compile("13-09-1960"),
+                Pattern.compile("17-09-1961"),
+                Pattern.compile("20-07-1963"),
+                Pattern.compile("29-07-1966"),
+                Pattern.compile("20-03-1971"),
+                Pattern.compile("18-12-1977"),
+                Pattern.compile("19-05-1987"),
         },
-        new String[] {
-             "01JAN60:00:00:01",
-             "01JAN60:00:00:10",
-             "01JAN60:00:01:40",
-             "01JAN60:00:16:40",
-             "01JAN60:02:46:40",
-             "02JAN60:03:46:40",
-             "12JAN60:13:46:40",
-             "25APR60:17:46:40",
-             "03MAR63:09:46:40",
-             "09SEP91:01:46:40",
-             "19NOV76:17:46:40"
+        new Pattern[] {
+             Pattern.compile("01(JAN|Jan)(60|1960):00:00:01(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960):00:00:10(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960):00:01:40(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960):00:16:40(.00)?"),
+             Pattern.compile("01(JAN|Jan)(60|1960):02:46:40(.00)?"),
+             Pattern.compile("02(JAN|Jan)(60|1960):03:46:40(.00)?"),
+             Pattern.compile("12(JAN|Jan)(60|1960):13:46:40(.00)?"),
+             Pattern.compile("25(APR|Apr)(60|1960):17:46:40(.00)?"),
+             Pattern.compile("03(MAR|Mar)(63|1963):09:46:40(.00)?"),
+             Pattern.compile("09(SEP|Sep)(91|1991):01:46:40(.00)?"),
+             Pattern.compile("19(NOV|Nov)(76|2276):17:46:40(.00)?")
         },
-        new String[] {
-             "0:00:01",
-             "0:00:03",
-             "0:00:09",
-             "0:00:27",
-             "0:01:21",
-             "0:04:03",
-             "0:12:09",
-             "0:36:27",
-             "1:49:21",
-             "5:28:03",
-             "16:24:09"
+        new Pattern[] {
+             Pattern.compile("0?0:00:01(.\\d\\d)?"),
+             Pattern.compile("0?0:00:03(.\\d\\d)?"),
+             Pattern.compile("0?0:00:09(.\\d\\d)?"),
+             Pattern.compile("0?0:00:27(.\\d\\d)?"),
+             Pattern.compile("0?0:01:21(.\\d\\d)?"),
+             Pattern.compile("0?0:04:03(.\\d\\d)?"),
+             Pattern.compile("0?0:12:09(.\\d\\d)?"),
+             Pattern.compile("0?0:36:27(.\\d\\d)?"),
+             Pattern.compile("0?1:49:21(.\\d\\d)?"),
+             Pattern.compile("0?5:28:03(.\\d\\d)?"),
+             Pattern.compile("16:24:09(.\\d\\d)?")
         }
     };
     static {
@@ -106,11 +114,6 @@ public class TabularFormatsTest extends TikaTest {
     //  correctly format these...
     protected static final List<Integer> percentageColumns = 
             Arrays.asList(new Integer[] { 3, 4 });
-    // Which columns hold dates? Some parsers output
-    //  bits of the month in lower case, some all upper, eg JAN vs Jan
-    protected static final List<Integer> dateColumns = 
-            Arrays.asList(new Integer[] { 5, 6 });
-    // TODO Handle 60 vs 1960
     
     protected static String[] toCells(String row, boolean isTH) {
         // Split into cells, ignoring stuff before first cell
@@ -194,13 +197,17 @@ public class TabularFormatsTest extends TikaTest {
                 // If the parser doesn't know about % formats,
                 //  skip the cell if the column in a % one
                 if (!doesPercents && percentageColumns.contains(cn)) continue;
-                if (dateColumns.contains(cn)) val = val.toUpperCase(Locale.ROOT);
 
                 // Ignore cell attributes
                 if (! val.isEmpty()) val = val.split(">")[1];
                 // Check
-                assertEquals("Wrong text in row " + (rn+1) + " and column " + (cn+1),
-                             table[cn][rn], val);
+                String error = "Wrong text in row " + (rn+1) + " and column " + 
+                               (cn+1) + " - " + table[cn][rn] + " vs " + val;
+                if (table[cn][rn] instanceof String) {
+                    assertEquals(error, table[cn][rn], val);
+                } else {
+                    assertTrue(error, ((Pattern)table[cn][rn]).matcher(val).matches());
+                }
             }
         }
     }
@@ -212,7 +219,7 @@ public class TabularFormatsTest extends TikaTest {
         assertHeaders(xml, true, true, true);
         // TODO Wait for https://github.com/epam/parso/issues/28 to be fixed
         //  then check the % formats again
-//        assertContents(xml, true, false);
+        assertContents(xml, true, false);
     }
     @Test
     public void testXLS() throws Exception {
@@ -230,7 +237,7 @@ public class TabularFormatsTest extends TikaTest {
         // TODO Correctly handle empty cells then test
         //assertContents(xml, true, false);
     }
-    // TODO Test ODS
+    // TODO Test OpenDocument ODS test
     
     // TODO Test other formats, eg Database formats
 
@@ -249,9 +256,13 @@ public class TabularFormatsTest extends TikaTest {
         for (String label : columnLabels) {
             assertContains(label, xml);
         }
-        for (String[] vals : table) {
-            for (String val : vals) {
-                assertContains(val, xml);
+        for (Object[] vals : table) {
+            for (Object val : vals) {
+                if (val instanceof String)
+                    assertContains((String)val, xml);
+                else if (val instanceof Pattern)
+                    assertTrue("Not matched: " + val, 
+                            ((Pattern)val).matcher(xml).find());
             }
         }
     }

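As a small illustration of why the expected values switched from plain strings to patterns: the same underlying date cell can come back as 01-01-1960 from one parser and 01-JAN-60 from another, and the regex from the table above accepts both:

    import java.util.regex.Pattern;

    public class DatePatternSketch {
        public static void main(String[] args) {
            // First entry of the date column in the table above
            Pattern p = Pattern.compile("01-(01|JAN|Jan)-(60|1960)");
            System.out.println(p.matcher("01-01-1960").matches()); // true
            System.out.println(p.matcher("01-JAN-60").matches());  // true
            System.out.println(p.matcher("01-Feb-60").matches());  // false
        }
    }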

[tika] 16/30: Remaining values to check

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 3f2b7a5b390176ecf4e3a7f6e258e1ed87523396
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 15:41:16 2018 +0100

    Remaining values to check
---
 .../org/apache/tika/parser/TabularFormatsTest.java | 84 +++++++++++++++++++---
 1 file changed, 73 insertions(+), 11 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 023f49d..7330f6a 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -44,24 +44,62 @@ public class TabularFormatsTest extends TikaTest {
      * Expected values, by <em>column</em>
      */
     protected static final String[][] table = new String[][] {
-        // TODO All values
         new String[] {
              "0","1","2","3","4","5","6","7","8","9","10"
         },
         new String[] {
              "0","1","4","9","16","25","36","49","64","81","100"
         },
-/*        
-        new String[] {  // etc
-                "01-01-1960"
+        new String[] {}, // Done later
+        new String[] {
+                "0%","10%","20%","30%","40%","50%",
+                "60%","70%","80%","90%","100%"
+        },
+        new String[] {
+                "M","0.0%","50.0%","66.7%",
+                "75.0%","80.0%","83.3%","85.7%",
+                "87.5%","88.9%","90.0%"
         },
-        new String[] {  // etc
+        new String[] {
+             "01-01-1960", "02-01-1960", "17-01-1960",
+             "22-03-1960", "13-09-1960", "17-09-1961",
+             "20-07-1963", "29-07-1966", "20-03-1971",
+             "18-12-1977", "19-05-1987"
         },
         new String[] {
-                ""
+             "01JAN60:00:00:01",
+             "01JAN60:00:00:10",
+             "01JAN60:00:01:40",
+             "01JAN60:00:16:40",
+             "01JAN60:02:46:40",
+             "02JAN60:03:46:40",
+             "12JAN60:13:46:40",
+             "25APR60:17:46:40",
+             "03MAR63:09:46:40",
+             "09SEP91:01:46:40",
+             "19NOV76:17:46:40"
+        },
+        new String[] {
+             "0:00:01",
+             "0:00:03",
+             "0:00:09",
+             "0:00:27",
+             "0:01:21",
+             "0:04:03",
+             "0:12:09",
+             "0:36:27",
+             "1:49:21",
+             "5:28:03",
+             "16:24:09"
         }
-*/
     };
+    static {
+        // Row text in 3rd column
+        table[2] = new String[table[0].length];
+        for (int i=0; i<table[0].length; i++) {
+            table[2][i] = "This is row " + i + " of 10";
+        }
+    }
     
     protected static String[] toCells(String row, boolean isTH) {
         // Split into cells, ignoring stuff before first cell
@@ -72,9 +110,18 @@ public class TabularFormatsTest extends TikaTest {
             cells = row.split("<td");
         }
         cells = Arrays.copyOfRange(cells, 1, cells.length);
+
+        // Ignore the closing tag onwards, and normalise whitespace
         for (int i=0; i<cells.length; i++) {
+            cells[i] = cells[i].trim();
+            if (cells[i].equals("/>")) {
+                cells[i] = "";
+                continue;
+            }
+
             int splitAt = cells[i].lastIndexOf("</");
             cells[i] = cells[i].substring(0, splitAt).trim();
+            cells[i] = cells[i].replaceAll("\\s+", " ");
         }
         return cells;
     }
@@ -125,7 +172,20 @@ public class TabularFormatsTest extends TikaTest {
         }
 
         // Check each row's values
-        // TODO
+        for (int rn=0; rn<rows.length; rn++) {
+            String[] cells = toCells(rows[rn], false);
+            assertEquals("Wrong number of values in row " + (rn+1),
+                         table.length, cells.length);
+
+            for (int cn=0; cn<table.length; cn++) {
+                // Ignore cell attributes
+                String val = cells.length > (cn-1) ? cells[cn] : "";
+                if (! val.isEmpty()) val = val.split(">")[1];
+                // Check
+                assertEquals("Wrong text in row " + (rn+1) + " and column " + (cn+1),
+                             table[cn][rn], val);
+            }
+        }
     }
 
     @Test
@@ -133,21 +193,21 @@ public class TabularFormatsTest extends TikaTest {
         XMLResult result = getXML("test-columnar.sas7bdat");
         String xml = result.xml;
         assertHeaders(xml, true, true, true);
-        assertContents(xml, true);
+        //assertContents(xml, true);
     }
     @Test
     public void testXLS() throws Exception {
         XMLResult result = getXML("test-columnar.xls");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        assertContents(xml, true);
+        //assertContents(xml, true);
     }
     @Test
     public void testXLSX() throws Exception {
         XMLResult result = getXML("test-columnar.xlsx");
         String xml = result.xml;
         assertHeaders(xml, false, true, false);
-        assertContents(xml, true);
+        //assertContents(xml, true);
     }
     // TODO Test ODS
     
@@ -162,6 +222,8 @@ public class TabularFormatsTest extends TikaTest {
     public void testCSV() throws Exception {
         XMLResult result = getXML("test-columnar.csv");
         String xml = result.xml;
+        // Normalise whitespace before testing
+        xml = xml.replaceAll("\\s+", " ");
 
         for (String label : columnLabels) {
             assertContains(label, xml);


[tika] 29/30: Add the other jackcess jar to the bundle

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 3da39b8b5e03fb834894349275ed35db8af41bd9
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Fri May 18 15:35:06 2018 +0100

    Add the other jackcess jar to the bundle
---
 tika-bundle/pom.xml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tika-bundle/pom.xml b/tika-bundle/pom.xml
index 691d436..57d2ba4 100644
--- a/tika-bundle/pom.xml
+++ b/tika-bundle/pom.xml
@@ -171,6 +171,7 @@
               curvesapi|
               xmlbeans|
               jackcess|
+              jackcess-encrypt|
               commons-lang|
               tagsoup|
               asm|


[tika] 21/30: Mime magic for DPX and ACES, thanks to Andreas Meier (TIKA-2628 and TIKA-2629)

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit de53df98a9523955fbdbaeefce00c13eb1b719b3
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 22:18:36 2018 +0100

    Mime magic for DPX and ACES, thanks to Andreas Meier (TIKA-2628 and TIKA-2629)
    
    # Conflicts:
    #	CHANGES.txt
---
 .../resources/org/apache/tika/mime/tika-mimetypes.xml | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/tika-core/src/main/resources/org/apache/tika/mime/tika-mimetypes.xml b/tika-core/src/main/resources/org/apache/tika/mime/tika-mimetypes.xml
index 77db752..3c4b4ca 100644
--- a/tika-core/src/main/resources/org/apache/tika/mime/tika-mimetypes.xml
+++ b/tika-core/src/main/resources/org/apache/tika/mime/tika-mimetypes.xml
@@ -5019,6 +5019,15 @@
     <glob pattern="*.xyz"/>
   </mime-type>
 
+  <mime-type type="image/aces">
+    <_comment>ACES Image Container File</_comment>
+    <magic priority="50">
+      <match value="0x762F310102000000" type="string" offset="0"/>
+      <match value="0x762F310102040000" type="string" offset="0"/>
+    </magic>
+    <glob pattern="*.exr"/>
+  </mime-type> 
+
   <mime-type type="image/bmp">
     <alias type="image/x-bmp"/>
     <alias type="image/x-ms-bmp"/>
@@ -5068,6 +5077,16 @@
     <glob pattern="*.cgm"/>
   </mime-type>
 
+  <mime-type type="image/x-dpx">
+    <acronym>DPX</acronym>
+    <_comment>Digital Picture Exchange from SMPTE</_comment>
+    <magic priority="50">
+      <match value="SDPX" type="string" offset="0" />
+      <match value="XPDS" type="string" offset="0" />
+    </magic>
+    <glob pattern="*.dpx"/>
+  </mime-type>
+
   <mime-type type="image/emf">
     <alias type="image/x-emf"/>
     <alias type="application/x-emf"/>

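With these entries in place, detection can be sanity-checked via the Tika facade; the byte array below is just the four DPX signature bytes padded out, not a real image, and the expected results assume this tika-mimetypes.xml is the one being loaded:

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    import org.apache.tika.Tika;

    public class MagicSketch {
        public static void main(String[] args) {
            Tika tika = new Tika();
            // "SDPX" at offset 0 is one of the two DPX magic values added above
            byte[] dpxHeader = Arrays.copyOf(
                    "SDPX".getBytes(StandardCharsets.US_ASCII), 64);
            System.out.println(tika.detect(dpxHeader));     // expect image/x-dpx
            System.out.println(tika.detect("picture.dpx")); // expect image/x-dpx via the glob
        }
    }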

[tika] 13/30: Add a time column to the test columnar files

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 7f68ebb4b80e4ef6437796d592d19f0f354adb92
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 10 11:35:04 2018 +0100

    Add a time column to the test columnar files
---
 .../org/apache/tika/parser/TabularFormatsTest.java |  22 +++++++-----
 .../apache/tika/parser/sas/SAS7BDATParserTest.java |   8 ++---
 .../resources/test-documents/test-columnar.csv     |  37 +++++++--------------
 .../resources/test-documents/test-columnar.sas.xml |  11 ++++++
 .../test-documents/test-columnar.sas7bdat          | Bin 17408 -> 17408 bytes
 .../resources/test-documents/test-columnar.xls     | Bin 0 -> 6656 bytes
 .../resources/test-documents/test-columnar.xlsx    | Bin 0 -> 4941 bytes
 .../resources/test-documents/test-columnar.xpt     | Bin 4560 -> 4720 bytes
 .../src/test/resources/test-documents/testSAS2.sas |  27 ++++++++++++---
 9 files changed, 64 insertions(+), 41 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
index 61fcca2..4dc7336 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/TabularFormatsTest.java
@@ -26,25 +26,31 @@ import org.junit.Test;
  * This is mostly focused on the XHTML output
  */
 public class TabularFormatsTest extends TikaTest {
-    protected static final String[] headers = new String[] {
-        "String (Num=)","Number","Date","Datetime","Number"
+    protected static final String[] columnNames = new String[] {
+         "recnum","square","desc","pctdone","pctinc",
+         "date","datetime","time"
     };
+    protected static final String[] columnLabels = new String[] {
+        "Record Number","Square of the Record Number",
+        "Description of the Row","Percent Done",
+        "Percent Increment","date","datetime","time"    
+    };
+
     /**
      * Expected values, by <em>column</em>
      */
     protected static final String[][] table = new String[][] {
         // TODO All values
         new String[] {
-                "Num=0"
+             "0","1","2","3","4","5","6","7","8","9","10"
         },
         new String[] {
-                "0.0"
+             "0","1","4" // etc
         },
-        new String[] {
-                "1899-12-30"
+        new String[] {  // etc
+                "01-01-1960"
         },
-        new String[] {
-                "1900-01-01 11:00:00"
+        new String[] {  // etc
         },
         new String[] {
                 ""
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
index 3bb3e01..610ffc3 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -89,11 +89,11 @@ public class SAS7BDATParserTest extends TikaTest {
         assertEquals("application/x-sas-data", metadata.get(Metadata.CONTENT_TYPE));
         assertEquals("TESTING", metadata.get(TikaCoreProperties.TITLE));
 
-        assertEquals("2018-05-09T16:42:04Z", metadata.get(TikaCoreProperties.CREATED));
-        assertEquals("2018-05-09T16:42:04Z", metadata.get(TikaCoreProperties.MODIFIED));
+        assertEquals("2018-05-09T17:59:33Z", metadata.get(TikaCoreProperties.CREATED));
+        assertEquals("2018-05-09T17:59:33Z", metadata.get(TikaCoreProperties.MODIFIED));
         
         assertEquals("1", metadata.get(PagedText.N_PAGES));
-        assertEquals("7", metadata.get(Database.COLUMN_COUNT));
+        assertEquals("8", metadata.get(Database.COLUMN_COUNT));
         assertEquals("11", metadata.get(Database.ROW_COUNT));
         assertEquals("windows-1252", metadata.get(HttpHeaders.CONTENT_ENCODING));
         assertEquals("W32_7PRO", metadata.get(OfficeOpenXMLExtended.APPLICATION));
@@ -102,7 +102,7 @@ public class SAS7BDATParserTest extends TikaTest {
         assertEquals("Little", metadata.get(MachineMetadata.ENDIAN));
         assertEquals(Arrays.asList("Record Number","Square of the Record Number",
                                    "Description of the Row","Percent Done",
-                                   "Percent Increment","date","datetime"),
+                                   "Percent Increment","date","datetime","time"),
                      Arrays.asList(metadata.getValues(Database.COLUMN_NAME)));
         
         String content = handler.toString();
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.csv b/tika-parsers/src/test/resources/test-documents/test-columnar.csv
index 8de4097..5ef57bb 100644
--- a/tika-parsers/src/test/resources/test-documents/test-columnar.csv
+++ b/tika-parsers/src/test/resources/test-documents/test-columnar.csv
@@ -1,25 +1,12 @@
-"String (Num=)","Number","Date","Datetime","Number"
-Num=0,0.0,1899-12-30,1900-01-01 11:00:00,
-Num=0.1,0.1,1899-12-30,1899-12-30 02:24:00,0.1
-Num=0.25,0.25,1899-12-30,1899-12-30 06:00:00,0.25
-Num=0.5,0.5,1899-12-30,1899-12-30 12:00:00,0.5
-Num=1,1.0,1900-01-01,1900-01-01 00:00:00,
-Num=1.1,1.1,1900-01-01,1900-01-01 02:24:00,1.1
-Num=1.2,1.2,1900-01-01,1900-01-01 04:48:00,1.2
-Num=1.5,1.5,1900-01-01,1900-01-01 12:00:00,1.5
-Num=2,2.0,1900-01-02,1900-01-02 00:00:00,2.0
-Num=2.5,2.5,1900-01-02,1900-01-02 12:00:00,2.5
-Num=3,3.0,1900-01-03,1900-01-03 00:00:00,3.0
-Num=4,4.0,1900-01-04,1900-01-04 00:00:00,4.0
-Num=5,5.0,1900-01-05,1900-01-05 00:00:00,5.0
-Num=10,10.0,1900-01-10,1900-01-10 00:00:00,10.0
-Num=15,15.0,1900-01-15,1900-01-15 00:00:00,15.0
-Num=25,25.0,1900-01-25,1900-01-25 00:00:00,25.0
-Num=50,50.0,1900-02-19,1900-02-19 00:00:00,50.0
-Num=60,60.0,1900-02-28,1900-02-28 00:00:00,60.0
-Num=65,65.0,1900-03-05,1900-03-05 00:00:00,65.0
-Num=100,100.0,1900-04-09,1900-04-09 00:00:00,100.0
-Num=120,120.0,1900-04-29,1900-04-29 00:00:00,120.0
-Num=1500,1500.0,1904-02-08,1904-02-08 00:00:00,1500.0
-Num=20222,20222.0,1955-05-13,1955-05-13 00:00:00,20222.0
-Num=404242,404242.0,3006-10-10,3006-10-10 00:00:00,404242.0
+"Record Number","Square of the Record Number","Description of the Row","Percent Done","Percent Increment","date","datetime","time"
+0,0,This is row            0 of           10,0%,M,01-01-1960,01JAN60:00:00:01,0:00:01
+1,1,This is row            1 of           10,10%,0.0%,02-01-1960,01JAN60:00:00:10,0:00:03
+2,4,This is row            2 of           10,20%,50.0%,17-01-1960,01JAN60:00:01:40,0:00:09
+3,9,This is row            3 of           10,30%,66.7%,22-03-1960,01JAN60:00:16:40,0:00:27
+4,16,This is row            4 of           10,40%,75.0%,13-09-1960,01JAN60:02:46:40,0:01:21
+5,25,This is row            5 of           10,50%,80.0%,17-09-1961,02JAN60:03:46:40,0:04:03
+6,36,This is row            6 of           10,60%,83.3%,20-07-1963,12JAN60:13:46:40,0:12:09
+7,49,This is row            7 of           10,70%,85.7%,29-07-1966,25APR60:17:46:40,0:36:27
+8,64,This is row            8 of           10,80%,87.5%,20-03-1971,03MAR63:09:46:40,1:49:21
+9,81,This is row            9 of           10,90%,88.9%,18-12-1977,09SEP91:01:46:40,5:28:03
+10,100,This is row           10 of           10,100%,90.0%,19-05-1987,19NOV76:17:46:40,16:24:09
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml b/tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml
index ae12fc5..45df965 100644
--- a/tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml
+++ b/tika-parsers/src/test/resources/test-documents/test-columnar.sas.xml
@@ -8,6 +8,7 @@
       <pctincr missing="M" />
       <date>0</date>
       <datetime>1960-01-01T00:00:01</datetime>
+      <time>00:00:01</time>
    </TESTXML>
    <TESTXML>
       <recnum>1</recnum>
@@ -17,6 +18,7 @@
       <pctincr>0</pctincr>
       <date>1</date>
       <datetime>1960-01-01T00:00:10</datetime>
+      <time>00:00:03</time>
    </TESTXML>
    <TESTXML>
       <recnum>2</recnum>
@@ -26,6 +28,7 @@
       <pctincr>0.5</pctincr>
       <date>16</date>
       <datetime>1960-01-01T00:01:40</datetime>
+      <time>00:00:09</time>
    </TESTXML>
    <TESTXML>
       <recnum>3</recnum>
@@ -35,6 +38,7 @@
       <pctincr>0.6666666667</pctincr>
       <date>81</date>
       <datetime>1960-01-01T00:16:40</datetime>
+      <time>00:00:27</time>
    </TESTXML>
    <TESTXML>
       <recnum>4</recnum>
@@ -44,6 +48,7 @@
       <pctincr>0.75</pctincr>
       <date>256</date>
       <datetime>1960-01-01T02:46:40</datetime>
+      <time>00:01:21</time>
    </TESTXML>
    <TESTXML>
       <recnum>5</recnum>
@@ -53,6 +58,7 @@
       <pctincr>0.8</pctincr>
       <date>625</date>
       <datetime>1960-01-02T03:46:40</datetime>
+      <time>00:04:03</time>
    </TESTXML>
    <TESTXML>
       <recnum>6</recnum>
@@ -62,6 +68,7 @@
       <pctincr>0.8333333333</pctincr>
       <date>1296</date>
       <datetime>1960-01-12T13:46:40</datetime>
+      <time>00:12:09</time>
    </TESTXML>
    <TESTXML>
       <recnum>7</recnum>
@@ -71,6 +78,7 @@
       <pctincr>0.8571428571</pctincr>
       <date>2401</date>
       <datetime>1960-04-25T17:46:40</datetime>
+      <time>00:36:27</time>
    </TESTXML>
    <TESTXML>
       <recnum>8</recnum>
@@ -80,6 +88,7 @@
       <pctincr>0.875</pctincr>
       <date>4096</date>
       <datetime>1963-03-03T09:46:40</datetime>
+      <time>01:49:21</time>
    </TESTXML>
    <TESTXML>
       <recnum>9</recnum>
@@ -89,6 +98,7 @@
       <pctincr>0.8888888889</pctincr>
       <date>6561</date>
       <datetime>1991-09-09T01:46:40</datetime>
+      <time>05:28:03</time>
    </TESTXML>
    <TESTXML>
       <recnum>10</recnum>
@@ -98,5 +108,6 @@
       <pctincr>0.9</pctincr>
       <date>10000</date>
       <datetime>2276-11-19T17:46:40</datetime>
+      <time>16:24:09</time>
    </TESTXML>
 </TABLE>
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat
index 553c45c..33ee412 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat and b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xls b/tika-parsers/src/test/resources/test-documents/test-columnar.xls
new file mode 100644
index 0000000..1d7b2cf
Binary files /dev/null and b/tika-parsers/src/test/resources/test-documents/test-columnar.xls differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx
new file mode 100644
index 0000000..58ffd47
Binary files /dev/null and b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xpt b/tika-parsers/src/test/resources/test-documents/test-columnar.xpt
index d908228..bbb59b5 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.xpt and b/tika-parsers/src/test/resources/test-documents/test-columnar.xpt differ
diff --git a/tika-parsers/src/test/resources/test-documents/testSAS2.sas b/tika-parsers/src/test/resources/test-documents/testSAS2.sas
index bc8c1fe..96a9121 100644
--- a/tika-parsers/src/test/resources/test-documents/testSAS2.sas
+++ b/tika-parsers/src/test/resources/test-documents/testSAS2.sas
@@ -2,6 +2,7 @@ data testing;
 begin=0;
 end=10;
 msg="This is row %x of %y";
+
 do i = begin to end by 1;
 drop msg begin end i;
 recnum=i;
@@ -11,10 +12,13 @@ format pctdone percent8.0;
 format pctincr percent7.1;
 pctdone=divide(i,end);
 pctincr=divide(i-1,i);
+/* Days / Seconds since Epoch / Seconds since midnight */
 format date ddmmyyd10.;
 format datetime datetime.;
+format time time.;
 date=i**4;
 datetime=10**i;
+time=3**i;
 output;
 end;
 label recnum="Record Number"
@@ -24,10 +28,11 @@ label recnum="Record Number"
 	  pctincr="Percent Increment";
 run;
 
-libname out          '/home/tika/testing/sas';
-libname outxpt XPORT '/home/tika/testing/sas/testing.xpt';
-libname outv6 v6     '/home/tika/testing/sas';
-libname outxml xmlv2 '/home/tika/testing/sas';
+%let outpath = /home/tika/testing/sas;
+libname out          "&outpath";
+libname outxpt XPORT "&outpath./testing.xpt";
+libname outv6 v6     "&outpath";
+libname outxml xmlv2 "&outpath";
 
 data out.testing;
 set testing;
@@ -46,3 +51,17 @@ run;
 proc print data=testing;
 run;
 
+proc export data=testing label
+  outfile="&outpath./testing.csv"
+  dbms=CSV REPLACE;
+putnames=yes;
+run;
+
+proc export data=testing label 
+  outfile="&outpath./testing.xls"
+  dbms=XLS;
+run;
+proc export data=testing label
+  outfile="&outpath./testing.xlsx"
+  dbms=XLSX;
+run;
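
For reference, a minimal sketch of how the regenerated test-columnar.sas7bdat could be checked for the new "time" column outside of JUnit, using only classes that already appear in the test diff above; the standalone main wrapper and the classpath-relative resource path are illustrative assumptions, not part of the patch:

    import java.io.InputStream;
    import java.util.Arrays;

    import org.apache.tika.metadata.Database;
    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.AutoDetectParser;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.sax.BodyContentHandler;

    public class ColumnarSasCheck {
        public static void main(String[] args) throws Exception {
            AutoDetectParser parser = new AutoDetectParser();
            Metadata metadata = new Metadata();
            // Resource path is an assumption: the file lives under
            // src/test/resources/test-documents/ on the test classpath
            try (InputStream stream = ColumnarSasCheck.class.getResourceAsStream(
                    "/test-documents/test-columnar.sas7bdat")) {
                parser.parse(stream, new BodyContentHandler(), metadata, new ParseContext());
            }
            // After this change the parser should report 8 columns, the last one "time"
            System.out.println(metadata.get(Database.COLUMN_COUNT));
            System.out.println(Arrays.asList(metadata.getValues(Database.COLUMN_NAME)));
        }
    }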


[tika] 07/30: More SAS7BDAT metadata

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit 39e119413717cd618a203d4889214d044a5dd222
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Thu May 3 16:52:27 2018 +0100

    More SAS7BDAT metadata
---
 .../java/org/apache/tika/metadata/Database.java    |  5 +-
 .../org/apache/tika/parser/sas/SAS7BDATParser.java | 82 ++++++++++------------
 .../apache/tika/parser/sas/SAS7BDATParserTest.java | 28 +++++++-
 3 files changed, 67 insertions(+), 48 deletions(-)

diff --git a/tika-core/src/main/java/org/apache/tika/metadata/Database.java b/tika-core/src/main/java/org/apache/tika/metadata/Database.java
index 7f91a37..190fe89 100644
--- a/tika-core/src/main/java/org/apache/tika/metadata/Database.java
+++ b/tika-core/src/main/java/org/apache/tika/metadata/Database.java
@@ -20,6 +20,7 @@ public interface Database {
     final static String PREFIX = "database"+Metadata.NAMESPACE_PREFIX_DELIMITER;
 
     Property TABLE_NAME = Property.externalTextBag(PREFIX+"table_name");
-    Property COLUMN_COUNT = Property.externalText(PREFIX+"column_count");
+    Property ROW_COUNT = Property.externalInteger(PREFIX+"row_count");
+    Property COLUMN_COUNT = Property.externalInteger(PREFIX+"column_count");
     Property COLUMN_NAME = Property.externalTextBag(PREFIX+"column_name");
-}
\ No newline at end of file
+}
diff --git a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
index 5992e15..56260ca 100644
--- a/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
+++ b/tika-parsers/src/main/java/org/apache/tika/parser/sas/SAS7BDATParser.java
@@ -22,11 +22,16 @@ import java.util.Collections;
 import java.util.Set;
 
 import org.apache.tika.exception.TikaException;
+import org.apache.tika.metadata.Database;
+import org.apache.tika.metadata.HttpHeaders;
 import org.apache.tika.metadata.Metadata;
+import org.apache.tika.metadata.OfficeOpenXMLExtended;
+import org.apache.tika.metadata.PagedText;
 import org.apache.tika.metadata.TikaCoreProperties;
 import org.apache.tika.mime.MediaType;
 import org.apache.tika.parser.AbstractParser;
 import org.apache.tika.parser.ParseContext;
+import org.apache.tika.parser.executable.MachineMetadata;
 import org.apache.tika.sax.XHTMLContentHandler;
 import org.xml.sax.ContentHandler;
 import org.xml.sax.SAXException;
@@ -71,51 +76,40 @@ public class SAS7BDATParser extends AbstractParser {
         metadata.set(TikaCoreProperties.CREATED, props.getDateCreated());
         metadata.set(TikaCoreProperties.MODIFIED, props.getDateModified());
 
-        // TODO What about these?
-/*
-u64 - false
-compressionMethod - null
-endianness - 1
-encoding - windows-1252
-sessionEncoding - null
-fileType - DATA
-sasRelease - 9.0101M3
-serverType - XP_PRO
-osName - 
-osType - 
-headerLength - 1024
-pageLength - 8192
-pageCount - 1
-rowLength - 96
-rowCount - 31
-mixPageRowCount - 69
-columnsCount - 5
-*/
+        metadata.set(PagedText.N_PAGES,     (int)props.getPageCount());
+        metadata.set(Database.COLUMN_COUNT, (int)props.getColumnsCount());
+        metadata.set(Database.ROW_COUNT,    (int)props.getRowCount());
+
+        // TODO Can we find more general properties for these / move
+        //  these to more general places?
+        metadata.set(HttpHeaders.CONTENT_ENCODING, props.getEncoding());
+        metadata.set(OfficeOpenXMLExtended.APPLICATION, props.getServerType());
+        metadata.set(OfficeOpenXMLExtended.APP_VERSION, props.getSasRelease());
+        metadata.set(MachineMetadata.ARCHITECTURE_BITS, 
+                     props.isU64() ? "64" : "32");
+        metadata.set(MachineMetadata.ENDIAN, props.getEndianness() == 1 ? 
+                     MachineMetadata.Endian.LITTLE.getName() : 
+                     MachineMetadata.Endian.BIG.getName());
+
+        // The following SAS Metadata fields are currently ignored:
+        // compressionMethod
+        // sessionEncoding
+        // fileType
+        // osName
+        // osType
+        // mixPageRowCount
+        // headerLength
+        // pageLength
+        // rowLength
+
+        // Process the column metadata
+        // TODO Find keys to record the format and the type
+        for (Column c : sas.getColumns()) {
+            String name = c.getLabel();
+            if (name == null || name.isEmpty()) name = c.getName();
+            metadata.add(Database.COLUMN_NAME, name);
+        }
 
-        // TODO Should we output more Column info as metadata?
-/*
-5 Columns defined:
- 1 - A
-  Label: A
-  Format: $58.
-  Size 58 of java.lang.String
- 2 - B
-  Label: B
-  Format: 
-  Size 8 of java.lang.Number
- 3 - C
-  Label: C
-  Format: DATE8.
-  Size 8 of java.lang.Number
- 4 - D
-  Label: D
-  Format: DATETIME17.
-  Size 8 of java.lang.Number
- 5 - E
-  Label: E
-  Format: 
-  Size 8 of java.lang.Number
-*/
 
         // Output file contents as a table
         xhtml.element("h1", props.getName());
diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
index 2f29a13..c2a74a7 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -21,13 +21,19 @@ import static org.junit.Assert.assertNull;
 
 import java.io.IOException;
 import java.io.InputStream;
+import java.util.Arrays;
 
 import org.apache.tika.TikaTest;
+import org.apache.tika.metadata.Database;
+import org.apache.tika.metadata.HttpHeaders;
 import org.apache.tika.metadata.Metadata;
+import org.apache.tika.metadata.OfficeOpenXMLExtended;
+import org.apache.tika.metadata.PagedText;
 import org.apache.tika.metadata.TikaCoreProperties;
 import org.apache.tika.parser.AutoDetectParser;
 import org.apache.tika.parser.ParseContext;
 import org.apache.tika.parser.Parser;
+import org.apache.tika.parser.executable.MachineMetadata;
 import org.apache.tika.sax.BodyContentHandler;
 import org.apache.tika.sax.WriteOutContentHandler;
 import org.junit.Test;
@@ -54,7 +60,16 @@ public class SAS7BDATParserTest extends TikaTest {
         assertEquals("2017-01-30T07:31:47Z", metadata.get(TikaCoreProperties.CREATED));
         assertEquals("2017-01-30T07:31:47Z", metadata.get(TikaCoreProperties.MODIFIED));
         
-        // TODO Test the rest of the metadata
+        assertEquals("1", metadata.get(PagedText.N_PAGES));
+        assertEquals("2", metadata.get(Database.COLUMN_COUNT));
+        assertEquals("11", metadata.get(Database.ROW_COUNT));
+        assertEquals("windows-1252", metadata.get(HttpHeaders.CONTENT_ENCODING));
+        assertEquals("W32_7PRO", metadata.get(OfficeOpenXMLExtended.APPLICATION));
+        assertEquals("9.0301M2", metadata.get(OfficeOpenXMLExtended.APP_VERSION));
+        assertEquals("32", metadata.get(MachineMetadata.ARCHITECTURE_BITS));
+        assertEquals("Little", metadata.get(MachineMetadata.ENDIAN));
+        assertEquals(Arrays.asList("recnum","label"),
+                     Arrays.asList(metadata.getValues(Database.COLUMN_NAME)));
         
         String content = handler.toString();
         assertContains("TESTING", content);
@@ -82,7 +97,16 @@ public class SAS7BDATParserTest extends TikaTest {
         assertEquals("2015-03-06T19:10:19Z", metadata.get(TikaCoreProperties.CREATED));
         assertEquals("2015-03-06T19:10:19Z", metadata.get(TikaCoreProperties.MODIFIED));
         
-        // TODO Test the rest of the metadata
+        assertEquals("1", metadata.get(PagedText.N_PAGES));
+        assertEquals("5", metadata.get(Database.COLUMN_COUNT));
+        assertEquals("31", metadata.get(Database.ROW_COUNT));
+        assertEquals("windows-1252", metadata.get(HttpHeaders.CONTENT_ENCODING));
+        assertEquals("XP_PRO", metadata.get(OfficeOpenXMLExtended.APPLICATION));
+        assertEquals("9.0101M3", metadata.get(OfficeOpenXMLExtended.APP_VERSION));
+        assertEquals("32", metadata.get(MachineMetadata.ARCHITECTURE_BITS));
+        assertEquals("Little", metadata.get(MachineMetadata.ENDIAN));
+        assertEquals(Arrays.asList("A","B","C","D","E"),
+                     Arrays.asList(metadata.getValues(Database.COLUMN_NAME)));
         
         String content = handler.toString();
         assertContains("SHEET1", content);


[tika] 24/30: Updated Columnar output from SAS with better formats

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tallison pushed a commit to branch branch_1x
in repository https://gitbox.apache.org/repos/asf/tika.git

commit b33636035185476b818de06a7feb0f2e684b3cc9
Author: Nick Burch <ni...@gagravarr.org>
AuthorDate: Fri May 18 11:43:47 2018 +0100

    Updated Columnar output from SAS with better formats
---
 .../apache/tika/parser/sas/SAS7BDATParserTest.java |   8 ++++----
 .../test-documents/test-columnar.sas7bdat          | Bin 17408 -> 131072 bytes
 .../resources/test-documents/test-columnar.xls     | Bin 6656 -> 66048 bytes
 .../resources/test-documents/test-columnar.xlsx    | Bin 4941 -> 6603 bytes
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
index 610ffc3..00a2aaa 100644
--- a/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
+++ b/tika-parsers/src/test/java/org/apache/tika/parser/sas/SAS7BDATParserTest.java
@@ -89,15 +89,15 @@ public class SAS7BDATParserTest extends TikaTest {
         assertEquals("application/x-sas-data", metadata.get(Metadata.CONTENT_TYPE));
         assertEquals("TESTING", metadata.get(TikaCoreProperties.TITLE));
 
-        assertEquals("2018-05-09T17:59:33Z", metadata.get(TikaCoreProperties.CREATED));
-        assertEquals("2018-05-09T17:59:33Z", metadata.get(TikaCoreProperties.MODIFIED));
+        assertEquals("2018-05-18T11:38:30Z", metadata.get(TikaCoreProperties.CREATED));
+        assertEquals("2018-05-18T11:38:30Z", metadata.get(TikaCoreProperties.MODIFIED));
         
         assertEquals("1", metadata.get(PagedText.N_PAGES));
         assertEquals("8", metadata.get(Database.COLUMN_COUNT));
         assertEquals("11", metadata.get(Database.ROW_COUNT));
         assertEquals("windows-1252", metadata.get(HttpHeaders.CONTENT_ENCODING));
-        assertEquals("W32_7PRO", metadata.get(OfficeOpenXMLExtended.APPLICATION));
-        assertEquals("9.0301M2", metadata.get(OfficeOpenXMLExtended.APP_VERSION));
+        assertEquals("X64_7PRO", metadata.get(OfficeOpenXMLExtended.APPLICATION));
+        assertEquals("9.0401M5", metadata.get(OfficeOpenXMLExtended.APP_VERSION));
         assertEquals("32", metadata.get(MachineMetadata.ARCHITECTURE_BITS));
         assertEquals("Little", metadata.get(MachineMetadata.ENDIAN));
         assertEquals(Arrays.asList("Record Number","Square of the Record Number",
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat
index 33ee412..f6cab63 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat and b/tika-parsers/src/test/resources/test-documents/test-columnar.sas7bdat differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xls b/tika-parsers/src/test/resources/test-documents/test-columnar.xls
index 1d7b2cf..cc45372 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.xls and b/tika-parsers/src/test/resources/test-documents/test-columnar.xls differ
diff --git a/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx
index 58ffd47..22483f1 100644
Binary files a/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx and b/tika-parsers/src/test/resources/test-documents/test-columnar.xlsx differ