Posted to dev@tika.apache.org by "Gerard Bouchar (JIRA)" <ji...@apache.org> on 2018/06/15 15:56:00 UTC
[jira] [Updated] (TIKA-2671) HtmlEncodingDetector doesn't take provided metadata into account
[ https://issues.apache.org/jira/browse/TIKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Gerard Bouchar updated TIKA-2671:
---------------------------------
Description:
org.apache.tika.parser.html.HtmlEncodingDetector ignores the document's metadata. As a result, when it is used to detect the charset of an HTML document that arrived with a conflicting charset declared at the transport layer (for example, in an HTTP Content-Type header), the encoding declared inside the file is used instead.
This behavior does not conform to what is [specified by the W3C for determining the character encoding of HTML pages|https://html.spec.whatwg.org/multipage/parsing.html#determining-the-character-encoding]. This causes bugs similar to NUTCH-2599.
If HtmlEncodingDetector is not meant to take meta-information about the document into account, then perhaps another detector should be provided: a CompositeDetector combining, in this order:
* a new, simple MetadataEncodingDetector that would simply return the encoding declared in the metadata
* the existing HtmlEncodingDetector
* a generic fallback detector, such as UniversalEncodingDetector
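The proposed chain could be sketched roughly as follows. Note that MetadataEncodingDetector does not exist in Tika; its name, its behavior, and the use of Metadata.CONTENT_ENCODING as the transport-layer charset hint are assumptions made here for illustration, built on Tika's existing EncodingDetector interface and CompositeEncodingDetector.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.Charset;
import java.util.Arrays;

import org.apache.tika.detect.CompositeEncodingDetector;
import org.apache.tika.detect.EncodingDetector;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.html.HtmlEncodingDetector;
import org.apache.tika.parser.txt.UniversalEncodingDetector;

// Hypothetical detector: trust a charset already present in the metadata
// (e.g. taken from an HTTP Content-Type header by the caller) before
// looking at the document content at all.
class MetadataEncodingDetector implements EncodingDetector {
    @Override
    public Charset detect(InputStream input, Metadata metadata) throws IOException {
        String name = metadata.get(Metadata.CONTENT_ENCODING);
        if (name == null) {
            return null; // nothing declared: defer to the next detector
        }
        try {
            return Charset.forName(name.trim());
        } catch (IllegalArgumentException e) {
            return null; // unknown or malformed label: defer to the next detector
        }
    }
}

public class MetadataFirstDetectorSketch {
    public static void main(String[] args) throws IOException {
        // Metadata first, then the HTML <meta> prescan, then a generic
        // statistical fallback, as proposed above.
        EncodingDetector detector = new CompositeEncodingDetector(Arrays.asList(
                new MetadataEncodingDetector(),
                new HtmlEncodingDetector(),
                new UniversalEncodingDetector()));

        Metadata metadata = new Metadata();
        metadata.set(Metadata.CONTENT_ENCODING, "ISO-8859-1");
        System.out.println(new MetadataEncodingDetector().detect(null, metadata));
    }
}
```

With this ordering, a charset supplied by the caller wins over a conflicting &lt;meta charset&gt; inside the file, which matches the priority the W3C algorithm gives to transport-layer information.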
was:
org.apache.tika.parser.html.HtmlEncodingDetector ignores the document's metadata. So when using it to detect the charset of an HTML document that came with a conflicting charset specified at the transport layer level, the encoding specified inside the file is used instead.
This behavior does not conform to what is [specified by the W3C for determining the character encoding of HTML pages|https://html.spec.whatwg.org/multipage/parsing.html#determining-the-character-encoding]. This causes bugs like NUTCH-2599.
If HtmlEncodingDetector is not meant to take into account meta-information about the document, then maybe another detector should be provided, that would be a CompositeDetector including, in that order:
* a new, simple, MetadataEncodingDetector, that would simply return the encoding
* the existing HtmlEncodingDetector
* a generic detector, like UniversalEncodingDetector
> HtmlEncodingDetector doesn't take provided metadata into account
> ----------------------------------------------------------------
>
> Key: TIKA-2671
> URL: https://issues.apache.org/jira/browse/TIKA-2671
> Project: Tika
> Issue Type: Bug
> Reporter: Gerard Bouchar
> Priority: Major
>
> org.apache.tika.parser.html.HtmlEncodingDetector ignores the document's metadata. As a result, when it is used to detect the charset of an HTML document that arrived with a conflicting charset declared at the transport layer (for example, in an HTTP Content-Type header), the encoding declared inside the file is used instead.
> This behavior does not conform to what is [specified by the W3C for determining the character encoding of HTML pages|https://html.spec.whatwg.org/multipage/parsing.html#determining-the-character-encoding]. This causes bugs similar to NUTCH-2599.
>
> If HtmlEncodingDetector is not meant to take meta-information about the document into account, then perhaps another detector should be provided: a CompositeDetector combining, in this order:
> * a new, simple MetadataEncodingDetector that would simply return the encoding declared in the metadata
> * the existing HtmlEncodingDetector
> * a generic fallback detector, such as UniversalEncodingDetector
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)