Posted to common-dev@hadoop.apache.org by "Andrew Wang (JIRA)" <ji...@apache.org> on 2016/10/05 18:37:20 UTC
[jira] [Resolved] (HADOOP-12010) Interoperability between Java RS erasure coder and native RS erasure coder
[ https://issues.apache.org/jira/browse/HADOOP-12010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andrew Wang resolved HADOOP-12010.
----------------------------------
Resolution: Not A Problem
Thanks Kai, let's close it out then. I wasn't sure what "Java coder" here meant, but as long as the native coder and new Java coder are compatible, then we're in a good spot.
> Interoperability between Java RS erasure coder and native RS erasure coder
> --------------------------------------------------------------------------
>
> Key: HADOOP-12010
> URL: https://issues.apache.org/jira/browse/HADOOP-12010
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha1
> Reporter: Kai Zheng
> Assignee: Kai Zheng
> Labels: hdfs-ec-3.0-must-do
>
> It's natural and desirable for the two implemented RS raw erasure coders to interoperate, i.e., data encoded with one coder (like {{RSRawEncoder}}) can be decoded with the other (like {{NativeRSRawDecoder}}). Without this, the choice of raw erasure coder would not be transparent to HDFS data. Since this support isn't trivial (the two implementations use different encode/decode matrix generation algorithms), it's better to handle it as a separate task.
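The incompatibility the description alludes to can be illustrated outside Hadoop. The sketch below is plain Python, not Hadoop's coder API; the Vandermonde- and Cauchy-style matrices and the chosen evaluation points are illustrative assumptions. It encodes the same two data bytes under two different generator-matrix constructions and gets different parity, which is why an encoder and decoder must agree on the matrix generation algorithm:

```python
# Minimal GF(2^8) Reed-Solomon sketch (illustrative, NOT Hadoop code) showing
# why two RS coders must agree on how the generator matrix is built.

PRIM = 0x11D  # a common primitive polynomial for GF(2^8)

def gf_mul(a, b):
    """Multiply in GF(2^8), reducing modulo PRIM (Russian-peasant style)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return p

def gf_pow(a, n):
    r = 1
    while n:
        if n & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        n >>= 1
    return r

def gf_inv(a):
    # The multiplicative group of GF(2^8) has order 255, so a^254 = a^-1.
    return gf_pow(a, 254)

def encode(parity_matrix, data):
    """parity[i] = XOR-sum over j of parity_matrix[i][j] * data[j] in GF(2^8)."""
    parity = []
    for row in parity_matrix:
        acc = 0
        for coef, d in zip(row, data):
            acc ^= gf_mul(coef, d)
        parity.append(acc)
    return parity

# k = 2 data units, m = 2 parity units.
# Vandermonde-style parity rows: row i = [a_j ** i] for points a = (1, 2).
vandermonde = [[1, 1], [1, 2]]
# Cauchy-style parity rows: element (i, j) = 1 / (x_i + y_j), x = (2, 3), y = (0, 1).
cauchy = [[gf_inv(x ^ y) for y in (0, 1)] for x in (2, 3)]

data = [0x12, 0x34]
print(encode(vandermonde, data))  # [38, 122], i.e. [0x26, 0x7A]
print(encode(cauchy, data))       # different parity: not cross-decodable
```

The two parity vectors differ, so a decoder that inverts one matrix cannot reconstruct data written with the other; for HADOOP-12010 the fix was for the Java and native coders to derive the same matrix.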
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org