Posted to dev@s2graph.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/04/02 03:34:00 UTC

[jira] [Commented] (S2GRAPH-183) Provide batch job to dump data stored in HBase into file.

    [ https://issues.apache.org/jira/browse/S2GRAPH-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16421908#comment-16421908 ] 

ASF GitHub Bot commented on S2GRAPH-183:
----------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/incubator-s2graph/pull/137


> Provide batch job to dump data stored in HBase into file.
> ---------------------------------------------------------
>
>                 Key: S2GRAPH-183
>                 URL: https://issues.apache.org/jira/browse/S2GRAPH-183
>             Project: S2Graph
>          Issue Type: New Feature
>          Components: s2jobs
>            Reporter: DOYUNG YOON
>            Assignee: DOYUNG YOON
>            Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Since S2Graph provides a batch job that reads a file and bulk-loads it into HBase, it would also be helpful to provide a batch job that dumps data stored in HBase into a file.
> Once we have both the dump (deserializer) and the loader (serializer), tasks such as adding an index to existing data or changing the HBase schema version can be carried out through this offline process.
> It would also make it possible to migrate data from an external HBase cluster into the S2Graph HBase cluster, which can be useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)