Posted to issues@beam.apache.org by "Beam JIRA Bot (Jira)" <ji...@apache.org> on 2020/09/02 17:07:01 UTC

[jira] [Commented] (BEAM-10111) Create methods in fileio to read from / write to archive files

    [ https://issues.apache.org/jira/browse/BEAM-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189383#comment-17189383 ] 

Beam JIRA Bot commented on BEAM-10111:
--------------------------------------

This issue is assigned but has not received an update in 30 days so it has been labeled "stale-assigned". If you are still working on the issue, please give an update and remove the label. If you are no longer working on the issue, please unassign so someone else may work on it. In 7 days the issue will be automatically unassigned.

> Create methods in fileio to read from / write to archive files
> --------------------------------------------------------------
>
>                 Key: BEAM-10111
>                 URL: https://issues.apache.org/jira/browse/BEAM-10111
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-py-files
>            Reporter: Ashwin Ramaswami
>            Assignee: Ashwin Ramaswami
>            Priority: P2
>              Labels: stale-assigned
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Discussion here: https://lists.apache.org/thread.html/r784701bda9edf9a52d5ee593f44a8870aab96b6df1dc8eedd2c8a249%40%3Cdev.beam.apache.org%3E
> It would be good to be able to read from / write to archive files (.zip, .tar) using fileio. The difference between this proposal and what we already have with CompressionTypes is that this would allow converting one file -> multiple files and vice versa.
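> For contrast, a minimal sketch of what CompressionTypes already supports today, using the existing `ReadFromText` transform (the path is illustrative). Each compressed file decompresses to exactly one logical file, so there is no way to expand a single archive into many member files:
> {code:python}
> import apache_beam as beam
> from apache_beam.io.filesystem import CompressionTypes
>
> with beam.Pipeline() as p:
>     lines = (
>         p
>         # One .gz file yields one decompressed stream of lines.
>         | beam.io.ReadFromText(
>             'hdfs://path/to/*.gz',
>             compression_type=CompressionTypes.GZIP)
>     )
> {code}
> Here's how the proposed transforms might look: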
> *Reading all contents from archive files:*
> {code:python}
>     files = (
>         p
>         | fileio.MatchFiles('hdfs://path/to/*.zip')
>         | fileio.ExtractMatches()
>         | fileio.MatchAll()
>         | fileio.ReadMatches()
>         | beam.Map(lambda x: (x.metadata.path, x.metadata._parent_archive_paths, x.read_utf8()))
>     )
> {code}
> *Nested archive example:* (look at all files inside .tar files that are themselves inside .zip files)
> {code:python}
>     files = (
>         p
>         | fileio.MatchFiles('hdfs://path/to/*.zip')
>         | fileio.ExtractMatches()
>         | fileio.MatchAll(archive_path='*.tar')
>         | fileio.ExtractMatches()
>         | fileio.MatchAll() # gets all entries
>         | fileio.ReadMatches()
>         | beam.Map(lambda x: (x.metadata.path, x.read_utf8()))
>     )
> {code}
> Note that this would involve modifying MatchAll() to take an argument that filters the files in the PCollection produced by the earlier stage of the pipeline.
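> A minimal sketch of how such a filter could work, assuming the matches flowing through the pipeline carry a path attribute (the `FilterMatches` name here is hypothetical, not part of the proposal):
> {code:python}
> import fnmatch
>
> import apache_beam as beam
>
> class FilterMatches(beam.PTransform):
>     """Keeps only the matches whose path fits the given glob pattern."""
>     def __init__(self, pattern):
>         self._pattern = pattern
>
>     def expand(self, matches):
>         return matches | beam.Filter(
>             lambda m: fnmatch.fnmatch(m.path, self._pattern))
> {code}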
> *Reading from archive files and explicitly specifying the archive type (when it can't be inferred from the file extension):*
> {code:python}
>     files = (
>         p
>         | fileio.MatchFiles('hdfs://path/to/archive')
>         | fileio.ExtractMatches(archivesystem=ArchiveSystem.TAR)
>         | fileio.MatchAll(archive_path='*.txt')
>         | fileio.ReadMatches()
>         | beam.Map(lambda x: (x.metadata.path, x.read_utf8()))
>     )
> {code}
> `ArchiveSystem` would be a generic class, just like `FileSystem`, which would allow for different implementations of methods such as `list()` and `extract()`. It would be implemented for .zip, .tar, etc.
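> A minimal sketch of what that base class might look like, with a zip implementation on top of the standard library (the method signatures are assumptions, not settled API):
> {code:python}
> import abc
> import io
> import zipfile
>
> class ArchiveSystem(abc.ABC):
>     """Hypothetical base class, analogous to FileSystem."""
>
>     @abc.abstractmethod
>     def list(self, archive_path):
>         """Yields the member paths contained in the archive."""
>
>     @abc.abstractmethod
>     def extract(self, archive_path, member):
>         """Returns a file-like object for one member of the archive."""
>
> class ZipArchiveSystem(ArchiveSystem):
>     def list(self, archive_path):
>         with zipfile.ZipFile(archive_path) as z:
>             yield from z.namelist()
>
>     def extract(self, archive_path, member):
>         with zipfile.ZipFile(archive_path) as z:
>             return io.BytesIO(z.read(member))
> {code}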
> *Writing multiple files to an archive file:*
> {code:python}
>     files = (
>         p
>         | fileio.MatchFiles('hdfs://path/to/files/*.txt')
>         | fileio.CompressMatches(archivesystem=ArchiveSystem.ZIP)
>         | fileio.WriteToArchive("output.zip")
>     )
> {code}
> *Writing to a .tar.gz file:*
> {code:python}
>     files = (
>         p
>         | fileio.MatchFiles('hdfs://path/to/files/*.txt')
>         | fileio.CompressMatches(archivesystem=ArchiveSystem.TAR)
>         | fileio.WriteToArchive("output.tar.gz")
>     )
> {code}
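> For reference, a minimal sketch of the many-files-to-one-archive step these write transforms would have to perform, here with the standard library's tarfile (the member names and contents are illustrative):
> {code:python}
> import io
> import tarfile
>
> # (filename, bytes) pairs such as a CompressMatches stage might emit.
> members = [('a.txt', b'hello'), ('b.txt', b'world')]
>
> # 'w:gz' writes a gzip-compressed tarball in one pass.
> with tarfile.open('output.tar.gz', 'w:gz') as tar:
>     for name, data in members:
>         info = tarfile.TarInfo(name=name)
>         info.size = len(data)
>         tar.addfile(info, io.BytesIO(data))
> {code}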


