Posted to dev@submarine.apache.org by "Yu-Tang Lin (Jira)" <ji...@apache.org> on 2022/05/27 02:38:00 UTC

[jira] [Created] (SUBMARINE-1278) Fetching data to k8s cluster before experiment's execution

Yu-Tang Lin created SUBMARINE-1278:
--------------------------------------

             Summary: Fetching data to k8s cluster before experiment's execution
                 Key: SUBMARINE-1278
                 URL: https://issues.apache.org/jira/browse/SUBMARINE-1278
             Project: Apache Submarine
          Issue Type: New Feature
            Reporter: Yu-Tang Lin


Per the discussion with Didi's users,

they think it might be useful if Submarine fetched the data from an external file system (e.g. HDFS, S3, etc.) into the cluster first, so that the following executions could read the data from the local environment.

After a couple of discussions, we have two proposals for the above scenario.
 # Once the external data source has been set, Submarine launches an additional init container for the experiment; in this container, we leverage fsspec to fetch the data and persist it in Apache Arrow format, so the workers in the execution can read the data from Arrow directly; in the termination phase, Submarine launches another container to clean up the Arrow data.
 # The flow is quite similar to option 1, except that we replace fsspec with Alluxio. However, since Submarine is not a solution focused on hybrid cloud environments, I think the Alluxio tech stack is too heavy for us, so I prefer option 1.

Regarding the external file system integration, we'll try to integrate HDFS (with Kerberos) as our first step.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@submarine.apache.org
For additional commands, e-mail: dev-help@submarine.apache.org