Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2021/08/13 08:50:51 UTC

[GitHub] [arrow-datafusion] houqp commented on a change in pull request #811: Add support for reading remote storage systems

houqp commented on a change in pull request #811:
URL: https://github.com/apache/arrow-datafusion/pull/811#discussion_r688299568



##########
File path: ballista/rust/core/src/utils.rs
##########
@@ -252,6 +252,11 @@ pub fn create_datafusion_context(
     ExecutionContext::with_config(config)
 }
 
+/// Create a DataFusion context whose concurrency is compatible with Ballista
+pub fn create_datafusion_context_concurrency(concurrency: usize) -> ExecutionContext {

Review comment:
       Nitpick: I think `ExecutionContext::with_concurrency(24)` is as readable as `create_datafusion_context_concurrency(24)` but incurs less indirection/abstraction, so IMHO this helper function doesn't provide much value.

##########
File path: ballista/rust/scheduler/src/lib.rs
##########
@@ -285,24 +286,19 @@ impl SchedulerGrpc for SchedulerServer {
 
         match file_type {
             FileType::Parquet => {
-                let parquet_exec =
-                    ParquetExec::try_from_path(&path, None, None, 1024, 1, None)
-                        .map_err(|e| {
-                            let msg = format!("Error opening parquet files: {}", e);
-                            error!("{}", msg);
-                            tonic::Status::internal(msg)
-                        })?;
+                let ctx = create_datafusion_context_concurrency(1);
+                let parquet_desc = ParquetRootDesc::new(&path, ctx).map_err(|e| {
+                    let msg = format!("Error opening parquet files: {}", e);
+                    error!("{}", msg);
+                    tonic::Status::internal(msg)
+                })?;
 
                 //TODO include statistics and any other info needed to reconstruct ParquetExec
                 Ok(Response::new(GetFileMetadataResult {
-                    schema: Some(parquet_exec.schema().as_ref().into()),
-                    partitions: parquet_exec
-                        .partitions()
-                        .iter()
-                        .map(|part| FilePartitionMetadata {
-                            filename: part.filenames().to_vec(),
-                        })
-                        .collect(),
+                    schema: Some(parquet_desc.schema().as_ref().into()),
+                    partitions: vec![FilePartitionMetadata {
+                        filename: vec![path],

Review comment:
       We are always returning a single path for the partitions field? This changes the behavior, doesn't it?

##########
File path: datafusion/src/physical_plan/parquet.rs
##########
@@ -871,8 +552,10 @@ fn build_row_group_predicate(
     }
 }
 
+#[allow(clippy::too_many_arguments)]
 fn read_files(

Review comment:
       perhaps this function should be renamed to `read_partition`.

##########
File path: datafusion/src/datasource/object_store.rs
##########
@@ -0,0 +1,108 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+//! Object Store abstracts access to an underlying file/object storage.
+
+use crate::datasource::local::LocalFileSystem;
+use crate::error::Result;
+use std::any::Any;
+use std::collections::HashMap;
+use std::fmt::Debug;
+use std::io::Read;
+use std::sync::{Arc, RwLock};
+
+/// Object reader for one file in an object store
+pub trait ObjectReader {
+    /// Get a reader for the byte range [start, start + length) in the file
+    fn get_reader(&self, start: u64, length: usize) -> Box<dyn Read>;
+
+    /// Get the length of the file
+    fn length(&self) -> u64;
+}
+
+/// An ObjectStore abstracts access to an underlying file/object storage.
+/// It maps strings (e.g. URLs, filesystem paths, etc.) to sources of bytes
+pub trait ObjectStore: Sync + Send + Debug {
+    /// Returns the object store as [`Any`](std::any::Any)
+    /// so that it can be downcast to a specific implementation.
+    fn as_any(&self) -> &dyn Any;
+
+    /// Returns all the files with filename extension `ext` in path `prefix`
+    fn list_all_files(&self, prefix: &str, ext: &str) -> Result<Vec<String>>;

Review comment:
       nitpick: IMHO, `list_files` or simply `list` would be a simpler method name here.
   
   One thing I would like to point out is that for different object stores, object listing will actually give us more information than just the file path; for example, last-updated time and file size are often returned as part of the API/sys call. This extra metadata might be useful for other purposes. I don't think we need to take this into account in this PR, just something to keep in mind, since we might need to change the return type here in the future.
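To make that last point concrete, here is a minimal stdlib-only sketch of what a richer listing result could look like. The names `FileMeta` and `list` are hypothetical illustrations, not part of this PR; the local filesystem stands in for an object store:

```rust
use std::fs;
use std::io;
use std::time::SystemTime;

/// Hypothetical richer listing result; size and last-modified time
/// usually come back "for free" from object-store list calls.
#[derive(Debug)]
pub struct FileMeta {
    pub path: String,
    pub size: u64,
    pub last_modified: Option<SystemTime>,
}

/// Local-filesystem sketch of a `list` that returns metadata, not just paths.
pub fn list(prefix: &str, ext: &str) -> io::Result<Vec<FileMeta>> {
    let mut out = Vec::new();
    for entry in fs::read_dir(prefix)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        let path = entry.path();
        // Keep only regular files with the requested extension.
        if meta.is_file() && path.extension().map_or(false, |e| e == ext) {
            out.push(FileMeta {
                path: path.to_string_lossy().into_owned(),
                size: meta.len(),
                last_modified: meta.modified().ok(),
            });
        }
    }
    Ok(out)
}

fn main() -> io::Result<()> {
    for f in list(".", "rs")? {
        println!("{} ({} bytes)", f.path, f.size);
    }
    Ok(())
}
```

S3-style list APIs typically return size and last-modified alongside each key, so surfacing them in the return type would cost no extra round trips.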

##########
File path: datafusion/src/datasource/local.rs
##########
@@ -0,0 +1,100 @@
+// Licensed to the Apache Software Foundation (ASF) under one

Review comment:
       nitpick, code-organization-wise: I recommend creating an `object_store` module, moving the existing object_store.rs code into `object_store/mod.rs`, and then moving the `local` module into the `object_store` module as a submodule.
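The suggested layout, sketched with inline modules standing in for the files `object_store/mod.rs` and `object_store/local.rs` (an illustration of the structure only, not the PR's code):

```rust
// Inline modules standing in for the suggested file layout:
//   datafusion/src/datasource/object_store/mod.rs
//   datafusion/src/datasource/object_store/local.rs
mod object_store {
    /// Would live in object_store/mod.rs
    pub trait ObjectStore: Send + Sync + std::fmt::Debug {}

    /// Would live in object_store/local.rs as a submodule.
    pub mod local {
        #[derive(Debug)]
        pub struct LocalFileSystem;
        impl super::ObjectStore for LocalFileSystem {}
    }

    // Re-export so callers keep a flat path to the default store.
    pub use local::LocalFileSystem;
}

fn main() {
    let store: &dyn object_store::ObjectStore = &object_store::LocalFileSystem;
    println!("{:?}", store);
}
```

The re-export keeps call sites (`object_store::LocalFileSystem`) unchanged even though the type moves into a submodule.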

##########
File path: datafusion/src/logical_plan/builder.rs
##########
@@ -137,20 +138,20 @@ impl LogicalPlanBuilder {
     pub fn scan_parquet(
         path: impl Into<String>,
         projection: Option<Vec<usize>>,
-        max_concurrency: usize,
+        context: ExecutionContext,

Review comment:
       I don't have a better solution for how to handle this, but it strikes me as a bit odd to couple the execution context with the logical plan builder here. I will think more about this later this week.

##########
File path: datafusion/src/datasource/object_store.rs
##########
@@ -0,0 +1,108 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+//! Object Store abstracts access to an underlying file/object storage.
+
+use crate::datasource::local::LocalFileSystem;
+use crate::error::Result;
+use std::any::Any;
+use std::collections::HashMap;
+use std::fmt::Debug;
+use std::io::Read;
+use std::sync::{Arc, RwLock};
+
+/// Object reader for one file in an object store
+pub trait ObjectReader {
+    /// Get a reader for the byte range [start, start + length) in the file
+    fn get_reader(&self, start: u64, length: usize) -> Box<dyn Read>;
+
+    /// Get the length of the file
+    fn length(&self) -> u64;
+}
+
+/// An ObjectStore abstracts access to an underlying file/object storage.
+/// It maps strings (e.g. URLs, filesystem paths, etc.) to sources of bytes
+pub trait ObjectStore: Sync + Send + Debug {
+    /// Returns the object store as [`Any`](std::any::Any)
+    /// so that it can be downcast to a specific implementation.
+    fn as_any(&self) -> &dyn Any;
+
+    /// Returns all the files with filename extension `ext` in path `prefix`
+    fn list_all_files(&self, prefix: &str, ext: &str) -> Result<Vec<String>>;
+
+    /// Get object reader for one file
+    fn get_reader(&self, file_path: &str) -> Result<Arc<dyn ObjectReader>>;
+}
+
+static LOCAL_SCHEME: &str = "file";
+
+/// A Registry holds all the object stores at runtime with a scheme for each store.
+/// This allows the user to extend DataFusion with different storage systems such as S3 or HDFS
+/// and query data inside these systems.
+pub struct ObjectStoreRegistry {
+    /// A map from scheme to object store that serve list / read operations for the store
+    pub object_stores: RwLock<HashMap<String, Arc<dyn ObjectStore>>>,
+}
+
+impl ObjectStoreRegistry {
+    /// Create the registry that object stores can be registered into.
+    /// The [`LocalFileSystem`] store is registered by default to support reading from the local filesystem natively.
+    pub fn new() -> Self {
+        let mut map: HashMap<String, Arc<dyn ObjectStore>> = HashMap::new();
+        map.insert(LOCAL_SCHEME.to_string(), Arc::new(LocalFileSystem));
+
+        Self {
+            object_stores: RwLock::new(map),
+        }
+    }
+
+    /// Adds a new store to this registry.
+    /// If a store with the same scheme existed before, it is replaced in the registry and returned.
+    pub fn register_store(
+        &self,
+        scheme: String,
+        store: Arc<dyn ObjectStore>,
+    ) -> Option<Arc<dyn ObjectStore>> {
+        let mut stores = self.object_stores.write().unwrap();
+        stores.insert(scheme, store)
+    }
+
+    /// Get the store registered for scheme
+    pub fn get(&self, scheme: &str) -> Option<Arc<dyn ObjectStore>> {
+        let stores = self.object_stores.read().unwrap();
+        stores.get(scheme).cloned()
+    }
+
+    /// Get a suitable store for the path based on its scheme. For example:
+    /// a path with prefix file:/// or no prefix returns the default LocalFS store,
+    /// a path with prefix s3:/// returns the S3 store if it is registered,
+    /// and the LocalFS store is always returned when the path's scheme is not registered.
+    pub fn store_for_path(&self, path: &str) -> Arc<dyn ObjectStore> {
+        if let Some((scheme, _)) = path.split_once(':') {
+            let stores = self.object_stores.read().unwrap();
+            if let Some(store) = stores.get(&*scheme.to_lowercase()) {
+                return store.clone();
+            }

Review comment:
       we should return an error here if no matching store is found for the particular scheme, right?
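A stdlib-only sketch of the suggested behavior, using hypothetical minimal stand-ins for the PR's types (the real `ObjectStoreRegistry` holds richer trait objects); the point is only that an unknown scheme yields an `Err` instead of silently falling back to the local store:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Hypothetical minimal stand-ins for the traits in this PR.
trait ObjectStore: Send + Sync {}
struct LocalFileSystem;
impl ObjectStore for LocalFileSystem {}

struct ObjectStoreRegistry {
    stores: RwLock<HashMap<String, Arc<dyn ObjectStore>>>,
}

impl ObjectStoreRegistry {
    fn new() -> Self {
        let mut map: HashMap<String, Arc<dyn ObjectStore>> = HashMap::new();
        map.insert("file".to_string(), Arc::new(LocalFileSystem));
        Self { stores: RwLock::new(map) }
    }

    /// Resolve a store by the path's scheme, erroring on an unrecognized
    /// scheme instead of silently falling back to the local store.
    fn store_for_path(&self, path: &str) -> Result<Arc<dyn ObjectStore>, String> {
        let scheme = match path.split_once("://") {
            Some((scheme, _)) => scheme.to_lowercase(),
            None => "file".to_string(), // bare paths default to the local fs
        };
        self.stores
            .read()
            .unwrap()
            .get(&scheme)
            .cloned()
            .ok_or_else(|| format!("No object store registered for scheme '{}'", scheme))
    }
}

fn main() {
    let registry = ObjectStoreRegistry::new();
    assert!(registry.store_for_path("/tmp/data.parquet").is_ok());
    assert!(registry.store_for_path("file:///tmp/data.parquet").is_ok());
    // "s3" was never registered, so this surfaces an error.
    assert!(registry.store_for_path("s3://bucket/key").is_err());
    println!("ok");
}
```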

##########
File path: datafusion/src/datasource/object_store.rs
##########
@@ -0,0 +1,108 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+//! Object Store abstracts access to an underlying file/object storage.
+
+use crate::datasource::local::LocalFileSystem;
+use crate::error::Result;
+use std::any::Any;
+use std::collections::HashMap;
+use std::fmt::Debug;
+use std::io::Read;
+use std::sync::{Arc, RwLock};
+
+/// Object reader for one file in an object store
+pub trait ObjectReader {
+    /// Get a reader for the byte range [start, start + length) in the file
+    fn get_reader(&self, start: u64, length: usize) -> Box<dyn Read>;
+
+    /// Get the length of the file
+    fn length(&self) -> u64;
+}
+
+/// An ObjectStore abstracts access to an underlying file/object storage.
+/// It maps strings (e.g. URLs, filesystem paths, etc.) to sources of bytes
+pub trait ObjectStore: Sync + Send + Debug {
+    /// Returns the object store as [`Any`](std::any::Any)
+    /// so that it can be downcast to a specific implementation.
+    fn as_any(&self) -> &dyn Any;
+
+    /// Returns all the files with filename extension `ext` in path `prefix`
+    fn list_all_files(&self, prefix: &str, ext: &str) -> Result<Vec<String>>;
+
+    /// Get object reader for one file
+    fn get_reader(&self, file_path: &str) -> Result<Arc<dyn ObjectReader>>;
+}
+
+static LOCAL_SCHEME: &str = "file";
+
+/// A Registry holds all the object stores at runtime with a scheme for each store.
+/// This allows the user to extend DataFusion with different storage systems such as S3 or HDFS
+/// and query data inside these systems.
+pub struct ObjectStoreRegistry {
+    /// A map from scheme to object store that serve list / read operations for the store
+    pub object_stores: RwLock<HashMap<String, Arc<dyn ObjectStore>>>,
+}
+
+impl ObjectStoreRegistry {
+    /// Create the registry that object stores can be registered into.
+    /// The [`LocalFileSystem`] store is registered by default to support reading from the local filesystem natively.
+    pub fn new() -> Self {
+        let mut map: HashMap<String, Arc<dyn ObjectStore>> = HashMap::new();
+        map.insert(LOCAL_SCHEME.to_string(), Arc::new(LocalFileSystem));
+
+        Self {
+            object_stores: RwLock::new(map),
+        }
+    }
+
+    /// Adds a new store to this registry.
+    /// If a store with the same scheme existed before, it is replaced in the registry and returned.
+    pub fn register_store(
+        &self,
+        scheme: String,
+        store: Arc<dyn ObjectStore>,
+    ) -> Option<Arc<dyn ObjectStore>> {
+        let mut stores = self.object_stores.write().unwrap();
+        stores.insert(scheme, store)
+    }
+
+    /// Get the store registered for scheme
+    pub fn get(&self, scheme: &str) -> Option<Arc<dyn ObjectStore>> {
+        let stores = self.object_stores.read().unwrap();
+        stores.get(scheme).cloned()
+    }
+
+    /// Get a suitable store for the path based on its scheme. For example:
+    /// a path with prefix file:/// or no prefix returns the default LocalFS store,
+    /// a path with prefix s3:/// returns the S3 store if it is registered,
+    /// and the LocalFS store is always returned when the path's scheme is not registered.
+    pub fn store_for_path(&self, path: &str) -> Arc<dyn ObjectStore> {

Review comment:
       maybe `get_by_path` is a better name here?

##########
File path: datafusion/src/datasource/parquet.rs
##########
@@ -120,14 +132,303 @@ impl TableProvider for ParquetTable {
     }
 
     fn statistics(&self) -> Statistics {
-        self.statistics.clone()
+        self.desc.statistics()
     }
 
     fn has_exact_statistics(&self) -> bool {
         true
     }
 }
 
+#[derive(Debug)]
+/// Descriptor for a parquet root path
+pub struct ParquetRootDesc {
+    /// object store for reading files inside the root path
+    pub object_store: Arc<dyn ObjectStore>,
+    /// metadata for files inside the root path
+    pub descriptor: SourceRootDescriptor,
+}
+
+impl ParquetRootDesc {
+    /// Construct a new parquet descriptor for a root path
+    pub fn new(root_path: &str, context: ExecutionContext) -> Result<Self> {
+        let object_store = context
+            .state
+            .lock()
+            .unwrap()
+            .object_store_registry
+            .store_for_path(root_path);
+        let root_desc = Self::get_source_desc(root_path, object_store.clone(), "parquet");

Review comment:
       I agree with @rdettai; I think we can also address this in a quick follow-up PR, since this is also the old behavior.



