Posted to github@arrow.apache.org by "wjones127 (via GitHub)" <gi...@apache.org> on 2023/03/19 18:38:57 UTC

[GitHub] [arrow-adbc] wjones127 commented on a diff in pull request #478: feat(rust): define the rust adbc api

wjones127 commented on code in PR #478:
URL: https://github.com/apache/arrow-adbc/pull/478#discussion_r1141442917


##########
rust/src/lib.rs:
##########
@@ -0,0 +1,380 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+//! Arrow Database Connectivity (ADBC) allows efficient connections to databases
+//! for OLAP workloads:
+//!
+//!  * Uses the Arrow [C Data interface](https://arrow.apache.org/docs/format/CDataInterface.html)
+//!    and [C Stream Interface](https://arrow.apache.org/docs/format/CStreamInterface.html)
+//!    for efficient data interchange.
+//!  * Supports partitioned result sets for multi-threaded or distributed
+//!    applications.
+//!  * Supports [Substrait](https://substrait.io/) plans in addition to SQL queries.
+//!
+//! When implemented for remote databases, [Flight SQL](https://arrow.apache.org/docs/format/FlightSql.html)
+//! can be used as the communication protocol. This means data can remain in
+//! Arrow format through the whole connection, minimizing serialization and
+//! deserialization overhead.
+//!
+//! Read more about ADBC at <https://arrow.apache.org/adbc/>
+//!
+//! There are two flavors of ADBC that this library supports:
+//!
+//!  * **Native Rust implementations**. These implement the traits at the top level of
+//!    this crate, starting with [AdbcDatabase].
+//!  * **C API ADBC drivers**. These can be implemented in any language (that compiles
+//!    to native code) and can be used by any language.
+//!
+//! # Native Rust drivers
+//!
+//! Native Rust drivers will implement the traits:
+//!
+//!  * [AdbcDatabase]
+//!  * [AdbcConnection]
+//!  * [AdbcStatement]
+//!
+//! For drivers implemented in Rust, using these traits directly is more efficient
+//! and safer, since it avoids the overhead of going through the C FFI.
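+//!
+//! As a rough sketch of how these traits fit together (here `MyDriver` is a
+//! hypothetical type implementing [AdbcDatabase]; it is not part of this crate):
+//!
+//! ```ignore
+//! // Create the database handle and open a connection with no extra options.
+//! let database = MyDriver::default();
+//! let connection = database.connect(std::iter::empty::<(&str, &str)>())?;
+//!
+//! // Create a statement, give it a query, and prepare it for execution.
+//! let mut statement = connection.new_statement()?;
+//! statement.set_sql_query("SELECT 1")?;
+//! statement.prepare()?;
+//! ```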
+//!
+//! # Using C API drivers
+//!
+//! 🚧 TODO
+//!
+//! # Creating C API drivers
+//!
+//! 🚧 TODO
+//!
+pub mod error;
+pub mod info;
+pub mod objects;
+
+use arrow_array::{RecordBatch, RecordBatchReader};
+use arrow_schema::Schema;
+
+use crate::error::AdbcError;
+use crate::info::InfoData;
+
+/// Databases hold state shared by multiple connections. This typically means
+/// configuration and caches. For in-memory databases, it provides a place to
+/// hold ownership of the in-memory database.
+pub trait AdbcDatabase {
+    type ConnectionType: AdbcConnection;
+
+    /// Set an option on the database.
+    ///
+    /// Some databases may not allow setting options after the database has
+    /// been initialized.
+    fn set_option(&self, key: &str, value: &str) -> Result<(), AdbcError>;
+
+    /// Initialize a connection to the database.
+    ///
+    /// `options` provided will configure the connection, including the isolation
+    /// level. See standard options in [options].
+    fn connect<K, V>(
+        &self,
+        options: impl IntoIterator<Item = (K, V)>,
+    ) -> Result<Self::ConnectionType, AdbcError>
+    where
+        K: AsRef<str>,
+        V: AsRef<str>;
+}
+
+/// A connection is a single connection to a database.
+///
+/// It is never accessed concurrently from multiple threads.
+///
+/// # Autocommit
+///
+/// Connections should start in autocommit mode. They can be moved out of
+/// autocommit mode by setting `"adbc.connection.autocommit"` to `"false"`
+/// (using [AdbcConnection::set_option]). Turning off autocommit allows
+/// customizing the isolation level. Read more in [adbc.h](https://github.com/apache/arrow-adbc/blob/main/adbc.h).
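+///
+/// For example (a sketch; here `connection` stands for some value implementing
+/// this trait):
+///
+/// ```ignore
+/// // Leave autocommit mode so that changes are grouped into an explicit
+/// // transaction.
+/// connection.set_option("adbc.connection.autocommit", "false")?;
+///
+/// // ... execute statements on this connection ...
+///
+/// // Then either commit or roll back the pending transaction.
+/// connection.commit()?;
+/// ```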
+pub trait AdbcConnection {
+    type StatementType: AdbcStatement;
+    type ObjectCollectionType: objects::DatabaseCatalogCollection;
+
+    /// Set an option on the connection.
+    ///
+    /// Some connections may not allow setting options after the connection has
+    /// been initialized.
+    fn set_option(&self, key: &str, value: &str) -> Result<(), AdbcError>;
+
+    /// Create a new [AdbcStatement].
+    fn new_statement(&self) -> Result<Self::StatementType, AdbcError>;
+
+    /// Get metadata about the database/driver.
+    ///
+    /// If None is passed for `info_codes`, the method will return all info.
+    /// Otherwise, it will return the specified info, in any order. Unrecognized
+    /// codes are ignored rather than returning an error (see below).
+    ///
+    /// Each metadatum is identified by an integer code.  The recognized
+    /// codes are defined as constants.  Codes [0, 10_000) are reserved
+    /// for ADBC usage.  Drivers/vendors will ignore requests for
+    /// unrecognized codes (the row will be omitted from the result).
+    /// Known codes are provided in [info::codes].
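+    ///
+    /// For example (a sketch; here `connection` stands for some value
+    /// implementing this trait):
+    ///
+    /// ```ignore
+    /// // Request all available metadata.
+    /// let all_info = connection.get_info(None)?;
+    ///
+    /// // Request a single metadatum by code; known codes are in [info::codes]
+    /// // (code 0 is the vendor name in the ADBC specification).
+    /// let vendor_name = connection.get_info(Some(&[0]))?;
+    /// ```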
+    fn get_info(&self, info_codes: Option<&[u32]>) -> Result<Vec<(u32, InfoData)>, AdbcError>;
+
+    /// Get a hierarchical view of all catalogs, database schemas, tables, and columns.
+    ///
+    /// # Parameters
+    ///
+    /// * **depth**: The level of nesting to display. If [AdbcObjectDepth::All], display
+    ///   all levels. If [AdbcObjectDepth::Catalogs], display only catalogs (i.e.  `catalog_schemas`
+    ///   will be null). If [AdbcObjectDepth::DBSchemas], display only catalogs and schemas
+    ///   (i.e. `db_schema_tables` will be null), and so on.
+    /// * **catalog**: Only show tables in the given catalog. If None,
+    ///   do not filter by catalog. If an empty string, only show tables
+    ///   without a catalog.  May be a search pattern (see next section).
+    /// * **db_schema**: Only show tables in the given database schema. If
+    ///   None, do not filter by database schema. If an empty string, only show
+    ///   tables without a database schema. May be a search pattern (see next section).
+    /// * **table_name**: Only show tables with the given name. If None, do not
+    ///   filter by name. May be a search pattern (see next section).
+    /// * **table_type**: Only show tables matching one of the given table
+    ///   types. If None, show tables of any type. Valid table types should
+    ///   match those returned by [AdbcConnection::get_table_types].
+    /// * **column_name**: Only show columns with the given name. If
+    ///   None, do not filter by name.  May be a search pattern (see next section).
+    ///
+    /// # Search patterns
+    ///
+    /// Some parameters accept "search patterns", which are
+    /// strings that can contain the special character `"%"` to match zero
+    /// or more characters, or `"_"` to match exactly one character.  (See
+    /// the documentation of DatabaseMetaData in JDBC or "Pattern Value
+    /// Arguments" in the ODBC documentation.)
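+    ///
+    /// For example (a sketch; here `connection` stands for some value
+    /// implementing this trait and the table name is illustrative):
+    ///
+    /// ```ignore
+    /// // List every table whose name starts with "sales", in any catalog or
+    /// // database schema. "%" matches zero or more characters; "_" would
+    /// // match exactly one.
+    /// let objects = connection.get_objects(
+    ///     AdbcObjectDepth::Tables,
+    ///     None,            // catalog: no filter
+    ///     None,            // db_schema: no filter
+    ///     Some("sales%"),  // table_name: search pattern
+    ///     None,            // table_type: no filter
+    ///     None,            // column_name: no filter
+    /// )?;
+    /// ```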
+    fn get_objects(
+        &self,
+        depth: AdbcObjectDepth,
+        catalog: Option<&str>,
+        db_schema: Option<&str>,
+        table_name: Option<&str>,
+        table_type: Option<&[&str]>,
+        column_name: Option<&str>,
+    ) -> Result<Self::ObjectCollectionType, AdbcError>;
+
+    /// Get the Arrow schema of a table.
+    ///
+    /// `catalog` or `db_schema` may be `None` when not applicable.
+    fn get_table_schema(
+        &self,
+        catalog: Option<&str>,
+        db_schema: Option<&str>,
+        table_name: &str,
+    ) -> Result<Schema, AdbcError>;
+
+    /// Get a list of table types in the database.
+    ///
+    /// In the ADBC C API the result is an Arrow dataset with the following
+    /// schema; here it is returned as a list of strings:
+    ///
+    /// Field Name       | Field Type
+    /// -----------------|--------------
+    /// `table_type`     | `utf8 not null`
+    fn get_table_types(&self) -> Result<Vec<String>, AdbcError>;
+
+    /// Read part of a partitioned result set.
+    fn read_partition(&self, partition: &[u8]) -> Result<Box<dyn RecordBatchReader>, AdbcError>;
+
+    /// Commit any pending transactions. Only used if autocommit is disabled.
+    fn commit(&self) -> Result<(), AdbcError>;
+
+    /// Roll back any pending transactions. Only used if autocommit is disabled.
+    fn rollback(&self) -> Result<(), AdbcError>;
+}
+
+/// Depth parameter for the [AdbcConnection::get_objects] method.
+#[derive(Debug)]
+#[repr(i32)]
+pub enum AdbcObjectDepth {
+    /// Metadata on catalogs, schemas, tables, and columns.
+    All = 0,
+    /// Metadata on catalogs only.
+    Catalogs = 1,
+    /// Metadata on catalogs and schemas.
+    DBSchemas = 2,
+    /// Metadata on catalogs, schemas, and tables.
+    Tables = 3,
+}
+
+/// A container for all state needed to execute a database query, such as the
+/// query itself, parameters for prepared statements, driver parameters, etc.
+///
+/// Statements may represent queries or prepared statements.
+///
+/// Statements may be used multiple times and can be reconfigured
+/// (e.g. they can be reused to execute multiple different queries).
+/// However, executing a statement (and changing certain other state)
+/// will invalidate result sets obtained prior to that execution.
+///
+/// Multiple statements may be created from a single connection.
+/// However, the driver may block or error if they are used
+/// concurrently (whether from a single thread or multiple threads).
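+///
+/// For example (a sketch; here `connection` stands for some value implementing
+/// [AdbcConnection], and execution itself is elided):
+///
+/// ```ignore
+/// let mut statement = connection.new_statement()?;
+///
+/// // Configure and prepare a first query ...
+/// statement.set_sql_query("SELECT 1")?;
+/// statement.prepare()?;
+///
+/// // ... execute it, then reuse the same statement for a different query.
+/// statement.set_sql_query("SELECT 2")?;
+/// statement.prepare()?;
+/// ```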
+pub trait AdbcStatement {
+    /// Turn this statement into a prepared statement to be executed multiple times.
+    ///
+    /// This should return an error if called before [AdbcStatement::set_sql_query].
+    fn prepare(&mut self) -> Result<(), AdbcError>;
+
+    /// Set a string option on a statement.
+    fn set_option(&mut self, key: &str, value: &str) -> Result<(), AdbcError>;
+
+    /// Set the SQL query to execute.
+    fn set_sql_query(&mut self, query: &str) -> Result<(), AdbcError>;
+
+    /// Set the Substrait plan to execute.
+    fn set_substrait_plan(&mut self, plan: &[u8]) -> Result<(), AdbcError>;
+
+    /// Get the schema for bound parameters.
+    ///
+    /// This retrieves an Arrow schema describing the number, names, and
+    /// types of the parameters in a parameterized statement.  The fields
+    /// of the schema should be in order of the ordinal position of the
+    /// parameters; named parameters should appear only once.
+    ///
+    /// If the parameter does not have a name, or the name cannot be
+    /// determined, the name of the corresponding field in the schema will
+    /// be an empty string.  If the type cannot be determined, the type of
+    /// the corresponding field will be NA (NullType).
+    ///
+    /// This should return an error if called before [AdbcStatement::prepare].
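+    ///
+    /// For example (a sketch; here `statement` stands for some value
+    /// implementing this trait, and the placeholder syntax is illustrative):
+    ///
+    /// ```ignore
+    /// statement.set_sql_query("SELECT * FROM t WHERE id = ?")?;
+    /// statement.prepare()?;
+    ///
+    /// // One field per parameter, in ordinal order; an unnamed parameter gets
+    /// // an empty-string field name.
+    /// let param_schema = statement.get_param_schema()?;
+    /// assert_eq!(param_schema.fields().len(), 1);
+    /// ```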
+    fn get_param_schema(&mut self) -> Result<Schema, AdbcError>;

Review Comment:
   ```suggestion
       fn get_param_schema(&self) -> Result<Schema, AdbcError>;
   ```
   
   It shouldn't need `&mut self`, but the C API uses a non-const pointer here, so if we change this to `&self` the driver manager will just have to assume the call is safe. ([See discussion](https://github.com/apache/arrow-adbc/pull/417#discussion_r1097581799).) We could regard this quirk of the C API as just "C programmers being C programmers" and treat it as a `const` pointer for Rust, or be safety-paranoid and keep `&mut self`. It's unclear to me whether this will have a big impact, since I expect `AdbcStatement` to only be used in contexts where someone already has a mutable reference to it.
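   
   For illustration, roughly the shape this could take on the driver-manager side if we went with `&self` (a sketch only, not code from this PR; `FFI_AdbcStatement`, `ImportedStatement`, and the field names are made up):
   
   ```rust
   use arrow_schema::Schema;
   
   use crate::error::AdbcError;
   
   // Made-up stand-in for the C statement handle owned by the driver manager.
   struct FFI_AdbcStatement { /* opaque */ }
   
   // Made-up wrapper that would implement AdbcStatement over the C API.
   struct ImportedStatement {
       inner: *mut FFI_AdbcStatement,
   }
   
   impl ImportedStatement {
       // Exposed as `&self` even though the C entry point takes a non-const
       // pointer: the handle is a raw pointer, so no `&mut` borrow is needed to
       // hand it to C, and we assume the call does not logically mutate the
       // statement.
       fn get_param_schema(&self) -> Result<Schema, AdbcError> {
           // Inside: unsafe call through the C function table with `self.inner`,
           // then conversion of the returned C ArrowSchema into an arrow Schema.
           // Both steps are elided in this sketch.
           unimplemented!()
       }
   }
   ```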



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org