Posted to commits@arrow.apache.org by ze...@apache.org on 2023/04/28 16:17:16 UTC

[arrow-adbc] branch main updated: feat(go/adbc/driver): Adbc Driver for Snowflake (#586)

This is an automated email from the ASF dual-hosted git repository.

zeroshade pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/arrow-adbc.git


The following commit(s) were added to refs/heads/main by this push:
     new 50a9e89  feat(go/adbc/driver): Adbc Driver for Snowflake (#586)
50a9e89 is described below

commit 50a9e89cce420108b5f4add485b635f342c1d681
Author: Matt Topol <zo...@gmail.com>
AuthorDate: Fri Apr 28 12:17:11 2023 -0400

    feat(go/adbc/driver): Adbc Driver for Snowflake (#586)
    
    A Snowflake ADBC driver that we can package up just as we do for the Flight SQL driver.
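
For illustration, here is a minimal sketch of what using the new Go driver
looks like. It assumes the same surface as the Flight SQL driver (a
`NewDriver(memory.Allocator)` constructor and the standard `adbc` option
keys); the DSN value is a placeholder, not a working credential.

```go
package main

import (
	"context"
	"fmt"

	"github.com/apache/arrow-adbc/go/adbc"
	"github.com/apache/arrow-adbc/go/adbc/driver/snowflake"
	"github.com/apache/arrow/go/v12/arrow/memory"
)

func main() {
	// Constructor assumed to mirror the Flight SQL driver.
	drv := snowflake.NewDriver(memory.DefaultAllocator)
	db, err := drv.NewDatabase(map[string]string{
		adbc.OptionKeyURI: "user:pass@account/database", // gosnowflake-style DSN (placeholder)
	})
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cnxn, err := db.Open(ctx)
	if err != nil {
		panic(err)
	}
	defer cnxn.Close()

	stmt, err := cnxn.NewStatement()
	if err != nil {
		panic(err)
	}
	defer stmt.Close()

	if err := stmt.SetSqlQuery("SELECT 1"); err != nil {
		panic(err)
	}
	rdr, _, err := stmt.ExecuteQuery(ctx)
	if err != nil {
		panic(err)
	}
	defer rdr.Release()

	// Results come back as Arrow record batches.
	for rdr.Next() {
		fmt.Println(rdr.Record())
	}
}
```
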
---
 .github/workflows/integration.yml                  |  45 ++
 .github/workflows/native-unix.yml                  |   4 +
 CONTRIBUTING.md                                    |  20 +
 LICENSE.txt                                        | 577 +++++++++++++--
 c/CMakeLists.txt                                   |   4 +
 c/driver/snowflake/CMakeLists.txt                  |  60 ++
 .../driver/snowflake/adbc-driver-snowflake.pc.in   |  32 +-
 c/driver/snowflake/snowflake_test.cc               | 182 +++++
 c/validation/adbc_validation.cc                    |  54 +-
 c/validation/adbc_validation.h                     |   3 +
 c/validation/adbc_validation_util.h                |   6 +-
 ci/conda/build-cpp.sh                              |   5 +
 ci/linux-packages/debian/control                   |  24 +
 .../debian/libadbc-driver-snowflake-dev.install    |   3 +
 .../debian/libadbc-driver-snowflake004.install     |   1 +
 ci/linux-packages/debian/rules                     |   5 +-
 ci/linux-packages/yum/apache-arrow-adbc.spec.in    |  34 +-
 ci/scripts/cpp_build.sh                            |   2 +
 ci/scripts/cpp_test.sh                             |   1 +
 ci/scripts/go_build.ps1                            |   7 +
 ci/scripts/go_build.sh                             |   8 +-
 dev/release/verify-apt.sh                          |   3 +
 dev/release/verify-release-candidate.ps1           |   2 +
 dev/release/verify-release-candidate.sh            |   5 +-
 dev/release/verify-yum.sh                          |   4 +
 docs/source/driver/go/flight_sql.rst               |   2 +-
 docs/source/driver/go/snowflake.rst                | 325 +++++++++
 go/adbc/driver/flightsql/flightsql_adbc.go         | 280 +------
 go/adbc/driver/flightsql/flightsql_adbc_test.go    |  39 +-
 go/adbc/driver/flightsql/flightsql_statement.go    |   4 +-
 go/adbc/driver/flightsql/record_reader.go          |  52 +-
 go/adbc/driver/internal/shared_utils.go            | 306 ++++++++
 go/adbc/driver/snowflake/connection.go             | 805 +++++++++++++++++++++
 go/adbc/driver/snowflake/driver.go                 | 433 +++++++++++
 go/adbc/driver/snowflake/driver_test.go            | 226 ++++++
 go/adbc/driver/snowflake/record_reader.go          | 392 ++++++++++
 go/adbc/driver/snowflake/statement.go              | 582 +++++++++++++++
 go/adbc/go.mod                                     |  74 +-
 go/adbc/go.sum                                     | 186 +++--
 go/adbc/pkg/Makefile                               |   3 +-
 go/adbc/pkg/doc.go                                 |   1 +
 go/adbc/pkg/gen/main.go                            |   5 +-
 go/adbc/pkg/snowflake/driver.go                    | 728 +++++++++++++++++++
 go/adbc/pkg/snowflake/utils.c                      | 200 +++++
 go/adbc/pkg/snowflake/utils.h                      |  97 +++
 go/adbc/standard_schemas.go                        |   8 +
 go/adbc/utils/utils.go                             |  67 ++
 go/adbc/validation/validation.go                   | 317 +++++++-
 license.tpl                                        | 321 ++++++++
 .../adbc_driver_flightsql/__init__.py              |   2 +-
 50 files changed, 6047 insertions(+), 499 deletions(-)

diff --git a/.github/workflows/integration.yml b/.github/workflows/integration.yml
index 41213cc..c71a9b6 100644
--- a/.github/workflows/integration.yml
+++ b/.github/workflows/integration.yml
@@ -208,3 +208,48 @@ jobs:
           ADBC_POSTGRESQL_TEST_URI: "postgres://localhost:5432/postgres?user=postgres&password=password"
         run: |
           ./ci/scripts/python_test.sh "$(pwd)" "$(pwd)/build"
+
+  snowflake:
+    name: "Snowflake Integration Tests"
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+          persist-credentials: false
+      - name: Get Date
+        id: get-date
+        shell: bash
+        run: |
+          echo "today=$(/bin/date -u '+%Y%m%d')" >> $GITHUB_OUTPUT
+      - name: Cache Conda
+        uses: actions/cache/restore@v3
+        with:
+          path: ~/conda_pkgs_dir
+          key: conda-${{ runner.os }}-${{ steps.get-date.outputs.today }}-${{ env.CACHE_NUMBER }}-${{ hashFiles('ci/**') }}
+      - uses: conda-incubator/setup-miniconda@v2
+        with:
+          miniforge-variant: Mambaforge
+          miniforge-version: latest
+          use-only-tar-bz2: false
+          use-mamba: true
+      - name: Install Dependencies
+        shell: bash -l {0}
+        run: |
+          mamba install -c conda-forge \
+            --file ci/conda_env_cpp.txt
+      - uses: actions/setup-go@v3
+        with:
+          go-version: 1.18.6
+          check-latest: true
+          cache: true
+          cache-dependency-path: go/adbc/go.sum
+      - name: Build and Test Snowflake Driver
+        shell: bash -l {0}
+        env:
+          BUILD_ALL: "0"
+          BUILD_DRIVER_SNOWFLAKE: "1"
+          ADBC_SNOWFLAKE_URI: ${{ secrets.SNOWFLAKE_URI }}
+        run: |
+          ./ci/scripts/cpp_build.sh "$(pwd)" "$(pwd)/build"
+          ./ci/scripts/cpp_test.sh "$(pwd)" "$(pwd)/build"
diff --git a/.github/workflows/native-unix.yml b/.github/workflows/native-unix.yml
index 82c99bd..c31b529 100644
--- a/.github/workflows/native-unix.yml
+++ b/.github/workflows/native-unix.yml
@@ -280,6 +280,8 @@ jobs:
           staticcheck -f stylish ./...
           popd
       - name: Go Test
+        env:
+          SNOWFLAKE_URI: ${{ secrets.SNOWFLAKE_URI }}
         run: |
           ./ci/scripts/go_test.sh "$(pwd)" "$(pwd)/build" "$HOME/local"
 
@@ -349,6 +351,8 @@ jobs:
           popd
       - name: Go Test
         shell: bash -l {0}
+        env:
+          SNOWFLAKE_URI: ${{ secrets.SNOWFLAKE_URI }}
         run: |
           export PATH=$RUNNER_TOOL_CACHE/go/1.18.6/x64/bin:$PATH
           ./ci/scripts/go_test.sh "$(pwd)" "$(pwd)/build" "$HOME/local"
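
Forks and external PRs do not receive the `SNOWFLAKE_URI` secret, so the
variable arrives empty and the Go tests need to skip rather than fail. A
minimal sketch of that guard; only the environment variable name comes from
the workflow above, the test name and body are illustrative:

```go
package snowflake_test

import (
	"os"
	"testing"
)

func TestSnowflakeIntegration(t *testing.T) {
	// Secrets are not exposed to forked PRs, so the variable may be empty.
	uri := os.Getenv("SNOWFLAKE_URI")
	if uri == "" {
		t.Skip("skipping Snowflake integration tests: SNOWFLAKE_URI not set")
	}
	_ = uri // hand the URI to the driver / validation suite here
}
```
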
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 59b06e5..e97110a 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -254,4 +254,24 @@ ci(go/adbc/drivermgr): pass through DYLD_LIBRARY_PATH in tests
 fix(java/driver/jdbc): adjust SQL type mapping for JDBC driver
 ```
 
+## Re-generating 3rd Party Licenses
+
+To collect the licenses for our Go dependencies we use the
+`github.com/google/go-licenses` tool. We keep a template containing the
+non-Go licenses; install `go-licenses` with:
+
+```shell
+$ go install github.com/google/go-licenses@latest
+```
+
+Finally, generate `LICENSE.txt` with the following command:
+
+```shell
+$ cd go/adbc && go-licenses report ./... \
+  --ignore github.com/apache/arrow-adbc/go/adbc \
+  --ignore github.com/apache/arrow/go/v11 \
+  --ignore github.com/apache/arrow/go/v12 \
+  --template ../../license.tpl > ../../LICENSE.txt 2> /dev/null
+```
+
 [conventional-commits]: https://www.conventionalcommits.org/en/v1.0.0/
diff --git a/LICENSE.txt b/LICENSE.txt
index 22c9f15..e0fc358 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -311,6 +311,142 @@ distributions, like the Python wheels. SQLite is public domain.
 
 --------------------------------------------------------------------------------
 
+3rdparty dependency github.com/99designs/keyring
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/99designs/keyring is under the MIT license.
+The MIT License (MIT)
+
+Copyright (c) 2015 99designs
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/Azure/azure-sdk-for-go/sdk/azcore
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/Azure/azure-sdk-for-go/sdk/azcore is under the MIT license.
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/Azure/azure-sdk-for-go/sdk/internal
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/Azure/azure-sdk-for-go/sdk/internal is under the MIT license.
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob is under the MIT license.
+MIT License
+
+Copyright (c) Microsoft Corporation. All rights reserved.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/JohnCGriffin/overflow
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/JohnCGriffin/overflow is under the MIT license.
+MIT License
+
+Copyright (c) 2017 John C. Griffin,
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+--------------------------------------------------------------------------------
+
 3rdparty dependency github.com/andybalholm/brotli
 is statically linked in certain binary distributions, like the Python wheels.
 github.com/andybalholm/brotli is under the MIT license.
@@ -340,6 +476,152 @@ THE SOFTWARE.
 is statically linked in certain binary distributions, like the Python wheels.
 github.com/apache/thrift/lib/go/thrift is under the Apache-2.0 license.
 
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2 is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/credentials
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/credentials is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/feature/s3/manager
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/feature/s3/manager is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/internal/configsources
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/internal/configsources is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/internal/endpoints/v2
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/internal/sync/singleflight
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/internal/sync/singleflight is under the BSD-3-Clause license.
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/internal/v4a
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/internal/v4a is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/service/internal/checksum
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/service/internal/checksum is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/service/internal/presigned-url
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/service/internal/s3shared
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/aws-sdk-go-v2/service/s3
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/aws-sdk-go-v2/service/s3 is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/smithy-go
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/smithy-go is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/aws/smithy-go/internal/sync/singleflight
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/aws/smithy-go/internal/sync/singleflight is under the BSD-3-Clause license.
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
 --------------------------------------------------------------------------------
 
 3rdparty dependency github.com/bluele/gcache
@@ -390,6 +672,74 @@ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 
 --------------------------------------------------------------------------------
 
+3rdparty dependency github.com/dvsekhvalnov/jose2go
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/dvsekhvalnov/jose2go is under the MIT license.
+The MIT License (MIT)
+
+Copyright (c) 2014
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/form3tech-oss/jwt-go
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/form3tech-oss/jwt-go is under the MIT license.
+Copyright (c) 2012 Dave Grijalva
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/gabriel-vasile/mimetype
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/gabriel-vasile/mimetype is under the MIT license.
+MIT License
+
+Copyright (c) 2018-2020 Gabriel Vasile
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+--------------------------------------------------------------------------------
+
 3rdparty dependency github.com/goccy/go-json
 is statically linked in certain binary distributions, like the Python wheels.
 github.com/goccy/go-json is under the MIT license.
@@ -417,6 +767,37 @@ SOFTWARE.
 
 --------------------------------------------------------------------------------
 
+3rdparty dependency github.com/godbus/dbus
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/godbus/dbus is under the BSD-2-Clause license.
+Copyright (c) 2013, Georg Reinke (<guelfey at gmail dot com>), Google
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions
+are met:
+
+1. Redistributions of source code must retain the above copyright notice,
+this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright
+notice, this list of conditions and the following disclaimer in the
+documentation and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+--------------------------------------------------------------------------------
+
 3rdparty dependency github.com/golang/protobuf
 is statically linked in certain binary distributions, like the Python wheels.
 github.com/golang/protobuf is under the BSD-3-Clause license.
@@ -490,6 +871,40 @@ github.com/google/flatbuffers/go is under the Apache-2.0 license.
 
 --------------------------------------------------------------------------------
 
+3rdparty dependency github.com/gsterjov/go-libsecret
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/gsterjov/go-libsecret is under the MIT license.
+The MIT License (MIT)
+
+Copyright (c) 2016 Goran Sterjov
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/jmespath/go-jmespath
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/jmespath/go-jmespath is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
 3rdparty dependency github.com/klauspost/compress
 is statically linked in certain binary distributions, like the Python wheels.
 github.com/klauspost/compress is under the Apache-2.0 license.
@@ -583,6 +998,33 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
 
 
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/mtibben/percent
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/mtibben/percent is under the MIT license.
+MIT License
+
+Copyright (c) 2020 Michael Tibben
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
 --------------------------------------------------------------------------------
 
 3rdparty dependency github.com/pierrec/lz4/v4
@@ -617,6 +1059,35 @@ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/pkg/browser
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/pkg/browser is under the BSD-2-Clause license.
+Copyright (c) 2014, Dave Cheney <da...@cheney.net>
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+  list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice,
+  this list of conditions and the following disclaimer in the documentation
+  and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 --------------------------------------------------------------------------------
 
 3rdparty dependency github.com/pmezard/go-difflib/difflib
@@ -652,6 +1123,39 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 --------------------------------------------------------------------------------
 
+3rdparty dependency github.com/sirupsen/logrus
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/sirupsen/logrus is under the MIT license.
+The MIT License (MIT)
+
+Copyright (c) 2014 Simon Eskildsen
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency github.com/snowflakedb/gosnowflake
+is statically linked in certain binary distributions, like the Python wheels.
+github.com/snowflakedb/gosnowflake is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
 3rdparty dependency github.com/stretchr/testify
 is statically linked in certain binary distributions, like the Python wheels.
 github.com/stretchr/testify is under the MIT license.
@@ -710,9 +1214,25 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 --------------------------------------------------------------------------------
 
-3rdparty dependency golang.org/x/exp/maps
+3rdparty dependency google.golang.org/genproto/googleapis/rpc/status
+is statically linked in certain binary distributions, like the Python wheels.
+google.golang.org/genproto/googleapis/rpc/status is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency google.golang.org/grpc
 is statically linked in certain binary distributions, like the Python wheels.
-golang.org/x/exp/maps is under the BSD-3-Clause license.
+google.golang.org/grpc is under the Apache-2.0 license.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency golang.org/x/crypto/ocsp
+is statically linked in certain binary distributions, like the Python wheels.
+golang.org/x/crypto/ocsp is under the BSD-3-Clause license.
+
+3rdparty dependency golang.org/x/exp
+is statically linked in certain binary distributions, like the Python wheels.
+golang.org/x/exp is under the BSD-3-Clause license.
 
 3rdparty dependency golang.org/x/mod/semver
 is statically linked in certain binary distributions, like the Python wheels.
@@ -730,6 +1250,10 @@ golang.org/x/sync/errgroup is under the BSD-3-Clause license.
 is statically linked in certain binary distributions, like the Python wheels.
 golang.org/x/sys is under the BSD-3-Clause license.
 
+3rdparty dependency golang.org/x/term
+is statically linked in certain binary distributions, like the Python wheels.
+golang.org/x/term is under the BSD-3-Clause license.
+
 3rdparty dependency golang.org/x/text
 is statically linked in certain binary distributions, like the Python wheels.
 golang.org/x/text is under the BSD-3-Clause license.
@@ -738,56 +1262,9 @@ golang.org/x/text is under the BSD-3-Clause license.
 is statically linked in certain binary distributions, like the Python wheels.
 golang.org/x/tools is under the BSD-3-Clause license.
 
-3rdparty dependency golang.org/x/xerrors
-is statically linked in certain binary distributions, like the Python wheels.
-golang.org/x/xerrors is under the BSD-3-Clause license.
-
-Copyright (c) 2009 The Go Authors. All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
-   * Redistributions of source code must retain the above copyright
-notice, this list of conditions and the following disclaimer.
-   * Redistributions in binary form must reproduce the above
-copyright notice, this list of conditions and the following disclaimer
-in the documentation and/or other materials provided with the
-distribution.
-   * Neither the name of Google Inc. nor the names of its
-contributors may be used to endorse or promote products derived from
-this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
---------------------------------------------------------------------------------
-
-3rdparty dependency google.golang.org/genproto/googleapis/rpc/status
-is statically linked in certain binary distributions, like the Python wheels.
-google.golang.org/genproto/googleapis/rpc/status is under the Apache-2.0 license.
-
---------------------------------------------------------------------------------
-
-3rdparty dependency google.golang.org/grpc
-is statically linked in certain binary distributions, like the Python wheels.
-google.golang.org/grpc is under the Apache-2.0 license.
-
---------------------------------------------------------------------------------
-
 3rdparty dependency google.golang.org/protobuf
 is statically linked in certain binary distributions, like the Python wheels.
 google.golang.org/protobuf is under the BSD-3-Clause license.
-
 Copyright (c) 2018 The Go Authors. All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
@@ -820,6 +1297,8 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 3rdparty dependency gopkg.in/yaml.v3
 is statically linked in certain binary distributions, like the Python wheels.
+gopkg.in/yaml.v3 is under the MIT license.
+
 This project is covered by two different licenses: MIT and Apache.
 
 #### MIT License ####
diff --git a/c/CMakeLists.txt b/c/CMakeLists.txt
index a0df701..7f417b0 100644
--- a/c/CMakeLists.txt
+++ b/c/CMakeLists.txt
@@ -45,5 +45,9 @@ if(ADBC_DRIVER_SQLITE)
   add_subdirectory(driver/sqlite)
 endif()
 
+if(ADBC_DRIVER_SNOWFLAKE)
+  add_subdirectory(driver/snowflake)
+endif()
+
 validate_config()
 config_summary_message()
diff --git a/c/driver/snowflake/CMakeLists.txt b/c/driver/snowflake/CMakeLists.txt
new file mode 100644
index 0000000..ad03297
--- /dev/null
+++ b/c/driver/snowflake/CMakeLists.txt
@@ -0,0 +1,60 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+include(GoUtils)
+
+set(LDFLAGS "$<$<CONFIG:Release>:-s> $<$<CONFIG:Release>:-w>")
+add_go_lib("${REPOSITORY_ROOT}/go/adbc/pkg/snowflake/"
+           adbc_driver_snowflake
+           SOURCES
+           driver.go
+           utils.h
+           utils.c
+           BUILD_TAGS
+           driverlib
+           PKG_CONFIG_NAME
+           adbc-driver-snowflake
+           SHARED_LINK_FLAGS
+           ${LDFLAGS})
+
+include_directories(SYSTEM ${REPOSITORY_ROOT})
+include_directories(SYSTEM ${REPOSITORY_ROOT}/c/)
+include_directories(SYSTEM ${REPOSITORY_ROOT}/c/vendor)
+
+if(ADBC_TEST_LINKAGE STREQUAL "shared")
+  set(TEST_LINK_LIBS adbc_driver_snowflake_shared)
+else()
+  set(TEST_LINK_LIBS adbc_driver_snowflake_static)
+endif()
+
+if(ADBC_BUILD_TESTS)
+  add_test_case(driver_snowflake_test
+                PREFIX
+                adbc
+                SOURCES
+                snowflake_test.cc
+                ../../validation/adbc_validation.cc
+                ../../validation/adbc_validation_util.cc
+                EXTRA_LINK_LIBS
+                nanoarrow
+                ${TEST_LINK_LIBS})
+  target_compile_features(adbc-driver-snowflake-test PRIVATE cxx_std_17)
+  adbc_configure_target(adbc-driver-snowflake-test)
+endif()
+
+validate_config()
+config_summary_message()
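
The `add_go_lib` call above compiles the Go package under
`go/adbc/pkg/snowflake` (built with the `driverlib` tag) into
`adbc_driver_snowflake` shared and static libraries. As a hedged sketch of
how that artifact can then be consumed, here is the shared library loaded
through the driver manager instead of linking the Go driver directly; the
`drivermgr` wrapper and the `driver` option key are assumptions based on how
the other ADBC drivers are loaded:

```go
package main

import (
	"context"

	"github.com/apache/arrow-adbc/go/adbc"
	"github.com/apache/arrow-adbc/go/adbc/drivermgr"
)

func main() {
	// The driver manager dlopen()s the library built by add_go_lib above.
	var drv drivermgr.Driver
	db, err := drv.NewDatabase(map[string]string{
		"driver":          "adbc_driver_snowflake",
		adbc.OptionKeyURI: "user:pass@account/database", // placeholder DSN
	})
	if err != nil {
		panic(err)
	}

	cnxn, err := db.Open(context.Background())
	if err != nil {
		panic(err)
	}
	defer cnxn.Close()
}
```
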
diff --git a/ci/scripts/go_build.ps1 b/c/driver/snowflake/adbc-driver-snowflake.pc.in
similarity index 59%
copy from ci/scripts/go_build.ps1
copy to c/driver/snowflake/adbc-driver-snowflake.pc.in
index 3e4a341..5ab2cec 100644
--- a/ci/scripts/go_build.ps1
+++ b/c/driver/snowflake/adbc-driver-snowflake.pc.in
@@ -1,4 +1,3 @@
-#!/usr/bin/env bash
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,28 +15,11 @@
 # specific language governing permissions and limitations
 # under the License.
 
-$ErrorActionPreference = "Stop"
+prefix=@CMAKE_INSTALL_PREFIX@
+libdir=@ADBC_PKG_CONFIG_LIBDIR@
 
-$SourceDir = $Args[0]
-$BuildDir = $Args[1]
-$InstallDir = if ($Args[2] -ne $null) { $Args[2] } else { Join-Path $BuildDir "local/" }
-
-$GoDir = Join-Path $SourceDir "go" "adbc"
-
-Push-Location $GoDir
-
-go build -v ./...
-if (-not $?) { exit 1 }
-
-if ($env:CGO_ENABLED -eq "1") {
-    Push-Location pkg
-    go build `
-      -tags driverlib `
-      -o adbc_driver_flightsql.dll `
-      -buildmode=c-shared `
-      ./flightsql
-    if (-not $?) { exit 1 }
-    Pop-Location
-}
-
-Pop-Location
+Name: Apache Arrow Database Connectivity (ADBC) Snowflake driver
+Description: The ADBC Snowflake driver provides an ADBC driver for Snowflake.
+URL: https://github.com/apache/arrow-adbc
+Version: @ADBC_VERSION@
+Libs: -L${libdir} -ladbc_driver_snowflake
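
The generated `.pc` file lets downstream builds discover the installed
driver. As one example, a cgo program can pull the flags straight from
pkg-config; this is a sketch, and since the file above only emits `Libs` it
assumes `adbc.h` is already on the default include path:

```go
package main

/*
#cgo pkg-config: adbc-driver-snowflake
#include <adbc.h>
*/
import "C"

import "fmt"

func main() {
	// Referencing a type from adbc.h is enough to prove the pkg-config
	// wiring: cgo pulled the link flags from adbc-driver-snowflake.pc.
	var db C.struct_AdbcDatabase
	fmt.Printf("%#v\n", db)
}
```
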
diff --git a/c/driver/snowflake/snowflake_test.cc b/c/driver/snowflake/snowflake_test.cc
new file mode 100644
index 0000000..021269c
--- /dev/null
+++ b/c/driver/snowflake/snowflake_test.cc
@@ -0,0 +1,182 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#include <adbc.h>
+#include <gmock/gmock-matchers.h>
+#include <gtest/gtest-matchers.h>
+#include <gtest/gtest-param-test.h>
+#include <gtest/gtest.h>
+#include <nanoarrow/nanoarrow.h>
+#include <algorithm>
+#include <cstring>
+#include "validation/adbc_validation.h"
+#include "validation/adbc_validation_util.h"
+
+using adbc_validation::IsOkStatus;
+
+#define CHECK_OK(EXPR)                                              \
+  do {                                                              \
+    if (auto adbc_status = (EXPR); adbc_status != ADBC_STATUS_OK) { \
+      return adbc_status;                                           \
+    }                                                               \
+  } while (false)
+
+class SnowflakeQuirks : public adbc_validation::DriverQuirks {
+ public:
+  SnowflakeQuirks() {
+    uri_ = std::getenv("ADBC_SNOWFLAKE_URI");
+    if (uri_ == nullptr || std::strlen(uri_) == 0) {
+      skip_ = true;
+    }
+  }
+
+  AdbcStatusCode SetupDatabase(struct AdbcDatabase* database,
+                               struct AdbcError* error) const override {
+    EXPECT_THAT(AdbcDatabaseSetOption(database, "uri", uri_, error), IsOkStatus(error));
+    return ADBC_STATUS_OK;
+  }
+
+  AdbcStatusCode DropTable(struct AdbcConnection* connection, const std::string& name,
+                           struct AdbcError* error) const override {
+    adbc_validation::Handle<struct AdbcStatement> statement;
+    CHECK_OK(AdbcStatementNew(connection, &statement.value, error));
+
+    std::string drop = "DROP TABLE IF EXISTS ";
+    drop += name;
+    CHECK_OK(AdbcStatementSetSqlQuery(&statement.value, drop.c_str(), error));
+    CHECK_OK(AdbcStatementExecuteQuery(&statement.value, nullptr, nullptr, error));
+
+    CHECK_OK(AdbcStatementRelease(&statement.value, error));
+    return ADBC_STATUS_OK;
+  }
+
+  AdbcStatusCode CreateSampleTable(struct AdbcConnection* connection,
+                                   const std::string& name,
+                                   struct AdbcError* error) const override {
+    adbc_validation::Handle<struct AdbcStatement> statement;
+    CHECK_OK(AdbcStatementNew(connection, &statement.value, error));
+
+    std::string create = "CREATE TABLE ";
+    create += name;
+    create += " (int64s INT, strings TEXT)";
+    CHECK_OK(AdbcStatementSetSqlQuery(&statement.value, create.c_str(), error));
+    CHECK_OK(AdbcStatementExecuteQuery(&statement.value, nullptr, nullptr, error));
+
+    std::string insert = "INSERT INTO ";
+    insert += name;
+    insert += " VALUES (42, 'foo'), (-42, NULL), (NULL, '')";
+    CHECK_OK(AdbcStatementSetSqlQuery(&statement.value, insert.c_str(), error));
+    CHECK_OK(AdbcStatementExecuteQuery(&statement.value, nullptr, nullptr, error));
+
+    CHECK_OK(AdbcStatementRelease(&statement.value, error));
+    return ADBC_STATUS_OK;
+  }
+
+  ArrowType IngestSelectRoundTripType(ArrowType ingest_type) const override {
+    switch (ingest_type) {
+      case NANOARROW_TYPE_INT8:
+      case NANOARROW_TYPE_UINT8:
+      case NANOARROW_TYPE_INT16:
+      case NANOARROW_TYPE_UINT16:
+      case NANOARROW_TYPE_INT32:
+      case NANOARROW_TYPE_UINT32:
+      case NANOARROW_TYPE_INT64:
+      case NANOARROW_TYPE_UINT64:
+        return NANOARROW_TYPE_INT64;
+      case NANOARROW_TYPE_FLOAT:
+      case NANOARROW_TYPE_DOUBLE:
+        return NANOARROW_TYPE_DOUBLE;
+      default:
+        return ingest_type;
+    }
+  }
+
+  std::string BindParameter(int index) const override { return "?"; }
+  bool supports_concurrent_statements() const override { return true; }
+  bool supports_transactions() const override { return true; }
+  bool supports_get_sql_info() const override { return false; }
+  bool supports_get_objects() const override { return true; }
+  bool supports_bulk_ingest() const override { return true; }
+  bool supports_partitioned_data() const override { return false; }
+  bool supports_dynamic_parameter_binding() const override { return false; }
+  bool ddl_implicit_commit_txn() const override { return true; }
+
+  const char* uri_;
+  bool skip_{false};
+};
+
+class SnowflakeTest : public ::testing::Test, public adbc_validation::DatabaseTest {
+ public:
+  const adbc_validation::DriverQuirks* quirks() const override { return &quirks_; }
+  void SetUp() override {
+    if (quirks_.skip_) {
+      GTEST_SKIP();
+    }
+    ASSERT_NO_FATAL_FAILURE(SetUpTest());
+  }
+  void TearDown() override {
+    if (!quirks_.skip_) {
+      ASSERT_NO_FATAL_FAILURE(TearDownTest());
+    }
+  }
+
+ protected:
+  SnowflakeQuirks quirks_;
+};
+ADBCV_TEST_DATABASE(SnowflakeTest)
+
+class SnowflakeConnectionTest : public ::testing::Test,
+                                public adbc_validation::ConnectionTest {
+ public:
+  const adbc_validation::DriverQuirks* quirks() const override { return &quirks_; }
+  void SetUp() override {
+    if (quirks_.skip_) {
+      GTEST_SKIP();
+    }
+    ASSERT_NO_FATAL_FAILURE(SetUpTest());
+  }
+  void TearDown() override {
+    if (!quirks_.skip_) {
+      ASSERT_NO_FATAL_FAILURE(TearDownTest());
+    }
+  }
+
+ protected:
+  SnowflakeQuirks quirks_;
+};
+ADBCV_TEST_CONNECTION(SnowflakeConnectionTest)
+
+class SnowflakeStatementTest : public ::testing::Test,
+                               public adbc_validation::StatementTest {
+ public:
+  const adbc_validation::DriverQuirks* quirks() const override { return &quirks_; }
+  void SetUp() override {
+    if (quirks_.skip_) {
+      GTEST_SKIP();
+    }
+    ASSERT_NO_FATAL_FAILURE(SetUpTest());
+  }
+  void TearDown() override {
+    if (!quirks_.skip_) {
+      ASSERT_NO_FATAL_FAILURE(TearDownTest());
+    }
+  }
+
+ protected:
+  SnowflakeQuirks quirks_;
+};
+ADBCV_TEST_STATEMENT(SnowflakeStatementTest)
diff --git a/c/validation/adbc_validation.cc b/c/validation/adbc_validation.cc
index bfd1322..0dc17f5 100644
--- a/c/validation/adbc_validation.cc
+++ b/c/validation/adbc_validation.cc
@@ -17,6 +17,7 @@
 
 #include "adbc_validation.h"
 
+#include <algorithm>
 #include <cerrno>
 #include <cstring>
 #include <limits>
@@ -51,6 +52,15 @@ namespace {
       return adbc_status;                                           \
     }                                                               \
   } while (false)
+
+/// Case-insensitive string comparison.
+bool iequals(std::string_view s1, std::string_view s2) {
+  return std::equal(s1.begin(), s1.end(), s2.begin(), s2.end(),
+                    [](unsigned char a, unsigned char b) {
+                      return std::tolower(a) == std::tolower(b);
+                    });
+}
+
 }  // namespace
 
 //------------------------------------------------------------
@@ -594,19 +604,18 @@ void ConnectionTest::TestMetadataGetObjectsTables() {
              db_schemas_index <
              ArrowArrayViewGetOffsetUnsafe(catalog_db_schemas_list, row + 1);
              db_schemas_index++) {
-          ASSERT_FALSE(
-              ArrowArrayViewIsNull(db_schema_tables_list, row + db_schemas_index))
+          ASSERT_FALSE(ArrowArrayViewIsNull(db_schema_tables_list, db_schemas_index))
               << "Row " << row << " should have non-null db_schema_tables";
 
           for (int64_t tables_index = ArrowArrayViewGetOffsetUnsafe(
                    db_schema_tables_list, row + db_schemas_index);
-               tables_index < ArrowArrayViewGetOffsetUnsafe(db_schema_tables_list,
-                                                            row + db_schemas_index + 1);
+               tables_index <
+               ArrowArrayViewGetOffsetUnsafe(db_schema_tables_list, db_schemas_index + 1);
                tables_index++) {
             ArrowStringView table_name = ArrowArrayViewGetStringUnsafe(
                 db_schema_tables->children[0], tables_index);
-            if (std::string_view(table_name.data, table_name.size_bytes) ==
-                "bulk_ingest") {
+            if (iequals(std::string(table_name.data, table_name.size_bytes),
+                        "bulk_ingest")) {
               found_expected_table = true;
             }
 
@@ -766,8 +775,7 @@ void ConnectionTest::TestMetadataGetObjectsColumns() {
              db_schemas_index <
              ArrowArrayViewGetOffsetUnsafe(catalog_db_schemas_list, row + 1);
              db_schemas_index++) {
-          ASSERT_FALSE(
-              ArrowArrayViewIsNull(db_schema_tables_list, row + db_schemas_index))
+          ASSERT_FALSE(ArrowArrayViewIsNull(db_schema_tables_list, db_schemas_index))
               << "Row " << row << " should have non-null db_schema_tables";
 
           for (int64_t tables_index =
@@ -783,8 +791,8 @@ void ConnectionTest::TestMetadataGetObjectsColumns() {
             ASSERT_FALSE(ArrowArrayViewIsNull(table_constraints_list, tables_index))
                 << "Row " << row << " should have non-null table_constraints";
 
-            if (std::string_view(table_name.data, table_name.size_bytes) ==
-                "bulk_ingest") {
+            if (iequals(std::string(table_name.data, table_name.size_bytes),
+                        "bulk_ingest")) {
               found_expected_table = true;
 
               for (int64_t columns_index =
@@ -794,7 +802,10 @@ void ConnectionTest::TestMetadataGetObjectsColumns() {
                    columns_index++) {
                 ArrowStringView name = ArrowArrayViewGetStringUnsafe(
                     table_columns->children[0], columns_index);
-                column_names.push_back(std::string(name.data, name.size_bytes));
+                std::string temp(name.data, name.size_bytes);
+                std::transform(temp.begin(), temp.end(), temp.begin(),
+                               [](unsigned char c) { return std::tolower(c); });
+                column_names.push_back(std::move(temp));
                 ordinal_positions.push_back(
                     static_cast<int32_t>(ArrowArrayViewGetIntUnsafe(
                         table_columns->children[1], columns_index)));
@@ -897,7 +908,9 @@ void StatementTest::TestSqlIngestType(ArrowType type,
   ASSERT_THAT(rows_affected,
               ::testing::AnyOf(::testing::Eq(values.size()), ::testing::Eq(-1)));
 
-  ASSERT_THAT(AdbcStatementSetSqlQuery(&statement, "SELECT * FROM bulk_ingest", &error),
+  ASSERT_THAT(AdbcStatementSetSqlQuery(
+                  &statement,
+                  "SELECT * FROM bulk_ingest ORDER BY \"col\" ASC NULLS FIRST", &error),
               IsOkStatus(&error));
   {
     StreamReader reader;
@@ -989,7 +1002,7 @@ void StatementTest::TestSqlIngestFloat64() {
 
 void StatementTest::TestSqlIngestString() {
   ASSERT_NO_FATAL_FAILURE(TestSqlIngestType<std::string>(
-      NANOARROW_TYPE_STRING, {std::nullopt, "", "1234", "", "例"}));
+      NANOARROW_TYPE_STRING, {std::nullopt, "", "", "1234", "例"}));
 }
 
 void StatementTest::TestSqlIngestBinary() {
@@ -1191,8 +1204,11 @@ void StatementTest::TestSqlIngestMultipleConnections() {
     ASSERT_THAT(AdbcConnectionInit(&connection2, &database, &error), IsOkStatus(&error));
     ASSERT_THAT(AdbcStatementNew(&connection2, &statement, &error), IsOkStatus(&error));
 
-    ASSERT_THAT(AdbcStatementSetSqlQuery(&statement, "SELECT * FROM bulk_ingest", &error),
-                IsOkStatus(&error));
+    ASSERT_THAT(
+        AdbcStatementSetSqlQuery(
+            &statement, "SELECT * FROM bulk_ingest ORDER BY \"int64s\" DESC NULLS LAST",
+            &error),
+        IsOkStatus(&error));
 
     {
       StreamReader reader;
@@ -1423,7 +1439,8 @@ void StatementTest::TestSqlPrepareSelectParams() {
 }
 
 void StatementTest::TestSqlPrepareUpdate() {
-  if (!quirks()->supports_bulk_ingest()) {
+  if (!quirks()->supports_bulk_ingest() ||
+      !quirks()->supports_dynamic_parameter_binding()) {
     GTEST_SKIP();
   }
 
@@ -1501,7 +1518,8 @@ void StatementTest::TestSqlPrepareUpdateNoParams() {
 }
 
 void StatementTest::TestSqlPrepareUpdateStream() {
-  if (!quirks()->supports_bulk_ingest()) {
+  if (!quirks()->supports_bulk_ingest() ||
+      !quirks()->supports_dynamic_parameter_binding()) {
     GTEST_SKIP();
   }
 
@@ -1771,7 +1789,7 @@ void StatementTest::TestSqlQueryErrors() {
 }
 
 void StatementTest::TestTransactions() {
-  if (!quirks()->supports_transactions()) {
+  if (!quirks()->supports_transactions() || quirks()->ddl_implicit_commit_txn()) {
     GTEST_SKIP();
   }
 
diff --git a/c/validation/adbc_validation.h b/c/validation/adbc_validation.h
index 03d6bf8..400e8f1 100644
--- a/c/validation/adbc_validation.h
+++ b/c/validation/adbc_validation.h
@@ -79,6 +79,9 @@ class DriverQuirks {
   /// \brief Whether transaction methods are implemented
   virtual bool supports_transactions() const { return true; }
 
+  /// \brief Whether DDL implicitly commits a transaction
+  virtual bool ddl_implicit_commit_txn() const { return false; }
+
   /// \brief Whether GetSqlInfo is implemented
   virtual bool supports_get_sql_info() const { return true; }
 
diff --git a/c/validation/adbc_validation_util.h b/c/validation/adbc_validation_util.h
index 9086068..f64f64b 100644
--- a/c/validation/adbc_validation_util.h
+++ b/c/validation/adbc_validation_util.h
@@ -244,10 +244,10 @@ int MakeArray(struct ArrowArray* parent, struct ArrowArray* array,
           return errno_res;
         }
       } else if constexpr (std::is_same<T, std::string>::value) {
-        struct ArrowStringView view;
-        view.data = v->c_str();
+        struct ArrowBufferView view;
+        view.data.as_char = v->c_str();
         view.size_bytes = v->size();
-        if (int errno_res = ArrowArrayAppendString(array, view); errno_res != 0) {
+        if (int errno_res = ArrowArrayAppendBytes(array, view); errno_res != 0) {
           return errno_res;
         }
       } else {
diff --git a/ci/conda/build-cpp.sh b/ci/conda/build-cpp.sh
index 42cb691..9ba1de9 100644
--- a/ci/conda/build-cpp.sh
+++ b/ci/conda/build-cpp.sh
@@ -32,6 +32,10 @@ case "${PKG_NAME}" in
     adbc-driver-sqlite-cpp)
         export BUILD_SQLITE=ON
         ;;
+    adbc-driver-snowflake-go)
+        export CGO_ENABLED=1
+        export BUILD_SNOWFLAKE=ON
+        ;;
     *)
         echo "Unknown package ${PKG_NAME}"
         exit 1
@@ -59,6 +63,7 @@ cmake "../c" \
       ${BUILD_FLIGHTSQL:+-DADBC_DRIVER_FLIGHTSQL="$BUILD_FLIGHTSQL" } \
       ${BUILD_POSTGRESQL:+-DADBC_DRIVER_POSTGRESQL="$BUILD_POSTGRESQL"} \
       ${BUILD_SQLITE:+-DADBC_DRIVER_SQLITE="$BUILD_SQLITE"} \
+      ${BUILD_SNOWFLAKE:+-DADBC_DRIVER_SNOWFLAKE="$BUILD_SNOWFLAKE"} \
       -DCMAKE_PREFIX_PATH="${PREFIX}"
 
 cmake --build . --target install -j
diff --git a/ci/linux-packages/debian/control b/ci/linux-packages/debian/control
index 46a54d0..9a1ee7d 100644
--- a/ci/linux-packages/debian/control
+++ b/ci/linux-packages/debian/control
@@ -125,6 +125,30 @@ Description: Apache Arrow Database Connectivity (ADBC) Flight SQL driver
  .
  This package provides CMake package, pkg-config package and so on.
 
+Package: libadbc-driver-snowflake004
+Section: libs
+Architecture: any
+Multi-Arch: same
+Pre-Depends: ${misc:Pre-Depends}
+Depends:
+  ${misc:Depends},
+  ${shlibs:Depends}
+Description: Apache Arrow Database Connectivity (ADBC) Snowflake driver
+ .
+ This package provides an ADBC driver for Snowflake.
+
+Package: libadbc-driver-snowflake-dev
+Section: libdevel
+Architecture: any
+Multi-Arch: same
+Depends:
+  ${misc:Depends},
+  libadbc-driver-snowflake004 (= ${binary:Version})
+Description: Apache Arrow Database Connectivity (ADBC) Snowflake driver
+ .
+ This package provides CMake package, pkg-config package and so on.
+
 Package: libadbc-glib0
 Section: libs
 Architecture: any
diff --git a/ci/linux-packages/debian/libadbc-driver-snowflake-dev.install b/ci/linux-packages/debian/libadbc-driver-snowflake-dev.install
new file mode 100644
index 0000000..dff1b82
--- /dev/null
+++ b/ci/linux-packages/debian/libadbc-driver-snowflake-dev.install
@@ -0,0 +1,3 @@
+usr/lib/*/libadbc_driver_snowflake.a
+usr/lib/*/libadbc_driver_snowflake.so
+usr/lib/*/pkgconfig/adbc-driver-snowflake.pc
diff --git a/ci/linux-packages/debian/libadbc-driver-snowflake004.install b/ci/linux-packages/debian/libadbc-driver-snowflake004.install
new file mode 100644
index 0000000..9e23426
--- /dev/null
+++ b/ci/linux-packages/debian/libadbc-driver-snowflake004.install
@@ -0,0 +1 @@
+usr/lib/*/libadbc_driver_snowflake.so.*
diff --git a/ci/linux-packages/debian/rules b/ci/linux-packages/debian/rules
index 5f3d137..6f92b62 100755
--- a/ci/linux-packages/debian/rules
+++ b/ci/linux-packages/debian/rules
@@ -40,7 +40,8 @@ override_dh_auto_configure:
           -DADBC_DRIVER_MANAGER=ON                      \
           -DADBC_DRIVER_POSTGRESQL=ON                   \
           -DADBC_DRIVER_SQLITE=ON                       \
-          -DADBC_DRIVER_FLIGHTSQL=ON
+          -DADBC_DRIVER_FLIGHTSQL=ON                    \
+          -DADBC_DRIVER_SNOWFLAKE=ON
 
 override_dh_auto_build:
 	dh_auto_build					\
@@ -74,4 +75,4 @@ override_dh_auto_test:
 override_dh_dwz:
 	# libadbc_driver_flightsql.so.* has compressed DWARF.
 	# We can't use dwz for compressed DWARF.
-	dh_dwz --exclude=libadbc_driver_flightsql
+	dh_dwz --exclude=libadbc_driver_flightsql --exclude=libadbc_driver_snowflake
diff --git a/ci/linux-packages/yum/apache-arrow-adbc.spec.in b/ci/linux-packages/yum/apache-arrow-adbc.spec.in
index 3f3379c..4aa9ee3 100644
--- a/ci/linux-packages/yum/apache-arrow-adbc.spec.in
+++ b/ci/linux-packages/yum/apache-arrow-adbc.spec.in
@@ -66,7 +66,8 @@ cd c
   -DADBC_DRIVER_MANAGER=ON \
   -DADBC_DRIVER_POSTGRESQL=ON \
   -DADBC_DRIVER_SQLITE=ON \
-  -DADBC_DRIVER_FLIGHTSQL=ON
+  -DADBC_DRIVER_FLIGHTSQL=ON \
+  -DADBC_DRIVER_SNOWFLAKE=ON
 %adbc_cmake_build
 cd -
 
@@ -215,6 +216,35 @@ Libraries and header files for ADBC Flight SQL driver.
 %{_libdir}/libadbc_driver_flightsql.so
 %{_libdir}/pkgconfig/adbc-driver-flightsql.pc
 
+%package driver-snowflake%{major_version}-libs
+Summary: ADBC Snowflake driver
+License: Apache-2.0
+
+%description driver-snowflake%{major_version}-libs
+This package provides an ADBC driver for Snowflake.
+
+%files driver-snowflake%{major_version}-libs
+%defattr(-,root,root,-)
+%doc README.md
+%license LICENSE.txt NOTICE.txt
+%{_libdir}/libadbc_driver_snowflake.so.*
+
+%package driver-snowflake-devel
+Summary:  Libraries and header files for ADBC Snowflake driver
+License:  Apache-2.0
+Requires: %{name}-driver-snowflake%{major_version}-libs = %{version}-%{release}
+
+%description driver-snowflake-devel
+Libraries and header files for ADBC Snowflake driver.
+
+%files driver-snowflake-devel
+%defattr(-,root,root,-)
+%doc README.md
+%license LICENSE.txt NOTICE.txt
+%{_libdir}/libadbc_driver_snowflake.a
+%{_libdir}/libadbc_driver_snowflake.so
+%{_libdir}/pkgconfig/adbc-driver-snowflake.pc
+
 %package glib%{major_version}-libs
 Summary:	Runtime libraries for ADBC GLib
 License:	Apache-2.0
@@ -266,5 +296,7 @@ Documentation for ADBC GLib.
 %{_docdir}/adbc-glib/
 
 %changelog
+* Thu Apr 27 2023 Matt Topol <ma...@voltrondata.com> - 0.4.0-1
+- Add Snowflake driver.
 * Mon Dec 26 2022 Sutou Kouhei <ko...@clear-code.com> - 0.1.0-1
 - New upstream release.
diff --git a/ci/scripts/cpp_build.sh b/ci/scripts/cpp_build.sh
index cca2589..0c9e496 100755
--- a/ci/scripts/cpp_build.sh
+++ b/ci/scripts/cpp_build.sh
@@ -23,6 +23,7 @@ set -e
 : ${BUILD_DRIVER_POSTGRESQL:=${BUILD_ALL}}
 : ${BUILD_DRIVER_SQLITE:=${BUILD_ALL}}
 : ${BUILD_DRIVER_FLIGHTSQL:=${BUILD_ALL}}
+: ${BUILD_DRIVER_SNOWFLAKE:=${BUILD_ALL}}
 
 : ${ADBC_BUILD_SHARED:=ON}
 : ${ADBC_BUILD_STATIC:=OFF}
@@ -54,6 +55,7 @@ build_subproject() {
           -DADBC_DRIVER_POSTGRESQL="${BUILD_DRIVER_POSTGRESQL}" \
           -DADBC_DRIVER_SQLITE="${BUILD_DRIVER_SQLITE}" \
           -DADBC_DRIVER_FLIGHTSQL="${BUILD_DRIVER_FLIGHTSQL}" \
+          -DADBC_DRIVER_SNOWFLAKE="${BUILD_DRIVER_SNOWFLAKE}" \
           -DADBC_BUILD_STATIC="${ADBC_BUILD_STATIC}" \
           -DADBC_BUILD_TESTS="${ADBC_BUILD_TESTS}" \
           -DADBC_USE_ASAN="${ADBC_USE_ASAN}" \
diff --git a/ci/scripts/cpp_test.sh b/ci/scripts/cpp_test.sh
index 1b169cf..685fb8d 100755
--- a/ci/scripts/cpp_test.sh
+++ b/ci/scripts/cpp_test.sh
@@ -23,6 +23,7 @@ set -e
 : ${BUILD_DRIVER_POSTGRESQL:=${BUILD_ALL}}
 : ${BUILD_DRIVER_SQLITE:=${BUILD_ALL}}
 : ${BUILD_DRIVER_FLIGHTSQL:=${BUILD_ALL}}
+: ${BUILD_DRIVER_SNOWFLAKE:=${BUILD_ALL}}
 
 test_subproject() {
     local -r build_dir="${1}"
diff --git a/ci/scripts/go_build.ps1 b/ci/scripts/go_build.ps1
index 3e4a341..4fdc40e 100644
--- a/ci/scripts/go_build.ps1
+++ b/ci/scripts/go_build.ps1
@@ -37,6 +37,13 @@ if ($env:CGO_ENABLED -eq "1") {
       -buildmode=c-shared `
       ./flightsql
     if (-not $?) { exit 1 }
+
+    go build `
+      -tags driverlib `
+      -o adbc_driver_snowflake.dll `
+      -buildmode=c-shared `
+      ./snowflake
+    if (-not $?) { exit 1 }
     Pop-Location
 }
 
diff --git a/ci/scripts/go_build.sh b/ci/scripts/go_build.sh
index 80fd19d..60ee13b 100755
--- a/ci/scripts/go_build.sh
+++ b/ci/scripts/go_build.sh
@@ -45,9 +45,13 @@ main() {
 
         mkdir -p "${install_dir}/lib"
         if [[ $(go env GOOS) == "linux" ]]; then
-            cp ./pkg/libadbc_driver_flightsql.so "${install_dir}/lib"
+            for lib in ./pkg/*.so; do
+                cp "${lib}" "${install_dir}/lib"
+            done
         else
-            cp ./pkg/libadbc_driver_flightsql.dylib "${install_dir}/lib"
+            for lib in ./pkg/*.dylib; do
+                cp "${lib}" "${install_dir}/lib"
+            done
         fi
     fi
 
diff --git a/dev/release/verify-apt.sh b/dev/release/verify-apt.sh
index d7b3c57..128118c 100755
--- a/dev/release/verify-apt.sh
+++ b/dev/release/verify-apt.sh
@@ -170,6 +170,9 @@ echo "::group::Test ADBC Flight SQL Driver"
 ${APT_INSTALL} libadbc-driver-flightsql-dev=${package_version}
 echo "::endgroup::"
 
+echo "::group::Test ADBC Snowflake Driver"
+${APT_INSTALL} libadbc-driver-snowflake-dev=${package_version}
+echo "::endgroup::"
 
 echo "::group::Test ADBC GLib"
 export G_DEBUG=fatal-warnings
diff --git a/dev/release/verify-release-candidate.ps1 b/dev/release/verify-release-candidate.ps1
index d451a59..ee2d693 100755
--- a/dev/release/verify-release-candidate.ps1
+++ b/dev/release/verify-release-candidate.ps1
@@ -130,10 +130,12 @@ if (-not $?) { exit 1 }
 
 $env:BUILD_DRIVER_FLIGHTSQL = "0"
 $env:BUILD_DRIVER_POSTGRESQL = "0"
+$env:BUILD_DRIVER_SNOWFLAKE = "0"
 & $(Join-Path $ArrowSourceDir ci\scripts\cpp_test.ps1) $ArrowSourceDir $CppBuildDir
 if (-not $?) { exit 1 }
 $env:BUILD_DRIVER_FLIGHTSQL = "1"
 $env:BUILD_DRIVER_POSTGRESQL = "1"
+$env:BUILD_DRIVER_SNOWFLAKE = "1"
 
 Show-Header "Verify Python Sources"
 
diff --git a/dev/release/verify-release-candidate.sh b/dev/release/verify-release-candidate.sh
index fae7994..0455d1e 100755
--- a/dev/release/verify-release-candidate.sh
+++ b/dev/release/verify-release-candidate.sh
@@ -23,7 +23,7 @@
 # - Maven >= 3.3.9
 # - JDK >=7
 # - gcc >= 4.8
-# - Go >= 1.17
+# - Go >= 1.18
 # - Docker
 #
 # To reuse build artifacts between runs set ARROW_TMPDIR environment variable to
@@ -432,9 +432,12 @@ test_cpp() {
   export BUILD_DRIVER_FLIGHTSQL=0
   # PostgreSQL driver requires running database for testing
   export BUILD_DRIVER_POSTGRESQL=0
+  # Snowflake driver requires Snowflake credentials for testing
+  export BUILD_DRIVER_SNOWFLAKE=0
   "${ADBC_DIR}/ci/scripts/cpp_test.sh" "${ADBC_SOURCE_DIR}" "${ARROW_TMPDIR}/cpp-build" "${install_prefix}"
   export BUILD_DRIVER_FLIGHTSQL=1
   export BUILD_DRIVER_POSTGRESQL=1
+  export BUILD_DRIVER_SNOWFLAKE=1
 }
 
 test_java() {
diff --git a/dev/release/verify-yum.sh b/dev/release/verify-yum.sh
index ee382c2..244fed4 100755
--- a/dev/release/verify-yum.sh
+++ b/dev/release/verify-yum.sh
@@ -153,6 +153,10 @@ echo "::group::Test ADBC Flight SQL Driver"
 ${install_command} --enablerepo=epel adbc-driver-flightsql-devel-${package_version}
 echo "::endgroup::"
 
+echo "::group::Test ADBC Snowflake Driver"
+${install_command} --enablerepo=epel adbc-driver-snowflake-devel-${package_version}
+echo "::endgroup::"
+
 echo "::group::Test Apache Arrow GLib"
 export G_DEBUG=fatal-warnings
 
diff --git a/docs/source/driver/go/flight_sql.rst b/docs/source/driver/go/flight_sql.rst
index a54f371..2b487e3 100644
--- a/docs/source/driver/go/flight_sql.rst
+++ b/docs/source/driver/go/flight_sql.rst
@@ -175,7 +175,7 @@ of the partitions.
 The queue size can be changed by setting an option on the
 :cpp:class:`AdbcStatement`:
 
-``adbc.flight.sql.rpc.queue_size``
+``adbc.rpc.result_queue_size``
     The number of batches to queue per partition.  Defaults to 5.
 
 Metadata
diff --git a/docs/source/driver/go/snowflake.rst b/docs/source/driver/go/snowflake.rst
new file mode 100644
index 0000000..3a233b5
--- /dev/null
+++ b/docs/source/driver/go/snowflake.rst
@@ -0,0 +1,325 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..   http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+================
+Snowflake Driver
+================
+
+The Snowflake Driver provides access to Snowflake Database Warehouses.
+
+Installation
+============
+
+The Snowflake Driver is shipped as a standalone library.
+
+.. tab-set::
+
+  .. tab-item:: Go
+    :sync: go
+
+    .. code-block:: shell
+
+      go get github.com/apache/arrow-adbc/go/adbc/driver/snowflake
+
+Usage
+=====
+
+To connect to a Snowflake database, you can supply the "uri" parameter when
+constructing the :cpp:class:`AdbcDatabase`.
+
+.. tab-set::
+
+  .. tab-item:: C++
+    :sync: cpp
+
+    .. code-block:: cpp
+
+      #include "adbc.h"
+
+      // Ignoring error handling
+      struct AdbcDatabase database;
+      AdbcDatabaseNew(&database, nullptr);
+      AdbcDatabaseSetOption(&database, "driver", "adbc_driver_snowflake", nullptr);
+      AdbcDatabaseSetOption(&database, "uri", "<snowflake uri>", nullptr);
+      AdbcDatabaseInit(&database, nullptr);
+
+URI Format
+----------
+
+The Snowflake URI should be of one of the following formats:
+
+- ``user[:password]@account/database/schema[?param1=value1&paramN=valueN]``
+- ``user[:password]@account/database[?param1=value1&paramN=valueN]``
+- ``user[:password]@host:port/database/schema?account=user_account[&param1=value1&paramN=valueN]``
+- ``host:port/database/schema?account=user_account[&param1=value1&paramN=valueN]``
+
+Alternately, instead of providing a full URI, the configuration can
+be entirely supplied using the other available options or some combination
+of the URI and other options. If a URI is provided, it will be parsed first
+and any explicit options provided will override anything parsed from the URI.
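+
+For illustration, here is a minimal Go sketch of the above. It assumes the Go
+driver exposes a ``Driver`` struct mirroring the Flight SQL driver; the exact
+constructor and the option values shown are illustrative, not a guaranteed API:
+
+.. code-block:: go
+
+  package main
+
+  import (
+      "context"
+
+      "github.com/apache/arrow-adbc/go/adbc"
+      "github.com/apache/arrow-adbc/go/adbc/driver/snowflake"
+      "github.com/apache/arrow/go/v12/arrow/memory"
+  )
+
+  func main() {
+      drv := snowflake.Driver{Alloc: memory.DefaultAllocator} // assumed entry point
+      // The URI is parsed first; explicit options override it, so the
+      // warehouse below wins over any warehouse embedded in the URI.
+      db, err := drv.NewDatabase(map[string]string{
+          adbc.OptionKeyURI:              "user:password@account/mydb/public",
+          "adbc.snowflake.sql.warehouse": "MY_WAREHOUSE",
+      })
+      if err != nil {
+          panic(err)
+      }
+      cnxn, err := db.Open(context.Background())
+      if err != nil {
+          panic(err)
+      }
+      defer cnxn.Close()
+  }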
+
+Supported Features
+==================
+
+The Snowflake driver generally supports features defined in the ADBC API
+specification 1.0.0, as well as some additional, custom options.
+
+Authentication
+--------------
+
+Snowflake requires some form of authentication to be enabled. By default
+it will attempt to use Username/Password authentication. The username and
+password can be provided in the URI or via the ``username`` and ``password``
+options to the :cpp:class:`AdbcDatabase`.
+
+Alternately, other types of authentication can be specified and customized.
+See "Client Options" below.
+
+Bulk Ingestion
+--------------
+
+Bulk ingestion is supported. The mapping from Arrow types to Snowflake types
+is provided below.
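+
+A sketch of a bulk ingest in Go, assuming the driver honors the standard ADBC
+ingestion options (``adbc.ingest.target_table`` and ``adbc.ingest.mode``);
+``rec`` is an ``arrow.Record`` from ``github.com/apache/arrow/go/v12/arrow``:
+
+.. code-block:: go
+
+  func ingest(ctx context.Context, cnxn adbc.Connection, rec arrow.Record) (int64, error) {
+      stmt, err := cnxn.NewStatement()
+      if err != nil {
+          return -1, err
+      }
+      defer stmt.Close()
+
+      // Create the target table from the bound record's schema.
+      if err := stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"); err != nil {
+          return -1, err
+      }
+      if err := stmt.SetOption(adbc.OptionKeyIngestMode, adbc.OptionValueIngestModeCreate); err != nil {
+          return -1, err
+      }
+      if err := stmt.Bind(ctx, rec); err != nil {
+          return -1, err
+      }
+      return stmt.ExecuteUpdate(ctx)
+  }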
+
+Partitioned Result Sets
+-----------------------
+
+Partitioned result sets are not currently supported.
+
+Performance
+-----------
+
+Formal benchmarking is forthcoming. Snowflake does provide an Arrow-native
+format for requesting results, but bulk ingestion is still currently executed
+using the REST API. As described in the `Snowflake Documentation
+<https://pkg.go.dev/github.com/snowflakedb/gosnowflake#hdr-Batch_Inserts_and_Binding_Parameters>`_,
+the driver will potentially attempt to improve performance by streaming the data
+(without creating files on the local machine) to a temporary stage for ingestion
+if the number of values exceeds some threshold.
+
+In order for the driver to leverage this temporary stage, the user must have
+the ``CREATE STAGE`` privilege on the schema. If the user does not have this
+privilege, the driver will fall back to sending the data with the query
+to the Snowflake database.
+
+In addition, the current database and schema for the session must be set. If
+these are not set, the ``CREATE TEMPORARY STAGE`` command executed by the driver
+can fail with the following error:
+
+.. code-block::
+
+  CREATE TEMPORARY STAGE SYSTEM$BIND file_format=(type=csv field_optionally_enclosed_by='"')
+  CANNOT perform CREATE STAGE. This session does not have a current schema. Call 'USE SCHEMA' or use a qualified name.
+
+Separately, results are potentially fetched in parallel from multiple endpoints.
+A limited number of batches are queued per endpoint, though data is always
+returned to the client in the order of the endpoints.
+
+The queue size can be changed by setting an option on the :cpp:class:`AdbcStatement`:
+
+``adbc.rpc.result_queue_size``
+    The number of batches to queue per endpoint. Defaults to 5.
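+
+For example (a sketch; ``stmt`` is an ``adbc.Statement`` obtained from
+``cnxn.NewStatement()``):
+
+.. code-block:: go
+
+  // Queue up to 10 batches per endpoint; values that are not positive
+  // integers are rejected with an error.
+  if err := stmt.SetOption("adbc.rpc.result_queue_size", "10"); err != nil {
+      return err
+  }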
+
+Transactions
+------------
+
+Transactions are supported. Keep in mind that Snowflake transactions will
+implicitly commit if any DDL statements are run, such as ``CREATE TABLE``.
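+
+A sketch of grouping statements into an explicit transaction, assuming the
+connection implements ``adbc.PostInitOptions`` (as the Flight SQL driver's
+connections do):
+
+.. code-block:: go
+
+  func inTransaction(ctx context.Context, cnxn adbc.Connection, work func() error) error {
+      // Disable autocommit; any DDL run inside `work` will still commit
+      // the transaction implicitly, per the note above.
+      if err := cnxn.(adbc.PostInitOptions).SetOption(
+          adbc.OptionKeyAutoCommit, adbc.OptionValueDisabled); err != nil {
+          return err
+      }
+      if err := work(); err != nil {
+          _ = cnxn.Rollback(ctx) // best effort; surface the original error
+          return err
+      }
+      return cnxn.Commit(ctx)
+  }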
+
+Client Options
+--------------
+
+The options used for creating a Snowflake Database connection can be customized.
+These options map 1:1 with the Snowflake `Config object
+<https://pkg.go.dev/github.com/snowflakedb/gosnowflake#Config>`_.
+
+``adbc.snowflake.sql.db``
+    The database this session should default to using.
+
+``adbc.snowflake.sql.schema``
+    The schema this session should default to using.
+
+``adbc.snowflake.sql.warehouse``
+    The warehouse this session should default to using.
+
+``adbc.snowflake.sql.role``
+    The role that should be used for authentication.
+
+``adbc.snowflake.sql.region``
+    The Snowflake region to use for constructing the connection URI.
+
+``adbc.snowflake.sql.account``
+    The Snowflake account that should be used for authentication and building the
+    connection URI.
+
+``adbc.snowflake.sql.uri.protocol``
+    This should be either ``http`` or ``https``.
+
+``adbc.snowflake.sql.uri.port``
+    The port to use for constructing the URI for connection.
+
+``adbc.snowflake.sql.uri.host``
+    The explicit host to use for constructing the URL to connect to.
+
+``adbc.snowflake.sql.auth_type``
+    Allows specifying alternate types of authentication; the allowed values are:
+
+    - ``auth_snowflake``: General username/password authentication (this is the default)
+    - ``auth_oauth``: Use OAuth authentication for the Snowflake connection.
+    - ``auth_ext_browser``: Use an external browser to access a federated endpoint and perform SSO authentication.
+    - ``auth_okta``: Use a native Okta URL to perform SSO authentication via Okta.
+    - ``auth_jwt``: Use a provided JWT to perform authentication.
+    - ``auth_mfa``: Use a username and password with MFA.
+
+``adbc.snowflake.sql.client_option.auth_token``
+    If using OAuth or another form of authentication, this option is how you can
+    explicitly specify the token to be used for connection.
+
+``adbc.snowflake.sql.client_option.okta_url``
+    If using ``auth_okta``, this option is required in order to specify the
+    Okta URL to connect to for SSO authentication.
+
+``adbc.snowflake.sql.client_option.login_timeout``
+    Specify the login retry timeout, *excluding* network round trips and reading HTTP responses.
+    The value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+    such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+    the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.request_timeout``
+    Specify the request retry timeout, *excluding* network round trips and reading HTTP responses.
+    The value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+    such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+    the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.jwt_expire_timeout``
+    JWT expiration will occur after this timeout.
+    The value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+    such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+    the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.client_timeout``
+    Specify the timeout for network round trips and reading HTTP responses.
+    The value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+    such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+    the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.app_name``
+    Allows specifying the Application Name to Snowflake for the connection.
+
+``adbc.snowflake.sql.client_option.tls_skip_verify``
+    Disable verification of the server's TLS certificate. Value should be ``true``
+    or ``false``.
+
+``adbc.snowflake.sql.client_option.ocsp_fail_open_mode``
+    Control the fail-open mode for OCSP. Default is ``true``. Value should
+    be either ``true`` or ``false``.
+
+``adbc.snowflake.sql.client_option.keep_session_alive``
+    Enable the session to persist even after the connection is closed. Value
+    should be either ``true`` or ``false``.
+
+``adbc.snowflake.sql.client_option.jwt_private_key``
+    Specify the RSA private key which should be used to sign the JWT for
+    authentication. This should be a path to a file containing a PKCS1
+    private key to be read in and parsed. Such keys are commonly encoded in
+    PEM blocks of type "RSA PRIVATE KEY".
+
+``adbc.snowflake.sql.client_option.disable_telemetry``
+    The Snowflake driver collects telemetry information, which can be
+    disabled by setting this to ``true``. Value should be either ``true``
+    or ``false``.
+
+``adbc.snowflake.sql.client_option.tracing``
+    Set the logging level.
+
+``adbc.snowflake.sql.client_option.cache_mfa_token``
+    When ``true``, the MFA token is cached in the credential manager. Defaults
+    to ``true`` on Windows and macOS, ``false`` on Linux.
+
+``adbc.snowflake.sql.client_option.store_temp_creds``
+    When ``true``, the ID token is cached in the credential manager. Defaults
+    to ``true`` on Windows and macOS, ``false`` on Linux.
+
+
+Metadata
+--------
+
+When calling :cpp:func:`AdbcConnectionGetTableSchema`, the returned Arrow Schema
+will contain metadata on each field:
+
+``DATA_TYPE``
+    This will be a string containing the raw Snowflake data type of this column.
+
+``PRIMARY_KEY``
+    This will be either ``Y`` or ``N`` to indicate whether a column is a primary key.
+
+In addition, the schema on the stream of results from a query will contain
+the following metadata keys on each field:
+
+``logicalType``
+    The Snowflake logical type of this column. Will be one of ``fixed``,
+    ``real``, ``text``, ``date``, ``variant``, ``timestamp_ltz``, ``timestamp_ntz``,
+    ``timestamp_tz``, ``object``, ``array``, ``binary``, ``time``, ``boolean``.
+
+``precision``
+    An integer representing the Snowflake precision of the field.
+
+``scale``
+    An integer representing the Snowflake scale of the values in this field.
+
+``charLength``
+    If a text field, this will be equivalent to the ``VARCHAR(#)`` parameter ``#``.
+
+``byteLength``
+    Will contain the length, in bytes, of the raw data sent back from Snowflake
+    regardless of the type of the field in Arrow.
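+
+As an illustration, a small Go helper (a sketch) that collects the
+``logicalType`` metadata for each column of a result schema:
+
+.. code-block:: go
+
+  import "github.com/apache/arrow/go/v12/arrow"
+
+  // logicalTypes maps column name to its Snowflake logical type, when present.
+  func logicalTypes(schema *arrow.Schema) map[string]string {
+      out := make(map[string]string, len(schema.Fields()))
+      for _, f := range schema.Fields() {
+          if idx := f.Metadata.FindKey("logicalType"); idx >= 0 {
+              out[f.Name] = f.Metadata.Values()[idx]
+          }
+      }
+      return out
+  }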
+
+Type Support
+------------
+
+Because Snowflake types do not necessarily match 1-to-1 with Arrow types,
+the following is what should be expected when requesting data. Any conversions
+indicated are done to ensure consistency of the stream of record batches.
+
++----------------+---------------+-----------------------------------------+
+| Snowflake Type | Arrow Type    | Notes                                   |
++----------------+---------------+-----------------------------------------+
+| Integral Types | Int64         | All integral types in Snowflake are     |
+|                |               | stored as 64-bit integers.              |
++----------------+---------------+-----------------------------------------+
+| Float/Double   | Float64       | Snowflake does not distinguish between  |
+|                |               | float and double; all are 64-bit values.|
++----------------+---------------+-----------------------------------------+
+| Decimal/Numeric| Int64/Float64 | If Scale == 0 then Int64 is used, else  |
+|                |               | Float64 is returned.                    |
++----------------+---------------+-----------------------------------------+
+| Time           | Time64(ns)    | For ingestion, Time32 will also work.   |
++----------------+---------------+-----------------------------------------+
+| Date           | Date32        | For ingestion, Date64 will also work.   |
++----------------+---------------+-----------------------------------------+
+| Timestamp_LTZ  | Timestamp(ns) | Local time zone will be used.           |
+| Timestamp_NTZ  |               | No time zone specified in Arrow type.   |
+| Timestamp_TZ   |               | Values will be converted to UTC.        |
++----------------+---------------+-----------------------------------------+
+| Variant        | String        | Snowflake does not provide nested type  |
+| Object         |               | information, so each value will be a    |
+| Array          |               | JSON-like string which can be parsed.   |
+|                |               | The ``logicalType`` metadata key will   |
+|                |               | contain the Snowflake field type.       |
++----------------+---------------+-----------------------------------------+
+| Geography      | String        | There is no canonical Arrow type for    |
+| Geometry       |               | these, and Snowflake returns them as    |
+|                |               | strings.                                |
++----------------+---------------+-----------------------------------------+
diff --git a/go/adbc/driver/flightsql/flightsql_adbc.go b/go/adbc/driver/flightsql/flightsql_adbc.go
index 5ab6d59..9bbd7d1 100644
--- a/go/adbc/driver/flightsql/flightsql_adbc.go
+++ b/go/adbc/driver/flightsql/flightsql_adbc.go
@@ -41,7 +41,6 @@ import (
 	"io"
 	"math"
 	"net/url"
-	"regexp"
 	"runtime/debug"
 	"strconv"
 	"strings"
@@ -49,6 +48,7 @@ import (
 	"time"
 
 	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow-adbc/go/adbc/driver/internal"
 	"github.com/apache/arrow/go/v12/arrow"
 	"github.com/apache/arrow/go/v12/arrow/array"
 	"github.com/apache/arrow/go/v12/arrow/flight"
@@ -1002,11 +1002,11 @@ func (c *cnxn) GetInfo(ctx context.Context, infoCodes []adbc.InfoCode) (array.Re
 // earlier).
 func (c *cnxn) GetObjects(ctx context.Context, depth adbc.ObjectDepth, catalog *string, dbSchema *string, tableName *string, columnName *string, tableType []string) (array.RecordReader, error) {
 	ctx = metadata.NewOutgoingContext(ctx, c.hdrs)
-	g := getObjects{ctx: ctx, depth: depth, catalog: catalog, dbSchema: dbSchema, tableName: tableName, columnName: columnName, tableType: tableType}
-	if err := g.init(c); err != nil {
+	g := internal.GetObjects{Ctx: ctx, Depth: depth, Catalog: catalog, DbSchema: dbSchema, TableName: tableName, ColumnName: columnName, TableType: tableType}
+	if err := g.Init(c.db.alloc, c.getObjectsDbSchemas, c.getObjectsTables); err != nil {
 		return nil, err
 	}
-	defer g.release()
+	defer g.Release()
 
 	// To avoid an N+1 query problem, we assume result sets here will fit in memory and build up a single response.
 	info, err := c.cl.GetCatalogs(ctx)
@@ -1026,228 +1026,21 @@ func (c *cnxn) GetObjects(ctx context.Context, depth adbc.ObjectDepth, catalog *
 		for i := 0; i < arr.Len(); i++ {
 			// XXX: force copy since accessor is unsafe
 			catalogName := string([]byte(arr.Value(i)))
-			g.appendCatalog(catalogName)
+			g.AppendCatalog(catalogName)
 			foundCatalog = true
 		}
 	}
 
 	// Implementations like Dremio report no catalogs, but still have schemas
 	if !foundCatalog && depth != adbc.ObjectDepthCatalogs {
-		g.appendCatalog("")
+		g.AppendCatalog("")
 	}
 
 	if err = rdr.Err(); err != nil {
 		return nil, adbcFromFlightStatus(err)
 	}
 
-	return g.finish()
-}
-
-// Helper to store state needed for GetObjects
-type getObjects struct {
-	ctx        context.Context
-	depth      adbc.ObjectDepth
-	catalog    *string
-	dbSchema   *string
-	tableName  *string
-	columnName *string
-	tableType  []string
-
-	builder           *array.RecordBuilder
-	schemaLookup      map[string][]string
-	tableLookup       map[catalogAndSchema][]tableInfo
-	catalogPattern    *regexp.Regexp
-	columnNamePattern *regexp.Regexp
-
-	catalogNameBuilder           *array.StringBuilder
-	catalogDbSchemasBuilder      *array.ListBuilder
-	catalogDbSchemasItems        *array.StructBuilder
-	dbSchemaNameBuilder          *array.StringBuilder
-	dbSchemaTablesBuilder        *array.ListBuilder
-	dbSchemaTablesItems          *array.StructBuilder
-	tableNameBuilder             *array.StringBuilder
-	tableTypeBuilder             *array.StringBuilder
-	tableColumnsBuilder          *array.ListBuilder
-	tableColumnsItems            *array.StructBuilder
-	columnNameBuilder            *array.StringBuilder
-	ordinalPositionBuilder       *array.Int32Builder
-	remarksBuilder               *array.StringBuilder
-	xdbcDataTypeBuilder          *array.Int16Builder
-	xdbcTypeNameBuilder          *array.StringBuilder
-	xdbcColumnSizeBuilder        *array.Int32Builder
-	xdbcDecimalDigitsBuilder     *array.Int16Builder
-	xdbcNumPrecRadixBuilder      *array.Int16Builder
-	xdbcNullableBuilder          *array.Int16Builder
-	xdbcColumnDefBuilder         *array.StringBuilder
-	xdbcSqlDataTypeBuilder       *array.Int16Builder
-	xdbcDatetimeSubBuilder       *array.Int16Builder
-	xdbcCharOctetLengthBuilder   *array.Int32Builder
-	xdbcIsNullableBuilder        *array.StringBuilder
-	xdbcScopeCatalogBuilder      *array.StringBuilder
-	xdbcScopeSchemaBuilder       *array.StringBuilder
-	xdbcScopeTableBuilder        *array.StringBuilder
-	xdbcIsAutoincrementBuilder   *array.BooleanBuilder
-	xdbcIsGeneratedcolumnBuilder *array.BooleanBuilder
-	tableConstraintsBuilder      *array.ListBuilder
-}
-
-func (g *getObjects) init(c *cnxn) error {
-	if catalogToDbSchemas, err := c.getObjectsDbSchemas(g.ctx, g.depth, g.catalog, g.dbSchema); err != nil {
-		return err
-	} else {
-		g.schemaLookup = catalogToDbSchemas
-	}
-
-	if tableLookup, err := c.getObjectsTables(g.ctx, g.depth, g.catalog, g.dbSchema, g.tableName, g.columnName, g.tableType); err != nil {
-		return err
-	} else {
-		g.tableLookup = tableLookup
-	}
-
-	if catalogPattern, err := patternToRegexp(g.catalog); err != nil {
-		return adbc.Error{
-			Msg:  err.Error(),
-			Code: adbc.StatusInvalidArgument,
-		}
-	} else {
-		g.catalogPattern = catalogPattern
-	}
-	if columnNamePattern, err := patternToRegexp(g.columnName); err != nil {
-		return adbc.Error{
-			Msg:  err.Error(),
-			Code: adbc.StatusInvalidArgument,
-		}
-	} else {
-		g.columnNamePattern = columnNamePattern
-	}
-
-	g.builder = array.NewRecordBuilder(c.db.alloc, adbc.GetObjectsSchema)
-	g.catalogNameBuilder = g.builder.Field(0).(*array.StringBuilder)
-	g.catalogDbSchemasBuilder = g.builder.Field(1).(*array.ListBuilder)
-	g.catalogDbSchemasItems = g.catalogDbSchemasBuilder.ValueBuilder().(*array.StructBuilder)
-	g.dbSchemaNameBuilder = g.catalogDbSchemasItems.FieldBuilder(0).(*array.StringBuilder)
-	g.dbSchemaTablesBuilder = g.catalogDbSchemasItems.FieldBuilder(1).(*array.ListBuilder)
-	g.dbSchemaTablesItems = g.dbSchemaTablesBuilder.ValueBuilder().(*array.StructBuilder)
-	g.tableNameBuilder = g.dbSchemaTablesItems.FieldBuilder(0).(*array.StringBuilder)
-	g.tableTypeBuilder = g.dbSchemaTablesItems.FieldBuilder(1).(*array.StringBuilder)
-	g.tableColumnsBuilder = g.dbSchemaTablesItems.FieldBuilder(2).(*array.ListBuilder)
-	g.tableColumnsItems = g.tableColumnsBuilder.ValueBuilder().(*array.StructBuilder)
-	g.columnNameBuilder = g.tableColumnsItems.FieldBuilder(0).(*array.StringBuilder)
-	g.ordinalPositionBuilder = g.tableColumnsItems.FieldBuilder(1).(*array.Int32Builder)
-	g.remarksBuilder = g.tableColumnsItems.FieldBuilder(2).(*array.StringBuilder)
-	g.xdbcDataTypeBuilder = g.tableColumnsItems.FieldBuilder(3).(*array.Int16Builder)
-	g.xdbcTypeNameBuilder = g.tableColumnsItems.FieldBuilder(4).(*array.StringBuilder)
-	g.xdbcColumnSizeBuilder = g.tableColumnsItems.FieldBuilder(5).(*array.Int32Builder)
-	g.xdbcDecimalDigitsBuilder = g.tableColumnsItems.FieldBuilder(6).(*array.Int16Builder)
-	g.xdbcNumPrecRadixBuilder = g.tableColumnsItems.FieldBuilder(7).(*array.Int16Builder)
-	g.xdbcNullableBuilder = g.tableColumnsItems.FieldBuilder(8).(*array.Int16Builder)
-	g.xdbcColumnDefBuilder = g.tableColumnsItems.FieldBuilder(9).(*array.StringBuilder)
-	g.xdbcSqlDataTypeBuilder = g.tableColumnsItems.FieldBuilder(10).(*array.Int16Builder)
-	g.xdbcDatetimeSubBuilder = g.tableColumnsItems.FieldBuilder(11).(*array.Int16Builder)
-	g.xdbcCharOctetLengthBuilder = g.tableColumnsItems.FieldBuilder(12).(*array.Int32Builder)
-	g.xdbcIsNullableBuilder = g.tableColumnsItems.FieldBuilder(13).(*array.StringBuilder)
-	g.xdbcScopeCatalogBuilder = g.tableColumnsItems.FieldBuilder(14).(*array.StringBuilder)
-	g.xdbcScopeSchemaBuilder = g.tableColumnsItems.FieldBuilder(15).(*array.StringBuilder)
-	g.xdbcScopeTableBuilder = g.tableColumnsItems.FieldBuilder(16).(*array.StringBuilder)
-	g.xdbcIsAutoincrementBuilder = g.tableColumnsItems.FieldBuilder(17).(*array.BooleanBuilder)
-	g.xdbcIsGeneratedcolumnBuilder = g.tableColumnsItems.FieldBuilder(18).(*array.BooleanBuilder)
-	g.tableConstraintsBuilder = g.dbSchemaTablesItems.FieldBuilder(3).(*array.ListBuilder)
-
-	return nil
-}
-
-func (g *getObjects) release() {
-	g.builder.Release()
-}
-
-func (g *getObjects) finish() (array.RecordReader, error) {
-	record := g.builder.NewRecord()
-	defer record.Release()
-	result, err := array.NewRecordReader(g.builder.Schema(), []arrow.Record{record})
-	if err != nil {
-		return nil, adbc.Error{
-			Msg:  err.Error(),
-			Code: adbc.StatusInternal,
-		}
-	}
-	return result, nil
-}
-
-func (g *getObjects) appendCatalog(catalogName string) {
-	if g.catalogPattern != nil && !g.catalogPattern.MatchString(catalogName) {
-		return
-	}
-	g.catalogNameBuilder.Append(catalogName)
-
-	if g.depth == adbc.ObjectDepthCatalogs {
-		g.catalogDbSchemasBuilder.AppendNull()
-		return
-	}
-	g.catalogDbSchemasBuilder.Append(true)
-
-	for _, dbSchemaName := range g.schemaLookup[catalogName] {
-		g.appendDbSchema(catalogName, dbSchemaName)
-	}
-}
-
-func (g *getObjects) appendDbSchema(catalogName, dbSchemaName string) {
-	g.dbSchemaNameBuilder.Append(dbSchemaName)
-	g.catalogDbSchemasItems.Append(true)
-
-	if g.depth == adbc.ObjectDepthDBSchemas {
-		g.dbSchemaTablesBuilder.AppendNull()
-		return
-	}
-	g.dbSchemaTablesBuilder.Append(true)
-
-	for _, tableInfo := range g.tableLookup[catalogAndSchema{
-		catalog: catalogName,
-		schema:  dbSchemaName,
-	}] {
-		g.appendTableInfo(tableInfo)
-	}
-}
-
-func (g *getObjects) appendTableInfo(tableInfo tableInfo) {
-	g.tableNameBuilder.Append(tableInfo.name)
-	g.tableTypeBuilder.Append(tableInfo.tableType)
-	g.dbSchemaTablesItems.Append(true)
-
-	if g.depth == adbc.ObjectDepthTables {
-		g.tableColumnsBuilder.AppendNull()
-		g.tableConstraintsBuilder.AppendNull()
-		return
-	}
-	g.tableColumnsBuilder.Append(true)
-	// TODO: unimplemented for now
-	g.tableConstraintsBuilder.Append(true)
-
-	for colIndex, column := range tableInfo.schema.Fields() {
-		if g.columnNamePattern != nil && !g.columnNamePattern.MatchString(column.Name) {
-			continue
-		}
-		g.columnNameBuilder.Append(column.Name)
-		g.ordinalPositionBuilder.Append(int32(colIndex + 1))
-		g.remarksBuilder.AppendNull()
-		g.xdbcDataTypeBuilder.AppendNull()
-		g.xdbcTypeNameBuilder.AppendNull()
-		g.xdbcColumnSizeBuilder.AppendNull()
-		g.xdbcDecimalDigitsBuilder.AppendNull()
-		g.xdbcNumPrecRadixBuilder.AppendNull()
-		g.xdbcNullableBuilder.AppendNull()
-		g.xdbcColumnDefBuilder.AppendNull()
-		g.xdbcSqlDataTypeBuilder.AppendNull()
-		g.xdbcDatetimeSubBuilder.AppendNull()
-		g.xdbcCharOctetLengthBuilder.AppendNull()
-		g.xdbcIsNullableBuilder.AppendNull()
-		g.xdbcScopeCatalogBuilder.AppendNull()
-		g.xdbcScopeSchemaBuilder.AppendNull()
-		g.xdbcScopeTableBuilder.AppendNull()
-		g.xdbcIsAutoincrementBuilder.AppendNull()
-		g.xdbcIsGeneratedcolumnBuilder.AppendNull()
-
-		g.tableColumnsItems.Append(true)
-	}
+	return g.Finish()
 }
 
 // Helper function to read and validate a metadata stream
@@ -1268,38 +1061,6 @@ func (c *cnxn) readInfo(ctx context.Context, expectedSchema *arrow.Schema, info
 	return rdr, nil
 }
 
-// Helper function that compiles a SQL-style pattern (%, _) to a regex
-func patternToRegexp(pattern *string) (*regexp.Regexp, error) {
-	if pattern == nil {
-		return nil, nil
-	}
-
-	var builder strings.Builder
-	if _, err := builder.WriteString("^"); err != nil {
-		return nil, err
-	}
-	for _, c := range *pattern {
-		switch {
-		case c == rune('_'):
-			if _, err := builder.WriteString("."); err != nil {
-				return nil, err
-			}
-		case c == rune('%'):
-			if _, err := builder.WriteString(".*"); err != nil {
-				return nil, err
-			}
-		default:
-			if _, err := builder.WriteString(regexp.QuoteMeta(string([]rune{c}))); err != nil {
-				return nil, err
-			}
-		}
-	}
-	if _, err := builder.WriteString("$"); err != nil {
-		return nil, err
-	}
-	return regexp.Compile(builder.String())
-}
-
 // Helper function to build up a map of catalogs to DB schemas
 func (c *cnxn) getObjectsDbSchemas(ctx context.Context, depth adbc.ObjectDepth, catalog *string, dbSchema *string) (result map[string][]string, err error) {
 	if depth == adbc.ObjectDepthCatalogs {
@@ -1340,20 +1101,11 @@ func (c *cnxn) getObjectsDbSchemas(ctx context.Context, depth adbc.ObjectDepth,
 	return
 }
 
-type catalogAndSchema struct {
-	catalog, schema string
-}
-
-type tableInfo struct {
-	name, tableType string
-	schema          *arrow.Schema
-}
-
-func (c *cnxn) getObjectsTables(ctx context.Context, depth adbc.ObjectDepth, catalog *string, dbSchema *string, tableName *string, columnName *string, tableType []string) (result map[catalogAndSchema][]tableInfo, err error) {
+func (c *cnxn) getObjectsTables(ctx context.Context, depth adbc.ObjectDepth, catalog *string, dbSchema *string, tableName *string, columnName *string, tableType []string) (result internal.SchemaToTableInfo, err error) {
 	if depth == adbc.ObjectDepthCatalogs || depth == adbc.ObjectDepthDBSchemas {
 		return
 	}
-	result = make(map[catalogAndSchema][]tableInfo)
+	result = make(map[internal.CatalogAndSchema][]internal.TableInfo)
 
 	// Pre-populate the map of which schemas are in which catalogs
 	includeSchema := depth == adbc.ObjectDepthAll || depth == adbc.ObjectDepthColumns
@@ -1393,9 +1145,9 @@ func (c *cnxn) getObjectsTables(ctx context.Context, depth adbc.ObjectDepth, cat
 			if !dbSchema.IsNull(i) {
 				dbSchemaName = string([]byte(dbSchema.Value(i)))
 			}
-			key := catalogAndSchema{
-				catalog: catalogName,
-				schema:  dbSchemaName,
+			key := internal.CatalogAndSchema{
+				Catalog: catalogName,
+				Schema:  dbSchemaName,
 			}
 
 			var schema *arrow.Schema
@@ -1411,10 +1163,10 @@ func (c *cnxn) getObjectsTables(ctx context.Context, depth adbc.ObjectDepth, cat
 				reader.Release()
 			}
 
-			result[key] = append(result[key], tableInfo{
-				name:      string([]byte(tableName.Value(i))),
-				tableType: string([]byte(tableType.Value(i))),
-				schema:    schema,
+			result[key] = append(result[key], internal.TableInfo{
+				Name:      string([]byte(tableName.Value(i))),
+				TableType: string([]byte(tableType.Value(i))),
+				Schema:    schema,
 			})
 		}
 	}
diff --git a/go/adbc/driver/flightsql/flightsql_adbc_test.go b/go/adbc/driver/flightsql/flightsql_adbc_test.go
index b9265ca..52aee96 100644
--- a/go/adbc/driver/flightsql/flightsql_adbc_test.go
+++ b/go/adbc/driver/flightsql/flightsql_adbc_test.go
@@ -214,6 +214,21 @@ func (s *FlightSQLQuirks) CreateSampleTable(tableName string, r arrow.Record) er
 	return nil
 }
 
+func (s *FlightSQLQuirks) DropTable(cnxn adbc.Connection, tblname string) error {
+	stmt, err := cnxn.NewStatement()
+	if err != nil {
+		return err
+	}
+	defer stmt.Close()
+
+	if err = stmt.SetSqlQuery(`DROP TABLE IF EXISTS ` + tblname); err != nil {
+		return err
+	}
+
+	_, err = stmt.ExecuteUpdate(context.Background())
+	return err
+}
+
 func (s *FlightSQLQuirks) Alloc() memory.Allocator               { return s.mem }
 func (s *FlightSQLQuirks) BindParameter(_ int) string            { return "?" }
 func (s *FlightSQLQuirks) SupportsConcurrentStatements() bool    { return true }
@@ -221,6 +236,7 @@ func (s *FlightSQLQuirks) SupportsPartitionedData() bool         { return true }
 func (s *FlightSQLQuirks) SupportsTransactions() bool            { return true }
 func (s *FlightSQLQuirks) SupportsGetParameterSchema() bool      { return false }
 func (s *FlightSQLQuirks) SupportsDynamicParameterBinding() bool { return true }
+func (s *FlightSQLQuirks) SupportsBulkIngest() bool              { return false }
 func (s *FlightSQLQuirks) GetMetadata(code adbc.InfoCode) interface{} {
 	switch code {
 	case adbc.InfoDriverName:
@@ -242,6 +258,21 @@ func (s *FlightSQLQuirks) GetMetadata(code adbc.InfoCode) interface{} {
 	return nil
 }
 
+func (s *FlightSQLQuirks) SampleTableSchemaMetadata(tblName string, dt arrow.DataType) arrow.Metadata {
+	switch dt.ID() {
+	case arrow.STRING:
+		return arrow.MetadataFrom(map[string]string{
+			flightsql.ScaleKey: "15", flightsql.IsReadOnlyKey: "0", flightsql.IsAutoIncrementKey: "0",
+			flightsql.TableNameKey: tblName,
+		})
+	default:
+		return arrow.MetadataFrom(map[string]string{
+			flightsql.ScaleKey: "15", flightsql.IsReadOnlyKey: "0", flightsql.IsAutoIncrementKey: "0",
+			flightsql.TableNameKey: tblName, flightsql.PrecisionKey: "10",
+		})
+	}
+}
+
 func TestADBCFlightSQL(t *testing.T) {
 	db, err := example.CreateDB()
 	require.NoError(t, err)
@@ -516,16 +547,16 @@ func (suite *StatementTests) TearDownTest() {
 
 func (suite *StatementTests) TestQueueSizeOption() {
 	var err error
-	option := "adbc.flight.sql.rpc.queue_size"
+	option := "adbc.rpc.result_queue_size"
 
 	err = suite.Stmt.SetOption(option, "")
-	suite.Require().ErrorContains(err, "Invalid value for statement option 'adbc.flight.sql.rpc.queue_size': '' is not a positive integer")
+	suite.Require().ErrorContains(err, "Invalid value for statement option '"+option+"': '' is not a positive integer")
 
 	err = suite.Stmt.SetOption(option, "foo")
-	suite.Require().ErrorContains(err, "Invalid value for statement option 'adbc.flight.sql.rpc.queue_size': 'foo' is not a positive integer")
+	suite.Require().ErrorContains(err, "Invalid value for statement option '"+option+"': 'foo' is not a positive integer")
 
 	err = suite.Stmt.SetOption(option, "-1")
-	suite.Require().ErrorContains(err, "Invalid value for statement option 'adbc.flight.sql.rpc.queue_size': '-1' is not a positive integer")
+	suite.Require().ErrorContains(err, "Invalid value for statement option '"+option+"': '-1' is not a positive integer")
 
 	err = suite.Stmt.SetOption(option, "1")
 	suite.Require().NoError(err)
diff --git a/go/adbc/driver/flightsql/flightsql_statement.go b/go/adbc/driver/flightsql/flightsql_statement.go
index 0d5c9b4..3d051ef 100644
--- a/go/adbc/driver/flightsql/flightsql_statement.go
+++ b/go/adbc/driver/flightsql/flightsql_statement.go
@@ -36,7 +36,7 @@ import (
 )
 
 const (
-	OptionStatementQueueSize = "adbc.flight.sql.rpc.queue_size"
+	OptionStatementQueueSize = "adbc.rpc.result_queue_size"
 	// Explicitly set substrait version for Flight SQL
 	// substrait *does* include the version in the serialized plan
 	// so this is not entirely necessary depending on the version
@@ -313,6 +313,7 @@ func (s *statement) Bind(_ context.Context, values arrow.Record) error {
 			Code: adbc.StatusInvalidState}
 	}
 
+	// calls retain
 	s.prepared.SetParameters(values)
 	return nil
 }
@@ -329,6 +330,7 @@ func (s *statement) BindStream(_ context.Context, stream array.RecordReader) err
 			Code: adbc.StatusInvalidState}
 	}
 
+	// calls retain
 	s.prepared.SetRecordReader(stream)
 	return nil
 }
diff --git a/go/adbc/driver/flightsql/record_reader.go b/go/adbc/driver/flightsql/record_reader.go
index 9e204c0..017faa2 100644
--- a/go/adbc/driver/flightsql/record_reader.go
+++ b/go/adbc/driver/flightsql/record_reader.go
@@ -23,6 +23,7 @@ import (
 	"sync/atomic"
 
 	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow-adbc/go/adbc/utils"
 	"github.com/apache/arrow/go/v12/arrow"
 	"github.com/apache/arrow/go/v12/arrow/array"
 	"github.com/apache/arrow/go/v12/arrow/flight"
@@ -117,7 +118,7 @@ func newRecordReader(ctx context.Context, alloc memory.Allocator, cl *flightsql.
 
 	lastChannelIndex := len(chs) - 1
 
-	referenceSchema := removeSchemaMetadata(schema)
+	referenceSchema := utils.RemoveSchemaMetadata(schema)
 	for i, ep := range endpoints {
 		endpoint := ep
 		endpointIndex := i
@@ -134,7 +135,7 @@ func newRecordReader(ctx context.Context, alloc memory.Allocator, cl *flightsql.
 			}
 			defer rdr.Release()
 
-			streamSchema := removeSchemaMetadata(rdr.Schema())
+			streamSchema := utils.RemoveSchemaMetadata(rdr.Schema())
 			if !streamSchema.Equal(referenceSchema) {
 				return fmt.Errorf("endpoint %d returned inconsistent schema: expected %s but got %s", endpointIndex, referenceSchema.String(), streamSchema.String())
 			}
@@ -208,50 +209,3 @@ func (r *reader) Schema() *arrow.Schema {
 func (r *reader) Record() arrow.Record {
 	return r.rec
 }
-
-func removeSchemaMetadata(schema *arrow.Schema) *arrow.Schema {
-	fields := make([]arrow.Field, len(schema.Fields()))
-	for i, field := range schema.Fields() {
-		fields[i] = removeFieldMetadata(&field)
-	}
-	return arrow.NewSchema(fields, nil)
-}
-
-func removeFieldMetadata(field *arrow.Field) arrow.Field {
-	fieldType := field.Type
-
-	if nestedType, ok := field.Type.(arrow.NestedType); ok {
-		childFields := make([]arrow.Field, len(nestedType.Fields()))
-		for i, field := range nestedType.Fields() {
-			childFields[i] = removeFieldMetadata(&field)
-		}
-
-		switch ty := field.Type.(type) {
-		case *arrow.DenseUnionType:
-			fieldType = arrow.DenseUnionOf(childFields, ty.TypeCodes())
-		case *arrow.FixedSizeListType:
-			fieldType = arrow.FixedSizeListOfField(ty.Len(), childFields[0])
-		case *arrow.ListType:
-			fieldType = arrow.ListOfField(childFields[0])
-		case *arrow.LargeListType:
-			fieldType = arrow.LargeListOfField(childFields[0])
-		case *arrow.MapType:
-			mapType := arrow.MapOf(childFields[0].Type, childFields[1].Type)
-			mapType.KeysSorted = ty.KeysSorted
-			fieldType = mapType
-		case *arrow.SparseUnionType:
-			fieldType = arrow.SparseUnionOf(childFields, ty.TypeCodes())
-		case *arrow.StructType:
-			fieldType = arrow.StructOf(childFields...)
-		default:
-			// XXX: ignore it
-		}
-	}
-
-	return arrow.Field{
-		Name:     field.Name,
-		Type:     fieldType,
-		Nullable: field.Nullable,
-		Metadata: arrow.Metadata{},
-	}
-}
diff --git a/go/adbc/driver/internal/shared_utils.go b/go/adbc/driver/internal/shared_utils.go
new file mode 100644
index 0000000..b2b0c63
--- /dev/null
+++ b/go/adbc/driver/internal/shared_utils.go
@@ -0,0 +1,306 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package internal
+
+import (
+	"context"
+	"regexp"
+	"strconv"
+	"strings"
+
+	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow/go/v12/arrow"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/apache/arrow/go/v12/arrow/memory"
+)
+
+type CatalogAndSchema struct {
+	Catalog, Schema string
+}
+
+type TableInfo struct {
+	Name, TableType string
+	Schema          *arrow.Schema
+}
+
+type GetObjDBSchemasFn func(ctx context.Context, depth adbc.ObjectDepth, catalog *string, schema *string) (map[string][]string, error)
+type GetObjTablesFn func(ctx context.Context, depth adbc.ObjectDepth, catalog *string, schema *string, tableName *string, columnName *string, tableType []string) (map[CatalogAndSchema][]TableInfo, error)
+type SchemaToTableInfo = map[CatalogAndSchema][]TableInfo
+
+// Helper function that compiles a SQL-style pattern (%, _) to a regex
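+// matched case-insensitively; e.g. "foo_bar%" compiles to `(?i)^foo.bar.*$`.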
+func PatternToRegexp(pattern *string) (*regexp.Regexp, error) {
+	if pattern == nil {
+		return nil, nil
+	}
+
+	var builder strings.Builder
+	if _, err := builder.WriteString("(?i)^"); err != nil {
+		return nil, err
+	}
+	for _, c := range *pattern {
+		switch {
+		case c == rune('_'):
+			if _, err := builder.WriteString("."); err != nil {
+				return nil, err
+			}
+		case c == rune('%'):
+			if _, err := builder.WriteString(".*"); err != nil {
+				return nil, err
+			}
+		default:
+			if _, err := builder.WriteString(regexp.QuoteMeta(string([]rune{c}))); err != nil {
+				return nil, err
+			}
+		}
+	}
+	if _, err := builder.WriteString("$"); err != nil {
+		return nil, err
+	}
+	return regexp.Compile(builder.String())
+}
+
+// Helper to store state needed for GetObjects
+type GetObjects struct {
+	Ctx        context.Context
+	Depth      adbc.ObjectDepth
+	Catalog    *string
+	DbSchema   *string
+	TableName  *string
+	ColumnName *string
+	TableType  []string
+
+	builder           *array.RecordBuilder
+	schemaLookup      map[string][]string
+	tableLookup       map[CatalogAndSchema][]TableInfo
+	catalogPattern    *regexp.Regexp
+	columnNamePattern *regexp.Regexp
+
+	catalogNameBuilder           *array.StringBuilder
+	catalogDbSchemasBuilder      *array.ListBuilder
+	catalogDbSchemasItems        *array.StructBuilder
+	dbSchemaNameBuilder          *array.StringBuilder
+	dbSchemaTablesBuilder        *array.ListBuilder
+	dbSchemaTablesItems          *array.StructBuilder
+	tableNameBuilder             *array.StringBuilder
+	tableTypeBuilder             *array.StringBuilder
+	tableColumnsBuilder          *array.ListBuilder
+	tableColumnsItems            *array.StructBuilder
+	columnNameBuilder            *array.StringBuilder
+	ordinalPositionBuilder       *array.Int32Builder
+	remarksBuilder               *array.StringBuilder
+	xdbcDataTypeBuilder          *array.Int16Builder
+	xdbcTypeNameBuilder          *array.StringBuilder
+	xdbcColumnSizeBuilder        *array.Int32Builder
+	xdbcDecimalDigitsBuilder     *array.Int16Builder
+	xdbcNumPrecRadixBuilder      *array.Int16Builder
+	xdbcNullableBuilder          *array.Int16Builder
+	xdbcColumnDefBuilder         *array.StringBuilder
+	xdbcSqlDataTypeBuilder       *array.Int16Builder
+	xdbcDatetimeSubBuilder       *array.Int16Builder
+	xdbcCharOctetLengthBuilder   *array.Int32Builder
+	xdbcIsNullableBuilder        *array.StringBuilder
+	xdbcScopeCatalogBuilder      *array.StringBuilder
+	xdbcScopeSchemaBuilder       *array.StringBuilder
+	xdbcScopeTableBuilder        *array.StringBuilder
+	xdbcIsAutoincrementBuilder   *array.BooleanBuilder
+	xdbcIsGeneratedcolumnBuilder *array.BooleanBuilder
+	tableConstraintsBuilder      *array.ListBuilder
+}
+
+func (g *GetObjects) Init(mem memory.Allocator, getObj GetObjDBSchemasFn, getTbls GetObjTablesFn) error {
+	if catalogToDbSchemas, err := getObj(g.Ctx, g.Depth, g.Catalog, g.DbSchema); err != nil {
+		return err
+	} else {
+		g.schemaLookup = catalogToDbSchemas
+	}
+
+	if tableLookup, err := getTbls(g.Ctx, g.Depth, g.Catalog, g.DbSchema, g.TableName, g.ColumnName, g.TableType); err != nil {
+		return err
+	} else {
+		g.tableLookup = tableLookup
+	}
+
+	if catalogPattern, err := PatternToRegexp(g.Catalog); err != nil {
+		return adbc.Error{
+			Msg:  err.Error(),
+			Code: adbc.StatusInvalidArgument,
+		}
+	} else {
+		g.catalogPattern = catalogPattern
+	}
+	if columnNamePattern, err := PatternToRegexp(g.ColumnName); err != nil {
+		return adbc.Error{
+			Msg:  err.Error(),
+			Code: adbc.StatusInvalidArgument,
+		}
+	} else {
+		g.columnNamePattern = columnNamePattern
+	}
+
+	g.builder = array.NewRecordBuilder(mem, adbc.GetObjectsSchema)
+	g.catalogNameBuilder = g.builder.Field(0).(*array.StringBuilder)
+	g.catalogDbSchemasBuilder = g.builder.Field(1).(*array.ListBuilder)
+	g.catalogDbSchemasItems = g.catalogDbSchemasBuilder.ValueBuilder().(*array.StructBuilder)
+	g.dbSchemaNameBuilder = g.catalogDbSchemasItems.FieldBuilder(0).(*array.StringBuilder)
+	g.dbSchemaTablesBuilder = g.catalogDbSchemasItems.FieldBuilder(1).(*array.ListBuilder)
+	g.dbSchemaTablesItems = g.dbSchemaTablesBuilder.ValueBuilder().(*array.StructBuilder)
+	g.tableNameBuilder = g.dbSchemaTablesItems.FieldBuilder(0).(*array.StringBuilder)
+	g.tableTypeBuilder = g.dbSchemaTablesItems.FieldBuilder(1).(*array.StringBuilder)
+	g.tableColumnsBuilder = g.dbSchemaTablesItems.FieldBuilder(2).(*array.ListBuilder)
+	g.tableColumnsItems = g.tableColumnsBuilder.ValueBuilder().(*array.StructBuilder)
+	g.columnNameBuilder = g.tableColumnsItems.FieldBuilder(0).(*array.StringBuilder)
+	g.ordinalPositionBuilder = g.tableColumnsItems.FieldBuilder(1).(*array.Int32Builder)
+	g.remarksBuilder = g.tableColumnsItems.FieldBuilder(2).(*array.StringBuilder)
+	g.xdbcDataTypeBuilder = g.tableColumnsItems.FieldBuilder(3).(*array.Int16Builder)
+	g.xdbcTypeNameBuilder = g.tableColumnsItems.FieldBuilder(4).(*array.StringBuilder)
+	g.xdbcColumnSizeBuilder = g.tableColumnsItems.FieldBuilder(5).(*array.Int32Builder)
+	g.xdbcDecimalDigitsBuilder = g.tableColumnsItems.FieldBuilder(6).(*array.Int16Builder)
+	g.xdbcNumPrecRadixBuilder = g.tableColumnsItems.FieldBuilder(7).(*array.Int16Builder)
+	g.xdbcNullableBuilder = g.tableColumnsItems.FieldBuilder(8).(*array.Int16Builder)
+	g.xdbcColumnDefBuilder = g.tableColumnsItems.FieldBuilder(9).(*array.StringBuilder)
+	g.xdbcSqlDataTypeBuilder = g.tableColumnsItems.FieldBuilder(10).(*array.Int16Builder)
+	g.xdbcDatetimeSubBuilder = g.tableColumnsItems.FieldBuilder(11).(*array.Int16Builder)
+	g.xdbcCharOctetLengthBuilder = g.tableColumnsItems.FieldBuilder(12).(*array.Int32Builder)
+	g.xdbcIsNullableBuilder = g.tableColumnsItems.FieldBuilder(13).(*array.StringBuilder)
+	g.xdbcScopeCatalogBuilder = g.tableColumnsItems.FieldBuilder(14).(*array.StringBuilder)
+	g.xdbcScopeSchemaBuilder = g.tableColumnsItems.FieldBuilder(15).(*array.StringBuilder)
+	g.xdbcScopeTableBuilder = g.tableColumnsItems.FieldBuilder(16).(*array.StringBuilder)
+	g.xdbcIsAutoincrementBuilder = g.tableColumnsItems.FieldBuilder(17).(*array.BooleanBuilder)
+	g.xdbcIsGeneratedcolumnBuilder = g.tableColumnsItems.FieldBuilder(18).(*array.BooleanBuilder)
+	g.tableConstraintsBuilder = g.dbSchemaTablesItems.FieldBuilder(3).(*array.ListBuilder)
+
+	return nil
+}
+
+func (g *GetObjects) Release() {
+	g.builder.Release()
+}
+
+func (g *GetObjects) Finish() (array.RecordReader, error) {
+	record := g.builder.NewRecord()
+	defer record.Release()
+
+	result, err := array.NewRecordReader(g.builder.Schema(), []arrow.Record{record})
+	if err != nil {
+		return nil, adbc.Error{
+			Msg:  err.Error(),
+			Code: adbc.StatusInternal,
+		}
+	}
+	return result, nil
+}
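+
+// Typical call sequence (illustrative sketch; this mirrors how the
+// snowflake driver's GetObjects uses this helper, with hypothetical
+// getDBSchemas/getTables callbacks):
+//
+//	g := GetObjects{Ctx: ctx, Depth: depth, Catalog: catalog}
+//	if err := g.Init(mem, getDBSchemas, getTables); err != nil {
+//		return nil, err
+//	}
+//	defer g.Release()
+//	for _, catalogName := range catalogNames {
+//		g.AppendCatalog(catalogName)
+//	}
+//	return g.Finish()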
+
+func (g *GetObjects) AppendCatalog(catalogName string) {
+	if g.catalogPattern != nil && !g.catalogPattern.MatchString(catalogName) {
+		return
+	}
+	g.catalogNameBuilder.Append(catalogName)
+
+	if g.Depth == adbc.ObjectDepthCatalogs {
+		g.catalogDbSchemasBuilder.AppendNull()
+		return
+	}
+
+	g.catalogDbSchemasBuilder.Append(true)
+
+	for _, dbSchemaName := range g.schemaLookup[catalogName] {
+		g.appendDbSchema(catalogName, dbSchemaName)
+	}
+}
+
+func (g *GetObjects) appendDbSchema(catalogName, dbSchemaName string) {
+	g.dbSchemaNameBuilder.Append(dbSchemaName)
+	g.catalogDbSchemasItems.Append(true)
+
+	if g.Depth == adbc.ObjectDepthDBSchemas {
+		g.dbSchemaTablesBuilder.AppendNull()
+		return
+	}
+	g.dbSchemaTablesBuilder.Append(true)
+
+	for _, tableInfo := range g.tableLookup[CatalogAndSchema{
+		Catalog: catalogName,
+		Schema:  dbSchemaName,
+	}] {
+		g.appendTableInfo(tableInfo)
+	}
+}
+
+func (g *GetObjects) appendTableInfo(tableInfo TableInfo) {
+	g.tableNameBuilder.Append(tableInfo.Name)
+	g.tableTypeBuilder.Append(tableInfo.TableType)
+	g.dbSchemaTablesItems.Append(true)
+
+	if g.Depth == adbc.ObjectDepthTables {
+		g.tableColumnsBuilder.AppendNull()
+		g.tableConstraintsBuilder.AppendNull()
+		return
+	}
+	g.tableColumnsBuilder.Append(true)
+	// TODO: unimplemented for now
+	g.tableConstraintsBuilder.Append(true)
+
+	if tableInfo.Schema == nil {
+		return
+	}
+
+	for colIndex, column := range tableInfo.Schema.Fields() {
+		if g.columnNamePattern != nil && !g.columnNamePattern.MatchString(column.Name) {
+			continue
+		}
+		g.columnNameBuilder.Append(column.Name)
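+		// field metadata drives remarks and ordinal_position: e.g.
+		// {"COMMENT": "user id", "ORDINAL_POSITION": "3"} yields remarks
+		// "user id" and position 3; absent metadata, the position falls
+		// back to colIndex + 1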
+		if !column.HasMetadata() {
+			g.ordinalPositionBuilder.Append(int32(colIndex + 1))
+			g.remarksBuilder.AppendNull()
+		} else {
+			if remark, ok := column.Metadata.GetValue("COMMENT"); ok {
+				g.remarksBuilder.Append(remark)
+			} else {
+				g.remarksBuilder.AppendNull()
+			}
+
+			pos := int32(colIndex + 1)
+			if ordinal, ok := column.Metadata.GetValue("ORDINAL_POSITION"); ok {
+				v, err := strconv.ParseInt(ordinal, 10, 32)
+				if err == nil {
+					pos = int32(v)
+				}
+			}
+			g.ordinalPositionBuilder.Append(pos)
+		}
+
+		g.xdbcDataTypeBuilder.AppendNull()
+		g.xdbcTypeNameBuilder.AppendNull()
+		g.xdbcColumnSizeBuilder.AppendNull()
+		g.xdbcDecimalDigitsBuilder.AppendNull()
+		g.xdbcNumPrecRadixBuilder.AppendNull()
+		g.xdbcNullableBuilder.AppendNull()
+		g.xdbcColumnDefBuilder.AppendNull()
+		g.xdbcSqlDataTypeBuilder.AppendNull()
+		g.xdbcDatetimeSubBuilder.AppendNull()
+		g.xdbcCharOctetLengthBuilder.AppendNull()
+		g.xdbcIsNullableBuilder.AppendNull()
+		g.xdbcScopeCatalogBuilder.AppendNull()
+		g.xdbcScopeSchemaBuilder.AppendNull()
+		g.xdbcScopeTableBuilder.AppendNull()
+		g.xdbcIsAutoincrementBuilder.AppendNull()
+		g.xdbcIsGeneratedcolumnBuilder.AppendNull()
+
+		g.tableColumnsItems.Append(true)
+	}
+}
diff --git a/go/adbc/driver/snowflake/connection.go b/go/adbc/driver/snowflake/connection.go
new file mode 100644
index 0000000..fdb8bcd
--- /dev/null
+++ b/go/adbc/driver/snowflake/connection.go
@@ -0,0 +1,805 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package snowflake
+
+import (
+	"context"
+	"database/sql"
+	"database/sql/driver"
+	"fmt"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow-adbc/go/adbc/driver/internal"
+	"github.com/apache/arrow/go/v12/arrow"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/snowflakedb/gosnowflake"
+)
+
+type snowflakeConn interface {
+	driver.Conn
+	driver.ConnBeginTx
+	driver.ConnPrepareContext
+	driver.ExecerContext
+	driver.QueryerContext
+	driver.Pinger
+	QueryArrowStream(context.Context, string, ...driver.NamedValue) (gosnowflake.ArrowStreamLoader, error)
+}
+
+type cnxn struct {
+	cn    snowflakeConn
+	db    *database
+	ctor  gosnowflake.Connector
+	sqldb *sql.DB
+
+	activeTransaction bool
+}
+
+// Metadata methods
+// Generally these methods return an array.RecordReader that
+// can be consumed to retrieve metadata about the database as Arrow
+// data. The returned metadata has an expected schema given in the
+// doc strings of the specific methods. Schema fields are nullable
+// unless otherwise marked. While no Statement is used in these
+// methods, the result set may count as an active statement to the
+// driver for the purposes of concurrency management (e.g. if the
+// driver has a limit on concurrent active statements and it must
+// execute a SQL query internally in order to implement the metadata
+// method).
+//
+// Some methods accept "search pattern" arguments, which are strings
+// that can contain the special character "%" to match zero or more
+// characters, or "_" to match exactly one character. (See the
+// documentation of DatabaseMetaData in JDBC or "Pattern Value Arguments"
+// in the ODBC documentation.) Escaping is not currently supported.
+
+// GetInfo returns metadata about the database/driver.
+//
+// The result is an Arrow dataset with the following schema:
+//
+//	Field Name                  | Field Type
+//	----------------------------|-----------------------------
+//	info_name                   | uint32 not null
+//	info_value                  | INFO_SCHEMA
+//
+// INFO_SCHEMA is a dense union with members:
+//
+//	Field Name (Type Code)      | Field Type
+//	----------------------------|-----------------------------
+//	string_value (0)            | utf8
+//	bool_value (1)              | bool
+//	int64_value (2)             | int64
+//	int32_bitmask (3)           | int32
+//	string_list (4)             | list<utf8>
+//	int32_to_int32_list_map (5) | map<int32, list<int32>>
+//
+// Each metadatum is identified by an integer code. The recognized
+// codes are defined as constants. Codes [0, 10_000) are reserved
+// for ADBC usage. Drivers/vendors will ignore requests for unrecognized
+// codes (the row will be omitted from the result).
+func (c *cnxn) GetInfo(ctx context.Context, infoCodes []adbc.InfoCode) (array.RecordReader, error) {
+	const strValTypeID arrow.UnionTypeCode = 0
+
+	if len(infoCodes) == 0 {
+		infoCodes = infoSupportedCodes
+	}
+
+	bldr := array.NewRecordBuilder(c.db.alloc, adbc.GetInfoSchema)
+	defer bldr.Release()
+	bldr.Reserve(len(infoCodes))
+
+	infoNameBldr := bldr.Field(0).(*array.Uint32Builder)
+	infoValueBldr := bldr.Field(1).(*array.DenseUnionBuilder)
+	strInfoBldr := infoValueBldr.Child(0).(*array.StringBuilder)
+
+	for _, code := range infoCodes {
+		switch code {
+		case adbc.InfoDriverName:
+			infoNameBldr.Append(uint32(code))
+			infoValueBldr.Append(strValTypeID)
+			strInfoBldr.Append(infoDriverName)
+		case adbc.InfoDriverVersion:
+			infoNameBldr.Append(uint32(code))
+			infoValueBldr.Append(strValTypeID)
+			strInfoBldr.Append(infoDriverVersion)
+		case adbc.InfoDriverArrowVersion:
+			infoNameBldr.Append(uint32(code))
+			infoValueBldr.Append(strValTypeID)
+			strInfoBldr.Append(infoDriverArrowVersion)
+		case adbc.InfoVendorName:
+			infoNameBldr.Append(uint32(code))
+			infoValueBldr.Append(strValTypeID)
+			strInfoBldr.Append(infoVendorName)
+		default:
+			infoNameBldr.Append(uint32(code))
+			infoValueBldr.AppendNull()
+		}
+	}
+
+	final := bldr.NewRecord()
+	defer final.Release()
+	return array.NewRecordReader(adbc.GetInfoSchema, []arrow.Record{final})
+}
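+
+// Consuming the reader might look like this (illustrative sketch):
+//
+//	rdr, err := cnxn.GetInfo(ctx, []adbc.InfoCode{adbc.InfoDriverName})
+//	if err != nil {
+//		return err
+//	}
+//	defer rdr.Release()
+//	for rdr.Next() {
+//		rec := rdr.Record() // info_name: uint32, info_value: dense union
+//		_ = rec
+//	}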
+
+// GetObjects gets a hierarchical view of all catalogs, database schemas,
+// tables, and columns.
+//
+// The result is an Arrow Dataset with the following schema:
+//
+//	Field Name                  | Field Type
+//	----------------------------|----------------------------
+//	catalog_name                | utf8
+//	catalog_db_schemas          | list<DB_SCHEMA_SCHEMA>
+//
+// DB_SCHEMA_SCHEMA is a Struct with the fields:
+//
+//	Field Name                  | Field Type
+//	----------------------------|----------------------------
+//	db_schema_name              | utf8
+//	db_schema_tables            | list<TABLE_SCHEMA>
+//
+// TABLE_SCHEMA is a Struct with the fields:
+//
+//	Field Name                  | Field Type
+//	----------------------------|----------------------------
+//	table_name                  | utf8 not null
+//	table_type                  | utf8 not null
+//	table_columns               | list<COLUMN_SCHEMA>
+//	table_constraints           | list<CONSTRAINT_SCHEMA>
+//
+// COLUMN_SCHEMA is a Struct with the fields:
+//
+//	Field Name                  | Field Type    | Comments
+//	----------------------------|---------------|---------
+//	column_name                 | utf8 not null |
+//	ordinal_position            | int32         | (1)
+//	remarks                     | utf8          | (2)
+//	xdbc_data_type              | int16         | (3)
+//	xdbc_type_name              | utf8          | (3)
+//	xdbc_column_size            | int32         | (3)
+//	xdbc_decimal_digits         | int16         | (3)
+//	xdbc_num_prec_radix         | int16         | (3)
+//	xdbc_nullable               | int16         | (3)
+//	xdbc_column_def             | utf8          | (3)
+//	xdbc_sql_data_type          | int16         | (3)
+//	xdbc_datetime_sub           | int16         | (3)
+//	xdbc_char_octet_length      | int32         | (3)
+//	xdbc_is_nullable            | utf8          | (3)
+//	xdbc_scope_catalog          | utf8          | (3)
+//	xdbc_scope_schema           | utf8          | (3)
+//	xdbc_scope_table            | utf8          | (3)
+//	xdbc_is_autoincrement       | bool          | (3)
+//	xdbc_is_generatedcolumn     | bool          | (3)
+//
+//	1. The column's ordinal position in the table (starting from 1).
+//	2. Database-specific description of the column.
+//	3. Optional value. Should be null if not supported by the driver.
+//	   xdbc_ values are meant to provide JDBC/ODBC-compatible metadata
+//	   in an agnostic manner.
+//
+// CONSTRAINT_SCHEMA is a Struct with the fields:
+//
+//	Field Name                  | Field Type           | Comments
+//	----------------------------|----------------------|---------
+//	constraint_name             | utf8                 |
+//	constraint_type             | utf8 not null        | (1)
+//	constraint_column_names     | list<utf8> not null  | (2)
+//	constraint_column_usage     | list<USAGE_SCHEMA>   | (3)
+//
+// 1. One of 'CHECK', 'FOREIGN KEY', 'PRIMARY KEY', or 'UNIQUE'.
+// 2. The columns on the current table that are constrained, in order.
+// 3. For FOREIGN KEY only, the referenced table and columns.
+//
+// USAGE_SCHEMA is a Struct with fields:
+//
+//	Field Name                  | Field Type
+//	----------------------------|----------------------------
+//	fk_catalog                  | utf8
+//	fk_db_schema                | utf8
+//	fk_table                    | utf8 not null
+//	fk_column_name              | utf8 not null
+//
+// For the parameters: if nil is passed, the result is not filtered on
+// that field at all. If an empty string is passed, only objects without
+// that property (i.e. without a catalog or db schema) are returned.
+//
+// tableName and columnName must be either nil (do not filter by
+// table name or column name) or non-empty.
+//
+// All non-empty, non-nil strings should be a search pattern (as described
+// earlier).
+func (c *cnxn) GetObjects(ctx context.Context, depth adbc.ObjectDepth, catalog *string, dbSchema *string, tableName *string, columnName *string, tableType []string) (array.RecordReader, error) {
+	g := internal.GetObjects{Ctx: ctx, Depth: depth, Catalog: catalog, DbSchema: dbSchema, TableName: tableName, ColumnName: columnName, TableType: tableType}
+	if err := g.Init(c.db.alloc, c.getObjectsDbSchemas, c.getObjectsTables); err != nil {
+		return nil, err
+	}
+	defer g.Release()
+
+	rows, err := c.sqldb.QueryContext(ctx, "SHOW TERSE DATABASES")
+	if err != nil {
+		return nil, err
+	}
+	defer rows.Close()
+
+	var (
+		created              time.Time
+		name                 string
+		kind, dbname, schema sql.NullString
+	)
+	for rows.Next() {
+		if err := rows.Scan(&created, &name, &kind, &dbname, &schema); err != nil {
+			return nil, errToAdbcErr(adbc.StatusInvalidData, err)
+		}
+
+		// SNOWFLAKE catalog contains functions and no tables
+		if name == "SNOWFLAKE" {
+			continue
+		}
+
+		// the schema for SHOW TERSE DATABASES is:
+		// created_on:timestamp, name:text, kind:null, database_name:null, schema_name:null
+		// the last three columns are always null because they do not
+		// apply to databases, so only the scanned name is used here
+		g.AppendCatalog(name)
+	}
+
+	return g.Finish()
+}
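+
+// Example (illustrative sketch): list the schemas in catalogs whose
+// names start with "MY", using the "%" search-pattern syntax described
+// above:
+//
+//	cat := "MY%"
+//	rdr, err := cnxn.GetObjects(ctx, adbc.ObjectDepthDBSchemas, &cat, nil, nil, nil, nil)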
+
+func (c *cnxn) getObjectsDbSchemas(ctx context.Context, depth adbc.ObjectDepth, catalog *string, dbSchema *string) (result map[string][]string, err error) {
+	if depth == adbc.ObjectDepthCatalogs {
+		return
+	}
+
+	conditions := make([]string, 0)
+	if catalog != nil && *catalog != "" {
+		conditions = append(conditions, ` CATALOG_NAME LIKE \'`+*catalog+`\'`)
+	}
+	if dbSchema != nil && *dbSchema != "" {
+		conditions = append(conditions, ` SCHEMA_NAME LIKE \'`+*dbSchema+`\'`)
+	}
+
+	cond := strings.Join(conditions, " AND ")
+	if cond != "" {
+		cond = `statement := 'SELECT * FROM (' || statement || ') WHERE ` + cond + `';`
+	}
+
+	result = make(map[string][]string)
+	const queryPrefix = `DECLARE
+	    c1 CURSOR FOR SELECT DATABASE_NAME FROM INFORMATION_SCHEMA.DATABASES;
+			res RESULTSET;
+			counter INTEGER DEFAULT 0;
+			statement VARCHAR DEFAULT '';
+		BEGIN
+		  FOR rec IN c1 DO
+				IF (counter > 0) THEN
+				  statement := statement || ' UNION ALL ';
+				END IF;
+				statement := statement || ' SELECT CATALOG_NAME, SCHEMA_NAME FROM ' || rec.database_name || '.INFORMATION_SCHEMA.SCHEMATA';
+				counter := counter + 1;
+			END FOR;
+		  `
+	const querySuffix = `
+	    res := (EXECUTE IMMEDIATE :statement);
+			RETURN TABLE (res);
+		END;`
+
+	query := queryPrefix + cond + querySuffix
+	var rows *sql.Rows
+	rows, err = c.sqldb.QueryContext(ctx, query)
+	if err != nil {
+		err = errToAdbcErr(adbc.StatusIO, err)
+		return
+	}
+	defer rows.Close()
+
+	var catalogName, schemaName string
+	for rows.Next() {
+		if err = rows.Scan(&catalogName, &schemaName); err != nil {
+			err = errToAdbcErr(adbc.StatusIO, err)
+			return
+		}
+
+		cat, ok := result[catalogName]
+		if !ok {
+			cat = make([]string, 0, 1)
+		}
+		result[catalogName] = append(cat, schemaName)
+	}
+
+	return
+}
+
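+// loc caches the process-local time zone; it parameterizes the Arrow
+// timestamp type used for TIMESTAMP_LTZ columns below.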
+var loc = time.Now().Location()
+
+func toField(name string, isnullable bool, dataType string, numPrec, numPrecRadix, numScale sql.NullInt16, isIdent bool, identGen, identInc, comment sql.NullString, ordinalPos int) (ret arrow.Field) {
+	ret.Name, ret.Nullable = name, isnullable
+	switch dataType {
+	case "NUMBER":
+		if !numScale.Valid || numScale.Int16 == 0 {
+			ret.Type = arrow.PrimitiveTypes.Int64
+		} else {
+			ret.Type = arrow.PrimitiveTypes.Float64
+		}
+	case "FLOAT":
+		fallthrough
+	case "DOUBLE":
+		ret.Type = arrow.PrimitiveTypes.Float64
+	case "TEXT":
+		ret.Type = arrow.BinaryTypes.String
+	case "BINARY":
+		ret.Type = arrow.BinaryTypes.Binary
+	case "BOOLEAN":
+		ret.Type = arrow.FixedWidthTypes.Boolean
+	case "ARRAY":
+		fallthrough
+	case "VARIANT":
+		fallthrough
+	case "OBJECT":
+		// snowflake will return each value as a string
+		ret.Type = arrow.BinaryTypes.String
+	case "DATE":
+		ret.Type = arrow.FixedWidthTypes.Date32
+	case "TIME":
+		ret.Type = arrow.FixedWidthTypes.Time64ns
+	case "DATETIME":
+		fallthrough
+	case "TIMESTAMP", "TIMESTAMP_NTZ":
+		ret.Type = &arrow.TimestampType{Unit: arrow.Nanosecond}
+	case "TIMESTAMP_LTZ":
+		ret.Type = &arrow.TimestampType{Unit: arrow.Nanosecond, TimeZone: loc.String()}
+	case "TIMESTAMP_TZ":
+		ret.Type = arrow.FixedWidthTypes.Timestamp_ns
+	case "GEOGRAPHY":
+		fallthrough
+	case "GEOMETRY":
+		ret.Type = arrow.BinaryTypes.String
+	}
+
+	md := make(map[string]string)
+	md["TYPE_NAME"] = dataType
+	if isIdent {
+		md["IS_IDENTITY"] = "YES"
+		md["IDENTITY_GENERATION"] = identGen.String
+		md["IDENTITY_INCREMENT"] = identInc.String
+	}
+	if comment.Valid {
+		md["COMMENT"] = comment.String
+	}
+	md["ORDINAL_POSITION"] = strconv.Itoa(ordinalPos)
+
+	ret.Metadata = arrow.MetadataFrom(md)
+	return
+}
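+
+// For example, a NUMBER(38,0) column maps to int64 while NUMBER(10,2)
+// maps to float64, mirroring the FIXED handling in record_reader.go.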
+
+func (c *cnxn) getObjectsTables(ctx context.Context, depth adbc.ObjectDepth, catalog *string, dbSchema *string, tableName *string, columnName *string, tableType []string) (result internal.SchemaToTableInfo, err error) {
+	if depth == adbc.ObjectDepthCatalogs || depth == adbc.ObjectDepthDBSchemas {
+		return
+	}
+
+	result = make(internal.SchemaToTableInfo)
+	includeSchema := depth == adbc.ObjectDepthAll || depth == adbc.ObjectDepthColumns
+
+	conditions := make([]string, 0)
+	if catalog != nil && *catalog != "" {
+		conditions = append(conditions, ` TABLE_CATALOG ILIKE \'`+*catalog+`\'`)
+	}
+	if dbSchema != nil && *dbSchema != "" {
+		conditions = append(conditions, ` TABLE_SCHEMA ILIKE \'`+*dbSchema+`\'`)
+	}
+	if tableName != nil && *tableName != "" {
+		conditions = append(conditions, ` TABLE_NAME ILIKE \'`+*tableName+`\'`)
+	}
+
+	const queryPrefix = `DECLARE
+		c1 CURSOR FOR SELECT DATABASE_NAME FROM INFORMATION_SCHEMA.DATABASES;
+		res RESULTSET;
+		counter INTEGER DEFAULT 0;
+		statement VARCHAR DEFAULT '';
+	BEGIN
+		FOR rec IN c1 DO
+			IF (counter > 0) THEN
+				statement := statement || ' UNION ALL ';
+			END IF;
+			`
+
+	const noSchema = `statement := statement || ' SELECT table_catalog, table_schema, table_name, table_type FROM ' || rec.database_name || '.INFORMATION_SCHEMA.TABLES';
+			counter := counter + 1;
+		END FOR;
+		`
+
+	const getSchema = `statement := statement ||
+		' SELECT
+				table_catalog, table_schema, table_name, column_name,
+				ordinal_position, is_nullable::boolean, data_type, numeric_precision,
+				numeric_precision_radix, numeric_scale, is_identity::boolean,
+				identity_generation, identity_increment, comment
+		FROM ' || rec.database_name || '.INFORMATION_SCHEMA.COLUMNS';
+
+		  counter := counter + 1;
+		END FOR;
+	  `
+
+	const querySuffix = `
+		res := (EXECUTE IMMEDIATE :statement);
+		RETURN TABLE (res);
+	END;`
+
+	// first populate the tables and table types
+	var rows *sql.Rows
+	var tblConditions []string
+	if len(tableType) > 0 {
+		tblConditions = append(conditions, ` TABLE_TYPE IN (\'`+strings.Join(tableType, `\',\'`)+`\')`)
+	} else {
+		tblConditions = conditions
+	}
+
+	cond := strings.Join(tblConditions, " AND ")
+	if cond != "" {
+		cond = `statement := 'SELECT * FROM (' || statement || ') WHERE ` + cond + `';`
+	}
+	query := queryPrefix + noSchema + cond + querySuffix
+	rows, err = c.sqldb.QueryContext(ctx, query)
+	if err != nil {
+		err = errToAdbcErr(adbc.StatusIO, err)
+		return
+	}
+	defer rows.Close()
+
+	var tblCat, tblSchema, tblName, tblType string
+	for rows.Next() {
+		if err = rows.Scan(&tblCat, &tblSchema, &tblName, &tblType); err != nil {
+			err = errToAdbcErr(adbc.StatusIO, err)
+			return
+		}
+
+		key := internal.CatalogAndSchema{
+			Catalog: tblCat, Schema: tblSchema}
+
+		result[key] = append(result[key], internal.TableInfo{
+			Name: tblName, TableType: tblType})
+	}
+
+	if includeSchema {
+		// if we need to include the schemas of the tables, make another fetch
+		// to fetch the columns and column info
+		if columnName != nil && *columnName != "" {
+			conditions = append(conditions, ` column_name ILIKE \'`+*columnName+`\'`)
+		}
+		cond = strings.Join(conditions, " AND ")
+		if cond != "" {
+			cond = " WHERE " + cond
+		}
+		cond = `statement := 'SELECT * FROM (' || statement || ')` + cond +
+			` ORDER BY table_catalog, table_schema, table_name, ordinal_position';`
+		query = queryPrefix + getSchema + cond + querySuffix
+		rows, err = c.sqldb.QueryContext(ctx, query)
+		if err != nil {
+			err = errToAdbcErr(adbc.StatusIO, err)
+			return
+		}
+		defer rows.Close()
+
+		var (
+			colName, dataType                           string
+			identGen, identIncrement, comment           sql.NullString
+			ordinalPos                                  int
+			numericPrec, numericPrecRadix, numericScale sql.NullInt16
+			isNullable, isIdent                         bool
+
+			prevKey      internal.CatalogAndSchema
+			curTableInfo *internal.TableInfo
+			fieldList    = make([]arrow.Field, 0)
+		)
+
+		for rows.Next() {
+			// order here matches the order of the columns requested in the query
+			err = rows.Scan(&tblCat, &tblSchema, &tblName, &colName,
+				&ordinalPos, &isNullable, &dataType, &numericPrec,
+				&numericPrecRadix, &numericScale, &isIdent, &identGen,
+				&identIncrement, &comment)
+			if err != nil {
+				err = errToAdbcErr(adbc.StatusIO, err)
+				return
+			}
+
+			key := internal.CatalogAndSchema{Catalog: tblCat, Schema: tblSchema}
+			if prevKey != key || (curTableInfo != nil && curTableInfo.Name != tblName) {
+				if len(fieldList) > 0 && curTableInfo != nil {
+					curTableInfo.Schema = arrow.NewSchema(fieldList, nil)
+					fieldList = fieldList[:0]
+				}
+
+				info := result[key]
+				for i := range info {
+					if info[i].Name == tblName {
+						curTableInfo = &info[i]
+						break
+					}
+				}
+			}
+
+			prevKey = key
+			fieldList = append(fieldList, toField(colName, isNullable, dataType, numericPrec, numericPrecRadix, numericScale, isIdent, identGen, identIncrement, comment, ordinalPos))
+		}
+
+		if len(fieldList) > 0 && curTableInfo != nil {
+			curTableInfo.Schema = arrow.NewSchema(fieldList, nil)
+		}
+	}
+	return
+}
+
+func descToField(name, typ, isnull, primary string, comment sql.NullString) (field arrow.Field, err error) {
+	field.Name = strings.ToLower(name)
+	if isnull == "Y" {
+		field.Nullable = true
+	}
+	md := make(map[string]string)
+	md["DATA_TYPE"] = typ
+	md["PRIMARY_KEY"] = primary
+	if comment.Valid {
+		md["COMMENT"] = comment.String
+	}
+	field.Metadata = arrow.MetadataFrom(md)
+
+	paren := strings.Index(typ, "(")
+	if paren == -1 {
+		// types without params
+		switch typ {
+		case "FLOAT":
+			fallthrough
+		case "DOUBLE":
+			field.Type = arrow.PrimitiveTypes.Float64
+		case "DATE":
+			field.Type = arrow.FixedWidthTypes.Date32
+		// array, object and variant are all represented as strings by
+		// snowflake's return
+		case "ARRAY":
+			fallthrough
+		case "OBJECT":
+			fallthrough
+		case "VARIANT":
+			field.Type = arrow.BinaryTypes.String
+		case "GEOGRAPHY":
+			fallthrough
+		case "GEOMETRY":
+			field.Type = arrow.BinaryTypes.String
+		case "BOOLEAN":
+			field.Type = arrow.FixedWidthTypes.Boolean
+		default:
+			err = adbc.Error{
+				Msg:  fmt.Sprintf("Snowflake Data Type %s not implemented", typ),
+				Code: adbc.StatusNotImplemented,
+			}
+		}
+		return
+	}
+
+	prefix := typ[:paren]
+	switch prefix {
+	case "VARCHAR", "TEXT":
+		field.Type = arrow.BinaryTypes.String
+	case "BINARY", "VARBINARY":
+		field.Type = arrow.BinaryTypes.Binary
+	case "NUMBER":
+		comma := strings.Index(typ, ",")
+		scale, err := strconv.ParseInt(typ[comma+1:len(typ)-1], 10, 32)
+		if err != nil {
+			return field, adbc.Error{
+				Msg:  "could not parse Scale from type '" + typ + "'",
+				Code: adbc.StatusInvalidData,
+			}
+		}
+		if scale == 0 {
+			field.Type = arrow.PrimitiveTypes.Int64
+		} else {
+			field.Type = arrow.PrimitiveTypes.Float64
+		}
+	case "TIME":
+		field.Type = arrow.FixedWidthTypes.Time64ns
+	case "DATETIME":
+		fallthrough
+	case "TIMESTAMP", "TIMESTAMP_NTZ":
+		field.Type = &arrow.TimestampType{Unit: arrow.Nanosecond}
+	case "TIMESTAMP_LTZ":
+		field.Type = &arrow.TimestampType{Unit: arrow.Nanosecond, TimeZone: loc.String()}
+	case "TIMESTAMP_TZ":
+		field.Type = arrow.FixedWidthTypes.Timestamp_ns
+	default:
+		err = adbc.Error{
+			Msg:  fmt.Sprintf("Snowflake Data Type %s not implemented", typ),
+			Code: adbc.StatusNotImplemented,
+		}
+	}
+	return
+}
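+
+// For example, DESC TABLE reports a plain text column as
+// VARCHAR(16777216), which maps to Arrow utf8, and NUMBER(38,0) maps
+// to int64.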
+
+func (c *cnxn) GetTableSchema(ctx context.Context, catalog *string, dbSchema *string, tableName string) (*arrow.Schema, error) {
+	tblParts := make([]string, 0, 3)
+	if catalog != nil {
+		tblParts = append(tblParts, strconv.Quote(strings.ToUpper(*catalog)))
+	}
+	if dbSchema != nil {
+		tblParts = append(tblParts, strconv.Quote(strings.ToUpper(*dbSchema)))
+	}
+	tblParts = append(tblParts, strconv.Quote(strings.ToUpper(tableName)))
+	fullyQualifiedTable := strings.Join(tblParts, ".")
+
+	rows, err := c.sqldb.QueryContext(ctx, `DESC TABLE `+fullyQualifiedTable)
+	if err != nil {
+		return nil, errToAdbcErr(adbc.StatusIO, err)
+	}
+	defer rows.Close()
+
+	var (
+		name, typ, kind, isnull, primary, unique string
+		def, check, expr, comment, policyName    sql.NullString
+		fields                                   = []arrow.Field{}
+	)
+
+	for rows.Next() {
+		err := rows.Scan(&name, &typ, &kind, &isnull, &def, &primary, &unique,
+			&check, &expr, &comment, &policyName)
+		if err != nil {
+			return nil, errToAdbcErr(adbc.StatusIO, err)
+		}
+
+		f, err := descToField(name, typ, isnull, primary, comment)
+		if err != nil {
+			return nil, err
+		}
+		fields = append(fields, f)
+	}
+
+	sc := arrow.NewSchema(fields, nil)
+	return sc, nil
+}
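+
+// Example (illustrative sketch): fetch the schema of a table in the
+// connection's current database and schema; names are upper-cased and
+// quoted by GetTableSchema itself:
+//
+//	sc, err := cnxn.GetTableSchema(ctx, nil, nil, "my_table")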
+
+// GetTableTypes returns a list of the table types in the database.
+//
+// The result is an arrow dataset with the following schema:
+//
+//	Field Name      | Field Type
+//	----------------|--------------
+//	table_type      | utf8 not null
+func (c *cnxn) GetTableTypes(_ context.Context) (array.RecordReader, error) {
+	bldr := array.NewRecordBuilder(c.db.alloc, adbc.TableTypesSchema)
+	defer bldr.Release()
+
+	bldr.Field(0).(*array.StringBuilder).AppendValues([]string{"BASE TABLE", "TEMPORARY TABLE", "VIEW"}, nil)
+	final := bldr.NewRecord()
+	defer final.Release()
+	return array.NewRecordReader(adbc.TableTypesSchema, []arrow.Record{final})
+}
+
+// Commit commits any pending transactions on this connection, it should
+// only be used if autocommit is disabled.
+//
+// Behavior is undefined if this is mixed with SQL transaction statements.
+func (c *cnxn) Commit(_ context.Context) error {
+	if !c.activeTransaction {
+		return adbc.Error{
+			Msg:  "no active transaction, cannot commit",
+			Code: adbc.StatusInvalidState,
+		}
+	}
+
+	_, err := c.cn.ExecContext(context.Background(), "COMMIT", nil)
+	if err != nil {
+		return errToAdbcErr(adbc.StatusInternal, err)
+	}
+
+	_, err = c.cn.ExecContext(context.Background(), "BEGIN", nil)
+	return errToAdbcErr(adbc.StatusInternal, err)
+}
+
+// Rollback rolls back any pending transactions. Only used if autocommit
+// is disabled.
+//
+// Behavior is undefined if this is mixed with SQL transaction statements.
+func (c *cnxn) Rollback(_ context.Context) error {
+	if !c.activeTransaction {
+		return adbc.Error{
+			Msg:  "no active transaction, cannot rollback",
+			Code: adbc.StatusInvalidState,
+		}
+	}
+
+	_, err := c.cn.ExecContext(context.Background(), "ROLLBACK", nil)
+	if err != nil {
+		return errToAdbcErr(adbc.StatusInternal, err)
+	}
+
+	_, err = c.cn.ExecContext(context.Background(), "BEGIN", nil)
+	return errToAdbcErr(adbc.StatusInternal, err)
+}
+
+// NewStatement initializes a new statement object tied to this connection
+func (c *cnxn) NewStatement() (adbc.Statement, error) {
+	return &statement{
+		alloc: c.db.alloc,
+		cnxn:  c,
+	}, nil
+}
+
+// Close closes this connection and releases any associated resources.
+func (c *cnxn) Close() error {
+	if c.sqldb == nil || c.cn == nil {
+		return adbc.Error{Code: adbc.StatusInvalidState}
+	}
+
+	if err := c.sqldb.Close(); err != nil {
+		return err
+	}
+	c.sqldb = nil
+
+	defer func() {
+		c.cn = nil
+	}()
+	return c.cn.Close()
+}
+
+// ReadPartition constructs a statement for a partition of a query. The
+// results can then be read independently using the returned RecordReader.
+//
+// A partition can be retrieved by using ExecutePartitions on a statement.
+func (c *cnxn) ReadPartition(ctx context.Context, serializedPartition []byte) (array.RecordReader, error) {
+	return nil, adbc.Error{
+		Code: adbc.StatusNotImplemented,
+		Msg:  "ReadPartition not yet implemented for snowflake driver",
+	}
+}
+
+func (c *cnxn) SetOption(key, value string) error {
+	switch key {
+	case adbc.OptionKeyAutoCommit:
+		switch value {
+		case adbc.OptionValueEnabled:
+			if c.activeTransaction {
+				_, err := c.cn.ExecContext(context.Background(), "COMMIT", nil)
+				if err != nil {
+					return errToAdbcErr(adbc.StatusInternal, err)
+				}
+				c.activeTransaction = false
+			}
+			_, err := c.cn.ExecContext(context.Background(), "ALTER SESSION SET AUTOCOMMIT = true", nil)
+			return errToAdbcErr(adbc.StatusInternal, err)
+		case adbc.OptionValueDisabled:
+			if !c.activeTransaction {
+				_, err := c.cn.ExecContext(context.Background(), "BEGIN", nil)
+				if err != nil {
+					return errToAdbcErr(adbc.StatusInternal, err)
+				}
+				c.activeTransaction = true
+			}
+			_, err := c.cn.ExecContext(context.Background(), "ALTER SESSION SET AUTOCOMMIT = false", nil)
+			return errToAdbcErr(adbc.StatusInternal, err)
+		default:
+			return adbc.Error{
+				Msg:  "[Snowflake] invalid value for option " + key + ": " + value,
+				Code: adbc.StatusInvalidArgument,
+			}
+		}
+	default:
+		return adbc.Error{
+			Msg:  "[Snowflake] unknown connection option " + key + ": " + value,
+			Code: adbc.StatusInvalidArgument,
+		}
+	}
+}
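+
+// Example (illustrative sketch): run statements in an explicit
+// transaction by disabling autocommit, then committing:
+//
+//	if err := cnxn.SetOption(adbc.OptionKeyAutoCommit, adbc.OptionValueDisabled); err != nil {
+//		return err
+//	}
+//	// ... execute statements on this connection ...
+//	if err := cnxn.Commit(ctx); err != nil {
+//		return err // or roll back with cnxn.Rollback(ctx)
+//	}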
diff --git a/go/adbc/driver/snowflake/driver.go b/go/adbc/driver/snowflake/driver.go
new file mode 100644
index 0000000..e6661df
--- /dev/null
+++ b/go/adbc/driver/snowflake/driver.go
@@ -0,0 +1,433 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package snowflake
+
+import (
+	"context"
+	"crypto/x509"
+	"database/sql"
+	"errors"
+	"fmt"
+	"net/url"
+	"os"
+	"runtime/debug"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow/go/v12/arrow/memory"
+	"github.com/snowflakedb/gosnowflake"
+	"golang.org/x/exp/maps"
+)
+
+const (
+	infoDriverName = "ADBC Snowflake Driver - Go"
+	infoVendorName = "Snowflake"
+
+	OptionDatabase  = "adbc.snowflake.sql.db"
+	OptionSchema    = "adbc.snowflake.sql.schema"
+	OptionWarehouse = "adbc.snowflake.sql.warehouse"
+	OptionRole      = "adbc.snowflake.sql.role"
+	OptionRegion    = "adbc.snowflake.sql.region"
+	OptionAccount   = "adbc.snowflake.sql.account"
+	OptionProtocol  = "adbc.snowflake.sql.uri.protocol"
+	OptionPort      = "adbc.snowflake.sql.uri.port"
+	OptionHost      = "adbc.snowflake.sql.uri.host"
+	// Specify auth type to use for snowflake connection based on
+	// what is supported by the snowflake driver. Default is
+	// "auth_snowflake" (use OptionValueAuth* consts to specify desired
+	// authentication type).
+	OptionAuthType = "adbc.snowflake.sql.auth_type"
+	// Login retry timeout, EXCLUDING network roundtrip and reading the
+	// HTTP response. Use a duration string accepted by
+	// https://pkg.go.dev/time#ParseDuration, such as "300ms", "1.5s" or
+	// "1m30s". Negative values are accepted, but their absolute value is used.
+	OptionLoginTimeout = "adbc.snowflake.sql.client_option.login_timeout"
+	// Request retry timeout, EXCLUDING network roundtrip and reading the
+	// HTTP response; same duration format as above.
+	OptionRequestTimeout = "adbc.snowflake.sql.client_option.request_timeout"
+	// JWT expiration timeout; same duration format as above.
+	OptionJwtExpireTimeout = "adbc.snowflake.sql.client_option.jwt_expire_timeout"
+	// Timeout for network roundtrip + reading the HTTP response; same
+	// duration format as above.
+	OptionClientTimeout = "adbc.snowflake.sql.client_option.client_timeout"
+
+	OptionApplicationName  = "adbc.snowflake.sql.client_option.app_name"
+	OptionSSLSkipVerify    = "adbc.snowflake.sql.client_option.tls_skip_verify"
+	OptionOCSPFailOpenMode = "adbc.snowflake.sql.client_option.ocsp_fail_open_mode"
+	// specify the token to use for OAuth or other forms of authentication
+	OptionAuthToken = "adbc.snowflake.sql.client_option.auth_token"
+	// specify the OKTAUrl to use for OKTA Authentication
+	OptionAuthOktaUrl = "adbc.snowflake.sql.client_option.okta_url"
+	// enable the session to persist even after the connection is closed
+	OptionKeepSessionAlive = "adbc.snowflake.sql.client_option.keep_session_alive"
+	// specify the RSA private key to use to sign the JWT
+	// this should point to a file containing a PKCS1 private key to be
+	// loaded. Commonly encoded in PEM blocks of type "RSA PRIVATE KEY"
+	OptionJwtPrivateKey    = "adbc.snowflake.sql.client_option.jwt_private_key"
+	OptionDisableTelemetry = "adbc.snowflake.sql.client_option.disable_telemetry"
+	// snowflake driver logging level
+	OptionLogTracing = "adbc.snowflake.sql.client_option.tracing"
+	// When true, the MFA token is cached in the credential manager. True by default
+	// on Windows/OSX, false for Linux
+	OptionClientRequestMFAToken = "adbc.snowflake.sql.client_option.cache_mfa_token"
+	// When true, the ID token is cached in the credential manager. True by default
+	// on Windows/OSX, false for Linux
+	OptionClientStoreTempCred = "adbc.snowflake.sql.client_option.store_temp_creds"
+
+	// auth types are implemented by the Snowflake driver in gosnowflake
+	// general username password authentication
+	OptionValueAuthSnowflake = "auth_snowflake"
+	// use OAuth authentication for snowflake connection
+	OptionValueAuthOAuth = "auth_oauth"
+	// use an external browser to access a FED and perform SSO auth
+	OptionValueAuthExternalBrowser = "auth_ext_browser"
+	// use a native OKTA URL to perform SSO authentication on Okta
+	OptionValueAuthOkta = "auth_okta"
+	// use a JWT to perform authentication
+	OptionValueAuthJwt = "auth_jwt"
+	// use a username and password with mfa
+	OptionValueAuthUserPassMFA = "auth_mfa"
+)
+
+var (
+	infoDriverVersion      string
+	infoDriverArrowVersion string
+	infoSupportedCodes     []adbc.InfoCode
+)
+
+func init() {
+	if info, ok := debug.ReadBuildInfo(); ok {
+		for _, dep := range info.Deps {
+			switch {
+			case dep.Path == "github.com/apache/arrow-adbc/go/adbc/driver/snowflake":
+				infoDriverVersion = dep.Version
+			case strings.HasPrefix(dep.Path, "github.com/apache/arrow/go/"):
+				infoDriverArrowVersion = dep.Version
+			}
+		}
+	}
+	// XXX: Deps not populated in tests
+	// https://github.com/golang/go/issues/33976
+	if infoDriverVersion == "" {
+		infoDriverVersion = "(unknown or development build)"
+	}
+	if infoDriverArrowVersion == "" {
+		infoDriverArrowVersion = "(unknown or development build)"
+	}
+
+	infoSupportedCodes = []adbc.InfoCode{
+		adbc.InfoDriverName,
+		adbc.InfoDriverVersion,
+		adbc.InfoDriverArrowVersion,
+		adbc.InfoVendorName,
+	}
+}
+
+func errToAdbcErr(code adbc.Status, err error) error {
+	if err == nil {
+		return nil
+	}
+
+	var e adbc.Error
+	if errors.As(err, &e) {
+		e.Code = code
+		return e
+	}
+
+	var sferr *gosnowflake.SnowflakeError
+	if errors.As(err, &sferr) {
+		var sqlstate [5]byte
+		// copy stops at the shorter of the two, so a SQLState of fewer
+		// than five bytes cannot cause an out-of-range panic
+		copy(sqlstate[:], sferr.SQLState)
+		return adbc.Error{
+			Code:       code,
+			Msg:        sferr.Error(),
+			VendorCode: int32(sferr.Number),
+			SqlState:   sqlstate,
+		}
+	}
+
+	return adbc.Error{
+		Msg:  err.Error(),
+		Code: code,
+	}
+}
+
+type Driver struct {
+	Alloc memory.Allocator
+}
+
+func (d Driver) NewDatabase(opts map[string]string) (adbc.Database, error) {
+	db := &database{alloc: d.Alloc}
+
+	opts = maps.Clone(opts)
+	if db.alloc == nil {
+		db.alloc = memory.DefaultAllocator
+	}
+
+	return db, db.SetOptions(opts)
+}
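+
+// Example (illustrative sketch): construct a database from discrete
+// options rather than a DSN URI:
+//
+//	drv := Driver{Alloc: memory.DefaultAllocator}
+//	db, err := drv.NewDatabase(map[string]string{
+//		adbc.OptionKeyUsername: "user",
+//		adbc.OptionKeyPassword: "pass",
+//		OptionAccount:          "my_account",
+//		OptionAuthType:         OptionValueAuthSnowflake,
+//	})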
+
+var (
+	drv         = gosnowflake.SnowflakeDriver{}
+	authTypeMap = map[string]gosnowflake.AuthType{
+		OptionValueAuthSnowflake:       gosnowflake.AuthTypeSnowflake,
+		OptionValueAuthOAuth:           gosnowflake.AuthTypeOAuth,
+		OptionValueAuthExternalBrowser: gosnowflake.AuthTypeExternalBrowser,
+		OptionValueAuthOkta:            gosnowflake.AuthTypeOkta,
+		OptionValueAuthJwt:             gosnowflake.AuthTypeJwt,
+		OptionValueAuthUserPassMFA:     gosnowflake.AuthTypeUsernamePasswordMFA,
+	}
+)
+
+type database struct {
+	cfg   *gosnowflake.Config
+	alloc memory.Allocator
+}
+
+func (d *database) SetOptions(cnOptions map[string]string) error {
+	uri, ok := cnOptions[adbc.OptionKeyURI]
+	if ok {
+		cfg, err := gosnowflake.ParseDSN(uri)
+		if err != nil {
+			return errToAdbcErr(adbc.StatusInvalidArgument, err)
+		}
+
+		d.cfg = cfg
+		delete(cnOptions, adbc.OptionKeyURI)
+	} else {
+		d.cfg = &gosnowflake.Config{}
+	}
+
+	var err error
+	for k, v := range cnOptions {
+		switch k {
+		case adbc.OptionKeyUsername:
+			d.cfg.User = v
+		case adbc.OptionKeyPassword:
+			d.cfg.Password = v
+		case OptionDatabase:
+			d.cfg.Database = v
+		case OptionSchema:
+			d.cfg.Schema = v
+		case OptionWarehouse:
+			d.cfg.Warehouse = v
+		case OptionRole:
+			d.cfg.Role = v
+		case OptionRegion:
+			d.cfg.Region = v
+		case OptionAccount:
+			d.cfg.Account = v
+		case OptionProtocol:
+			d.cfg.Protocol = v
+		case OptionHost:
+			d.cfg.Host = v
+		case OptionPort:
+			d.cfg.Port, err = strconv.Atoi(v)
+			if err != nil {
+				return adbc.Error{
+					Msg:  "error encountered parsing Port option: " + err.Error(),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionAuthType:
+			d.cfg.Authenticator, ok = authTypeMap[v]
+			if !ok {
+				return adbc.Error{
+					Msg:  "invalid option value for " + OptionAuthType + ": '" + v + "'",
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionLoginTimeout:
+			dur, err := time.ParseDuration(v)
+			if err != nil {
+				return adbc.Error{
+					Msg:  "could not parse duration for '" + OptionLoginTimeout + "': " + err.Error(),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+			if dur < 0 {
+				dur = -dur
+			}
+			d.cfg.LoginTimeout = dur
+		case OptionRequestTimeout:
+			dur, err := time.ParseDuration(v)
+			if err != nil {
+				return adbc.Error{
+					Msg:  "could not parse duration for '" + OptionRequestTimeout + "': " + err.Error(),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+			if dur < 0 {
+				dur = -dur
+			}
+			d.cfg.RequestTimeout = dur
+		case OptionJwtExpireTimeout:
+			dur, err := time.ParseDuration(v)
+			if err != nil {
+				return adbc.Error{
+					Msg:  "could not parse duration for '" + OptionJwtExpireTimeout + "': " + err.Error(),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+			if dur < 0 {
+				dur = -dur
+			}
+			d.cfg.JWTExpireTimeout = dur
+		case OptionClientTimeout:
+			dur, err := time.ParseDuration(v)
+			if err != nil {
+				return adbc.Error{
+					Msg:  "could not parse duration for '" + OptionClientTimeout + "': " + err.Error(),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+			if dur < 0 {
+				dur = -dur
+			}
+			d.cfg.ClientTimeout = dur
+		case OptionApplicationName:
+			d.cfg.Application = v
+		case OptionSSLSkipVerify:
+			switch v {
+			case adbc.OptionValueEnabled:
+				d.cfg.InsecureMode = true
+			case adbc.OptionValueDisabled:
+				d.cfg.InsecureMode = false
+			default:
+				return adbc.Error{
+					Msg:  fmt.Sprintf("Invalid value for database option '%s': '%s'", OptionSSLSkipVerify, v),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionOCSPFailOpenMode:
+			switch v {
+			case adbc.OptionValueEnabled:
+				d.cfg.OCSPFailOpen = gosnowflake.OCSPFailOpenTrue
+			case adbc.OptionValueDisabled:
+				d.cfg.OCSPFailOpen = gosnowflake.OCSPFailOpenFalse
+			default:
+				return adbc.Error{
+					Msg:  fmt.Sprintf("Invalid value for database option '%s': '%s'", OptionOCSPFailOpenMode, v),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionAuthToken:
+			d.cfg.Token = v
+		case OptionAuthOktaUrl:
+			d.cfg.OktaURL, err = url.Parse(v)
+			if err != nil {
+				return adbc.Error{
+					Msg:  fmt.Sprintf("error parsing URL for database option '%s': '%s'", k, v),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionKeepSessionAlive:
+			switch v {
+			case adbc.OptionValueEnabled:
+				d.cfg.KeepSessionAlive = true
+			case adbc.OptionValueDisabled:
+				d.cfg.KeepSessionAlive = false
+			default:
+				return adbc.Error{
+					Msg:  fmt.Sprintf("Invalid value for database option '%s': '%s'", OptionKeepSessionAlive, v),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionDisableTelemetry:
+			switch v {
+			case adbc.OptionValueEnabled:
+				d.cfg.DisableTelemetry = true
+			case adbc.OptionValueDisabled:
+				d.cfg.DisableTelemetry = false
+			default:
+				return adbc.Error{
+					Msg:  fmt.Sprintf("Invalid value for database option '%s': '%s'", OptionDisableTelemetry, v),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionJwtPrivateKey:
+			data, err := os.ReadFile(v)
+			if err != nil {
+				return adbc.Error{
+					Msg:  "could not read private key file '" + v + "': " + err.Error(),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+
+			d.cfg.PrivateKey, err = x509.ParsePKCS1PrivateKey(data)
+			if err != nil {
+				return adbc.Error{
+					Msg:  "failed parsing private key file '" + v + "': " + err.Error(),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionClientRequestMFAToken:
+			switch v {
+			case adbc.OptionValueEnabled:
+				d.cfg.ClientRequestMfaToken = gosnowflake.ConfigBoolTrue
+			case adbc.OptionValueDisabled:
+				d.cfg.ClientRequestMfaToken = gosnowflake.ConfigBoolFalse
+			default:
+				return adbc.Error{
+					Msg:  fmt.Sprintf("Invalid value for database option '%s': '%s'", OptionClientRequestMFAToken, v),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		case OptionClientStoreTempCred:
+			switch v {
+			case adbc.OptionValueEnabled:
+				d.cfg.ClientStoreTemporaryCredential = gosnowflake.ConfigBoolTrue
+			case adbc.OptionValueDisabled:
+				d.cfg.ClientStoreTemporaryCredential = gosnowflake.ConfigBoolFalse
+			default:
+				return adbc.Error{
+					Msg:  fmt.Sprintf("Invalid value for database option '%s': '%s'", OptionClientStoreTempCred, v),
+					Code: adbc.StatusInvalidArgument,
+				}
+			}
+		default:
+			if d.cfg.Params == nil {
+				d.cfg.Params = make(map[string]*string)
+			}
+			// copy v first: taking the address of the loop variable would
+			// leave every parameter pointing at the final iteration's value
+			val := v
+			d.cfg.Params[k] = &val
+		}
+	}
+	return nil
+}
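+
+// The duration-valued client options above accept time.ParseDuration
+// syntax; e.g. (sketch) setting OptionLoginTimeout to "1m30s" yields a
+// 90-second login timeout.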
+
+func (d *database) Open(ctx context.Context) (adbc.Connection, error) {
+	connector := gosnowflake.NewConnector(drv, *d.cfg)
+
+	ctx = gosnowflake.WithArrowAllocator(
+		gosnowflake.WithArrowBatches(ctx), d.alloc)
+
+	cn, err := connector.Connect(ctx)
+	if err != nil {
+		return nil, errToAdbcErr(adbc.StatusIO, err)
+	}
+
+	return &cnxn{cn: cn.(snowflakeConn), db: d, ctor: connector, sqldb: sql.OpenDB(connector)}, nil
+}
diff --git a/go/adbc/driver/snowflake/driver_test.go b/go/adbc/driver/snowflake/driver_test.go
new file mode 100644
index 0000000..face6a9
--- /dev/null
+++ b/go/adbc/driver/snowflake/driver_test.go
@@ -0,0 +1,226 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package snowflake_test
+
+import (
+	"context"
+	"database/sql"
+	"fmt"
+	"os"
+	"strings"
+	"testing"
+
+	"github.com/apache/arrow-adbc/go/adbc"
+	driver "github.com/apache/arrow-adbc/go/adbc/driver/snowflake"
+	"github.com/apache/arrow-adbc/go/adbc/validation"
+	"github.com/apache/arrow/go/v12/arrow"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/apache/arrow/go/v12/arrow/memory"
+	"github.com/snowflakedb/gosnowflake"
+	"github.com/stretchr/testify/require"
+	"github.com/stretchr/testify/suite"
+)
+
+type SnowflakeQuirks struct {
+	dsn       string
+	mem       *memory.CheckedAllocator
+	connector gosnowflake.Connector
+}
+
+func (s *SnowflakeQuirks) SetupDriver(t *testing.T) adbc.Driver {
+	s.mem = memory.NewCheckedAllocator(memory.DefaultAllocator)
+	cfg, err := gosnowflake.ParseDSN(s.dsn)
+	require.NoError(t, err)
+	s.connector = gosnowflake.NewConnector(gosnowflake.SnowflakeDriver{}, *cfg)
+	return driver.Driver{Alloc: s.mem}
+}
+
+func (s *SnowflakeQuirks) TearDownDriver(t *testing.T, _ adbc.Driver) {
+	s.mem.AssertSize(t, 0)
+}
+
+func (s *SnowflakeQuirks) DatabaseOptions() map[string]string {
+	return map[string]string{
+		adbc.OptionKeyURI: s.dsn,
+	}
+}
+
+func (s *SnowflakeQuirks) getSqlTypeFromArrowType(dt arrow.DataType) string {
+	switch dt.ID() {
+	case arrow.INT8, arrow.INT16, arrow.INT32, arrow.INT64:
+		return "INTEGER"
+	case arrow.FLOAT32:
+		return "float4"
+	case arrow.FLOAT64:
+		return "double"
+	case arrow.STRING:
+		return "text"
+	default:
+		return ""
+	}
+}
+
+func getArr(arr arrow.Array) interface{} {
+	switch arr := arr.(type) {
+	case *array.Int8:
+		v := arr.Int8Values()
+		return gosnowflake.Array(&v)
+	case *array.Uint8:
+		v := arr.Uint8Values()
+		return gosnowflake.Array(&v)
+	case *array.Int16:
+		v := arr.Int16Values()
+		return gosnowflake.Array(&v)
+	case *array.Uint16:
+		v := arr.Uint16Values()
+		return gosnowflake.Array(&v)
+	case *array.Int32:
+		v := arr.Int32Values()
+		return gosnowflake.Array(&v)
+	case *array.Uint32:
+		v := arr.Uint32Values()
+		return gosnowflake.Array(&v)
+	case *array.Int64:
+		v := arr.Int64Values()
+		return gosnowflake.Array(&v)
+	case *array.Uint64:
+		v := arr.Uint64Values()
+		return gosnowflake.Array(&v)
+	case *array.Float32:
+		v := arr.Float32Values()
+		return gosnowflake.Array(&v)
+	case *array.Float64:
+		v := arr.Float64Values()
+		return gosnowflake.Array(&v)
+	case *array.String:
+		v := make([]string, arr.Len())
+		for i := 0; i < arr.Len(); i++ {
+			if arr.IsNull(i) {
+				continue
+			}
+			v[i] = arr.Value(i)
+		}
+		return gosnowflake.Array(&v)
+	default:
+		panic(fmt.Errorf("unimplemented type %s", arr.DataType()))
+	}
+}
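+
+// gosnowflake.Array wraps a slice so that database/sql binds it as a
+// bulk array parameter; e.g. (sketch)
+//
+//	db.Exec("INSERT INTO t VALUES (?)", gosnowflake.Array(&[]int64{1, 2, 3}))
+//
+// inserts three rows in a single statement.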
+
+func (s *SnowflakeQuirks) CreateSampleTable(tableName string, r arrow.Record) error {
+	var b strings.Builder
+	b.WriteString("CREATE OR REPLACE TABLE ")
+	b.WriteString(tableName)
+	b.WriteString(" (")
+
+	for i := 0; i < int(r.NumCols()); i++ {
+		if i != 0 {
+			b.WriteString(", ")
+		}
+		f := r.Schema().Field(i)
+		b.WriteString(f.Name)
+		b.WriteByte(' ')
+		b.WriteString(s.getSqlTypeFromArrowType(f.Type))
+	}
+
+	b.WriteString(")")
+	db := sql.OpenDB(s.connector)
+	defer db.Close()
+
+	if _, err := db.Exec(b.String()); err != nil {
+		return err
+	}
+
+	insertQuery := "INSERT INTO " + tableName + " VALUES ("
+	bindings := strings.Repeat("?,", int(r.NumCols()))
+	insertQuery += bindings[:len(bindings)-1] + ")"
+
+	args := make([]interface{}, 0, r.NumCols())
+	for _, col := range r.Columns() {
+		args = append(args, getArr(col))
+	}
+
+	_, err := db.Exec(insertQuery, args...)
+	return err
+}
+
+func (s *SnowflakeQuirks) DropTable(cnxn adbc.Connection, tblname string) error {
+	stmt, err := cnxn.NewStatement()
+	if err != nil {
+		return err
+	}
+	defer stmt.Close()
+
+	if err = stmt.SetSqlQuery(`DROP TABLE IF EXISTS ` + tblname); err != nil {
+		return err
+	}
+
+	_, err = stmt.ExecuteUpdate(context.Background())
+	return err
+}
+
+func (s *SnowflakeQuirks) Alloc() memory.Allocator               { return s.mem }
+func (s *SnowflakeQuirks) BindParameter(_ int) string            { return "?" }
+func (s *SnowflakeQuirks) SupportsConcurrentStatements() bool    { return true }
+func (s *SnowflakeQuirks) SupportsPartitionedData() bool         { return false }
+func (s *SnowflakeQuirks) SupportsTransactions() bool            { return true }
+func (s *SnowflakeQuirks) SupportsGetParameterSchema() bool      { return false }
+func (s *SnowflakeQuirks) SupportsDynamicParameterBinding() bool { return false }
+func (s *SnowflakeQuirks) SupportsBulkIngest() bool              { return true }
+func (s *SnowflakeQuirks) GetMetadata(code adbc.InfoCode) interface{} {
+	switch code {
+	case adbc.InfoDriverName:
+		return "ADBC Snowflake Driver - Go"
+	// runtime/debug.ReadBuildInfo doesn't currently work for tests
+	// github.com/golang/go/issues/33976
+	case adbc.InfoDriverVersion:
+		return "(unknown or development build)"
+	case adbc.InfoDriverArrowVersion:
+		return "(unknown or development build)"
+	case adbc.InfoVendorName:
+		return "Snowflake"
+	}
+
+	return nil
+}
+
+func (s *SnowflakeQuirks) SampleTableSchemaMetadata(tblName string, dt arrow.DataType) arrow.Metadata {
+	switch dt.ID() {
+	case arrow.STRING:
+		return arrow.MetadataFrom(map[string]string{
+			"DATA_TYPE": "VARCHAR(16777216)", "PRIMARY_KEY": "N",
+		})
+	case arrow.INT64:
+		return arrow.MetadataFrom(map[string]string{
+			"DATA_TYPE": "NUMBER(38,0)", "PRIMARY_KEY": "N",
+		})
+	}
+	return arrow.Metadata{}
+}
+
+func TestADBCSnowflake(t *testing.T) {
+	uri := os.Getenv("SNOWFLAKE_URI")
+
+	if uri == "" {
+		t.Skip("no SNOWFLAKE_URI defined, skip snowflake driver tests")
+	}
+
+	q := &SnowflakeQuirks{dsn: uri}
+	suite.Run(t, &validation.DatabaseTests{Quirks: q})
+	suite.Run(t, &validation.ConnectionTests{Quirks: q})
+	suite.Run(t, &validation.StatementTests{Quirks: q})
+}
diff --git a/go/adbc/driver/snowflake/record_reader.go b/go/adbc/driver/snowflake/record_reader.go
new file mode 100644
index 0000000..c9bcac9
--- /dev/null
+++ b/go/adbc/driver/snowflake/record_reader.go
@@ -0,0 +1,392 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package snowflake
+
+import (
+	"context"
+	"math"
+	"strings"
+	"sync/atomic"
+	"time"
+
+	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow/go/v12/arrow"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/apache/arrow/go/v12/arrow/compute"
+	"github.com/apache/arrow/go/v12/arrow/ipc"
+	"github.com/apache/arrow/go/v12/arrow/memory"
+	"github.com/snowflakedb/gosnowflake"
+	"golang.org/x/sync/errgroup"
+)
+
+func identCol(_ context.Context, a arrow.Array) (arrow.Array, error) {
+	a.Retain()
+	return a, nil
+}
+
+type recordTransformer = func(context.Context, arrow.Record) (arrow.Record, error)
+type colTransformer = func(context.Context, arrow.Array) (arrow.Array, error)
+
+func getRecTransformer(sc *arrow.Schema, tr []colTransformer) recordTransformer {
+	return func(ctx context.Context, r arrow.Record) (arrow.Record, error) {
+		if len(tr) != int(r.NumCols()) {
+			return nil, adbc.Error{
+				Msg:  "mismatch in record cols and transformers",
+				Code: adbc.StatusInvalidState,
+			}
+		}
+
+		var (
+			err  error
+			cols = make([]arrow.Array, r.NumCols())
+		)
+		for i, col := range r.Columns() {
+			if cols[i], err = tr[i](ctx, col); err != nil {
+				return nil, errToAdbcErr(adbc.StatusInternal, err)
+			}
+			defer cols[i].Release()
+		}
+
+		return array.NewRecord(sc, cols, r.NumRows()), nil
+	}
+}
+
+func getTransformer(sc *arrow.Schema, ld gosnowflake.ArrowStreamLoader) (*arrow.Schema, recordTransformer) {
+	loc, types := ld.Location(), ld.RowTypes()
+
+	fields := make([]arrow.Field, len(sc.Fields()))
+	transformers := make([]colTransformer, len(sc.Fields()))
+	for i, f := range sc.Fields() {
+		srcMeta := types[i]
+
+		switch strings.ToUpper(srcMeta.Type) {
+		case "FIXED":
+			switch f.Type.ID() {
+			case arrow.DECIMAL, arrow.DECIMAL256:
+				if srcMeta.Scale == 0 {
+					f.Type = arrow.PrimitiveTypes.Int64
+				} else {
+					f.Type = arrow.PrimitiveTypes.Float64
+				}
+
+				// capture the resolved type in a local: f is reused across
+				// loop iterations, so the closure must not reference it
+				targetType := f.Type
+				transformers[i] = func(ctx context.Context, a arrow.Array) (arrow.Array, error) {
+					return compute.CastArray(ctx, a, compute.UnsafeCastOptions(targetType))
+				}
+			default:
+				if srcMeta.Scale != 0 {
+					f.Type = arrow.PrimitiveTypes.Float64
+					transformers[i] = func(ctx context.Context, a arrow.Array) (arrow.Array, error) {
+						result, err := compute.Divide(ctx, compute.ArithmeticOptions{NoCheckOverflow: true},
+							&compute.ArrayDatum{Value: a.Data()},
+							compute.NewDatum(math.Pow10(int(srcMeta.Scale))))
+						if err != nil {
+							return nil, err
+						}
+						defer result.Release()
+						return result.(*compute.ArrayDatum).MakeArray(), nil
+					}
+				} else {
+					f.Type = arrow.PrimitiveTypes.Int64
+					transformers[i] = func(ctx context.Context, a arrow.Array) (arrow.Array, error) {
+						return compute.CastArray(ctx, a, compute.SafeCastOptions(arrow.PrimitiveTypes.Int64))
+					}
+				}
+			}
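+			// e.g. with scale 2 the stored value 12345 represents 123.45,
+			// which both branches above surface as a float64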
+		case "TIME":
+			f.Type = arrow.FixedWidthTypes.Time64ns
+			dt := f.Type // capture locally; f is reused across iterations
+			transformers[i] = func(ctx context.Context, a arrow.Array) (arrow.Array, error) {
+				return compute.CastArray(ctx, a, compute.SafeCastOptions(dt))
+			}
+		case "TIMESTAMP_NTZ":
+			dt := &arrow.TimestampType{Unit: arrow.Nanosecond}
+			f.Type = dt
+			transformers[i] = func(ctx context.Context, a arrow.Array) (arrow.Array, error) {
+				pool := compute.GetAllocator(ctx)
+				tb := array.NewTimestampBuilder(pool, dt)
+				defer tb.Release()
+
+				if a.DataType().ID() == arrow.STRUCT {
+					structData := a.(*array.Struct)
+					epoch := structData.Field(0).(*array.Int64).Int64Values()
+					fraction := structData.Field(1).(*array.Int32).Int32Values()
+					for i := 0; i < a.Len(); i++ {
+						if a.IsNull(i) {
+							tb.AppendNull()
+							continue
+						}
+
+						tb.Append(arrow.Timestamp(time.Unix(epoch[i], int64(fraction[i])).UnixNano()))
+					}
+				} else {
+					for i, t := range a.(*array.Timestamp).TimestampValues() {
+						if a.IsNull(i) {
+							tb.AppendNull()
+							continue
+						}
+
+						val := time.Unix(0, int64(t)*int64(math.Pow10(9-int(srcMeta.Scale)))).UTC()
+						tb.Append(arrow.Timestamp(val.UnixNano()))
+					}
+				}
+				return tb.NewArray(), nil
+			}
+		case "TIMESTAMP_LTZ":
+			dt := &arrow.TimestampType{Unit: arrow.Nanosecond, TimeZone: loc.String()}
+			f.Type = dt
+			transformers[i] = func(ctx context.Context, a arrow.Array) (arrow.Array, error) {
+				pool := compute.GetAllocator(ctx)
+				tb := array.NewTimestampBuilder(pool, dt)
+				defer tb.Release()
+
+				if a.DataType().ID() == arrow.STRUCT {
+					structData := a.(*array.Struct)
+					epoch := structData.Field(0).(*array.Int64).Int64Values()
+					fraction := structData.Field(1).(*array.Int32).Int32Values()
+					for i := 0; i < a.Len(); i++ {
+						if a.IsNull(i) {
+							tb.AppendNull()
+							continue
+						}
+
+						tb.Append(arrow.Timestamp(time.Unix(epoch[i], int64(fraction[i])).UnixNano()))
+					}
+				} else {
+					for i, t := range a.(*array.Timestamp).TimestampValues() {
+						if a.IsNull(i) {
+							tb.AppendNull()
+							continue
+						}
+
+						q := int64(t) / int64(math.Pow10(int(srcMeta.Scale)))
+						r := int64(t) % int64(math.Pow10(int(srcMeta.Scale)))
+						tb.Append(arrow.Timestamp(time.Unix(q, r).UnixNano()))
+					}
+				}
+				return tb.NewArray(), nil
+			}
+		case "TIMESTAMP_TZ":
+			dt := &arrow.TimestampType{Unit: arrow.Nanosecond}
+			f.Type = dt
+			transformers[i] = func(ctx context.Context, a arrow.Array) (arrow.Array, error) {
+				pool := compute.GetAllocator(ctx)
+				tb := array.NewTimestampBuilder(pool, dt)
+				defer tb.Release()
+
+				structData := a.(*array.Struct)
+				if structData.NumField() == 2 {
+					epoch := structData.Field(0).(*array.Int64).Int64Values()
+					tzoffset := structData.Field(1).(*array.Int32).Int32Values()
+					for i := 0; i < a.Len(); i++ {
+						if a.IsNull(i) {
+							tb.AppendNull()
+							continue
+						}
+
+						loc := gosnowflake.Location(int(tzoffset[i]) - 1440)
+						tb.Append(arrow.Timestamp(time.Unix(epoch[i], 0).In(loc).UnixNano()))
+					}
+				} else {
+					epoch := structData.Field(0).(*array.Int64).Int64Values()
+					fraction := structData.Field(1).(*array.Int32).Int32Values()
+					tzoffset := structData.Field(2).(*array.Int32).Int32Values()
+					for i := 0; i < a.Len(); i++ {
+						if a.IsNull(i) {
+							tb.AppendNull()
+							continue
+						}
+
+						loc := gosnowflake.Location(int(tzoffset[i]) - 1440)
+						tb.Append(arrow.Timestamp(time.Unix(epoch[i], int64(fraction[i])).In(loc).UnixNano()))
+					}
+				}
+				return tb.NewArray(), nil
+			}
+		default:
+			transformers[i] = identCol
+		}
+
+		fields[i] = f
+	}
+
+	meta := sc.Metadata()
+	out := arrow.NewSchema(fields, &meta)
+	return out, getRecTransformer(out, transformers)
+}
+
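+// reader is an array.RecordReader over a Snowflake result set: each
+// result chunk is decoded by its own goroutine into a buffered channel,
+// and Next drains the channels in order so that records are still
+// yielded in result order.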
+type reader struct {
+	refCount   int64
+	schema     *arrow.Schema
+	chs        []chan arrow.Record
+	curChIndex int
+	rec        arrow.Record
+	err        error
+
+	cancelFn context.CancelFunc
+}
+
+func newRecordReader(ctx context.Context, alloc memory.Allocator, ld gosnowflake.ArrowStreamLoader, bufferSize int) (array.RecordReader, error) {
+	batches, err := ld.GetBatches()
+	if err != nil {
+		return nil, errToAdbcErr(adbc.StatusInternal, err)
+	}
+
+	ch := make(chan arrow.Record, bufferSize)
+	r, err := batches[0].GetStream(ctx)
+	if err != nil {
+		return nil, errToAdbcErr(adbc.StatusIO, err)
+	}
+
+	rr, err := ipc.NewReader(r, ipc.WithAllocator(alloc))
+	if err != nil {
+		return nil, adbc.Error{
+			Msg:  err.Error(),
+			Code: adbc.StatusInvalidState,
+		}
+	}
+
+	group, ctx := errgroup.WithContext(compute.WithAllocator(ctx, alloc))
+	ctx, cancelFn := context.WithCancel(ctx)
+
+	schema, recTransform := getTransformer(rr.Schema(), ld)
+
+	defer func() {
+		if err != nil {
+			close(ch)
+			cancelFn()
+		}
+	}()
+
+	group.Go(func() error {
+		defer rr.Release()
+		defer r.Close()
+		// when there are multiple batches this channel is not the last
+		// one, so close it here to let Next advance past it
+		if len(batches) > 1 {
+			defer close(ch)
+		}
+
+		for rr.Next() && ctx.Err() == nil {
+			// use a local err so this goroutine does not race on the
+			// err variable captured from the enclosing function
+			rec, err := recTransform(ctx, rr.Record())
+			if err != nil {
+				return err
+			}
+			ch <- rec
+		}
+		return rr.Err()
+	})
+
+	chs := make([]chan arrow.Record, len(batches))
+	chs[0] = ch
+	rdr := &reader{
+		refCount: 1,
+		chs:      chs,
+		err:      nil,
+		cancelFn: cancelFn,
+		schema:   schema,
+	}
+
+	lastChannelIndex := len(chs) - 1
+	for i, b := range batches[1:] {
+		batch, batchIdx := b, i+1
+		chs[batchIdx] = make(chan arrow.Record, bufferSize)
+		group.Go(func() error {
+			// close channels (except the last) so that Next can move on to the next channel properly
+			if batchIdx != lastChannelIndex {
+				defer close(chs[batchIdx])
+			}
+
+			rdr, err := batch.GetStream(ctx)
+			if err != nil {
+				return err
+			}
+			defer rdr.Close()
+
+			rr, err := ipc.NewReader(rdr, ipc.WithAllocator(alloc))
+			if err != nil {
+				return err
+			}
+			defer rr.Release()
+
+			for rr.Next() && ctx.Err() == nil {
+				rec := rr.Record()
+				rec, err = recTransform(ctx, rec)
+				if err != nil {
+					return err
+				}
+				chs[batchIdx] <- rec
+			}
+
+			return rr.Err()
+		})
+	}
+
+	go func() {
+		rdr.err = group.Wait()
+		// don't close the last channel until after the group is finished,
+		// so that Next() can only return false once reader.err has been set
+		close(chs[lastChannelIndex])
+	}()
+
+	return rdr, nil
+}
+
+func (r *reader) Schema() *arrow.Schema {
+	return r.schema
+}
+
+func (r *reader) Record() arrow.Record {
+	return r.rec
+}
+
+func (r *reader) Err() error {
+	return r.err
+}
+
+func (r *reader) Next() bool {
+	if r.rec != nil {
+		r.rec.Release()
+		r.rec = nil
+	}
+
+	if r.curChIndex >= len(r.chs) {
+		return false
+	}
+
+	var ok bool
+	for r.curChIndex < len(r.chs) {
+		if r.rec, ok = <-r.chs[r.curChIndex]; ok {
+			break
+		}
+		r.curChIndex++
+	}
+	return r.rec != nil
+}
+
+func (r *reader) Retain() {
+	atomic.AddInt64(&r.refCount, 1)
+}
+
+func (r *reader) Release() {
+	if atomic.AddInt64(&r.refCount, -1) == 0 {
+		if r.rec != nil {
+			r.rec.Release()
+		}
+		r.cancelFn()
+		for _, ch := range r.chs {
+			for rec := range ch {
+				rec.Release()
+			}
+		}
+	}
+}
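
For reference, a minimal consumption sketch (illustrative, not part of
the patch): the reader above implements array.RecordReader, so callers
drain it like any other reader. The drain helper and package name here
are hypothetical.

    package snippet

    import (
    	"fmt"

    	"github.com/apache/arrow/go/v12/arrow/array"
    )

    // drain consumes any array.RecordReader, such as the multi-channel
    // reader returned by newRecordReader. A record returned by Record()
    // is owned by the reader and is only valid until the next call to
    // Next().
    func drain(rdr array.RecordReader) error {
    	defer rdr.Release()
    	for rdr.Next() {
    		rec := rdr.Record()
    		fmt.Println(rec.NumRows(), "rows x", rec.NumCols(), "cols")
    	}
    	return rdr.Err()
    }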
diff --git a/go/adbc/driver/snowflake/statement.go b/go/adbc/driver/snowflake/statement.go
new file mode 100644
index 0000000..80f0f9c
--- /dev/null
+++ b/go/adbc/driver/snowflake/statement.go
@@ -0,0 +1,582 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package snowflake
+
+import (
+	"context"
+	"database/sql/driver"
+	"fmt"
+	"strconv"
+	"strings"
+
+	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow/go/v12/arrow"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/apache/arrow/go/v12/arrow/memory"
+	"github.com/snowflakedb/gosnowflake"
+	"golang.org/x/exp/constraints"
+)
+
+const (
+	OptionStatementQueueSize = "adbc.rpc.result_queue_size"
+)
+
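+// statement implements adbc.Statement for Snowflake on top of the
+// gosnowflake connection owned by its parent connection.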
+type statement struct {
+	cnxn      *cnxn
+	alloc     memory.Allocator
+	queueSize int
+
+	query       string
+	targetTable string
+	append      bool
+
+	bound      arrow.Record
+	streamBind array.RecordReader
+}
+
+// Close releases any relevant resources associated with this statement
+// and closes it (particularly if it is a prepared statement).
+//
+// A statement instance should not be used after Close is called.
+func (st *statement) Close() error {
+	if st.cnxn == nil {
+		return adbc.Error{
+			Msg:  "statement already closed",
+			Code: adbc.StatusInvalidState}
+	}
+
+	if st.bound != nil {
+		st.bound.Release()
+		st.bound = nil
+	} else if st.streamBind != nil {
+		st.streamBind.Release()
+		st.streamBind = nil
+	}
+	st.cnxn = nil
+	return nil
+}
+
+// SetOption sets a string option on this statement
+func (st *statement) SetOption(key string, val string) error {
+	switch key {
+	case adbc.OptionKeyIngestTargetTable:
+		st.query = ""
+		st.targetTable = val
+	case adbc.OptionKeyIngestMode:
+		switch val {
+		case adbc.OptionValueIngestModeAppend:
+			st.append = true
+		case adbc.OptionValueIngestModeCreate:
+			st.append = false
+		default:
+			return adbc.Error{
+				Msg:  fmt.Sprintf("invalid statement option %s=%s", key, val),
+				Code: adbc.StatusInvalidArgument,
+			}
+		}
+	default:
+		return adbc.Error{
+			Msg:  fmt.Sprintf("invalid statement option %s=%s", key, val),
+			Code: adbc.StatusInvalidArgument,
+		}
+	}
+	return nil
+}
+
+// SetSqlQuery sets the query string to be executed.
+//
+// The query can then be executed with any of the Execute methods.
+// For queries expected to be executed repeatedly, Prepare should be
+// called before execution.
+func (st *statement) SetSqlQuery(query string) error {
+	st.query = query
+	st.targetTable = ""
+	return nil
+}
+
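+// toSnowflakeType maps an Arrow type to the Snowflake column type used
+// when creating the target table for bulk ingestion; it returns "" for
+// types with no mapping.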
+func toSnowflakeType(dt arrow.DataType) string {
+	switch dt.ID() {
+	case arrow.EXTENSION:
+		return toSnowflakeType(dt.(arrow.ExtensionType).StorageType())
+	case arrow.DICTIONARY:
+		return toSnowflakeType(dt.(*arrow.DictionaryType).ValueType)
+	case arrow.RUN_END_ENCODED:
+		return toSnowflakeType(dt.(*arrow.RunEndEncodedType).Encoded())
+	case arrow.INT8, arrow.INT16, arrow.INT32, arrow.INT64,
+		arrow.UINT8, arrow.UINT16, arrow.UINT32, arrow.UINT64:
+		return "integer"
+	case arrow.FLOAT32, arrow.FLOAT16, arrow.FLOAT64:
+		return "double"
+	case arrow.DECIMAL, arrow.DECIMAL256:
+		dec := dt.(arrow.DecimalType)
+		return fmt.Sprintf("NUMERIC(%d,%d)", dec.GetPrecision(), dec.GetScale())
+	case arrow.STRING, arrow.LARGE_STRING:
+		return "text"
+	case arrow.BINARY, arrow.LARGE_BINARY:
+		return "binary"
+	case arrow.FIXED_SIZE_BINARY:
+		fsb := dt.(*arrow.FixedSizeBinaryType)
+		return fmt.Sprintf("binary(%d)", fsb.ByteWidth)
+	case arrow.BOOL:
+		return "boolean"
+	case arrow.TIME32, arrow.TIME64:
+		t := dt.(arrow.TemporalWithUnit)
+		prec := int(t.TimeUnit()) * 3
+		return fmt.Sprintf("time(%d)", prec)
+	case arrow.DATE32, arrow.DATE64:
+		return "date"
+	case arrow.TIMESTAMP:
+		ts := dt.(*arrow.TimestampType)
+		prec := int(ts.Unit) * 3
+		if ts.TimeZone == "" {
+			return fmt.Sprintf("timestamp_ntz(%d)", prec)
+		}
+		return fmt.Sprintf("timestamp_ltz(%d)", prec)
+	case arrow.DENSE_UNION, arrow.SPARSE_UNION:
+		return "variant"
+	case arrow.LIST, arrow.LARGE_LIST, arrow.FIXED_SIZE_LIST:
+		return "array"
+	case arrow.STRUCT, arrow.MAP:
+		return "object"
+	}
+
+	return ""
+}
+
+func (st *statement) initIngest(ctx context.Context) (string, error) {
+	var (
+		createBldr, insertBldr strings.Builder
+	)
+
+	createBldr.WriteString("CREATE TABLE ")
+	createBldr.WriteString(st.targetTable)
+	createBldr.WriteString(" (")
+
+	insertBldr.WriteString("INSERT INTO ")
+	insertBldr.WriteString(st.targetTable)
+	insertBldr.WriteString(" VALUES (")
+
+	var schema *arrow.Schema
+	if st.bound != nil {
+		schema = st.bound.Schema()
+	} else {
+		schema = st.streamBind.Schema()
+	}
+
+	for i, f := range schema.Fields() {
+		if i != 0 {
+			insertBldr.WriteString(", ")
+			createBldr.WriteString(", ")
+		}
+
+		createBldr.WriteString(strconv.Quote(f.Name))
+		createBldr.WriteString(" ")
+		ty := toSnowflakeType(f.Type)
+		if ty == "" {
+			return "", adbc.Error{
+				Msg:  fmt.Sprintf("unimplemented type conversion for field %s, arrow type: %s", f.Name, f.Type),
+				Code: adbc.StatusNotImplemented,
+			}
+		}
+
+		createBldr.WriteString(ty)
+		if !f.Nullable {
+			createBldr.WriteString(" NOT NULL")
+		}
+
+		insertBldr.WriteString("?")
+	}
+
+	createBldr.WriteString(")")
+	insertBldr.WriteString(")")
+
+	if !st.append {
+		// create the table!
+		createQuery := createBldr.String()
+		_, err := st.cnxn.cn.ExecContext(ctx, createQuery, nil)
+		if err != nil {
+			return "", errToAdbcErr(adbc.StatusInternal, err)
+		}
+	}
+
+	return insertBldr.String(), nil
+}
+
+type nativeArrowArr[T string | []byte] interface {
+	arrow.Array
+	Value(int) T
+}
+
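+// convToArr converts a string or binary column into a scalar (for a
+// single-row bind) or a gosnowflake.Array of values.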
+func convToArr[T string | []byte](arr nativeArrowArr[T]) interface{} {
+	if arr.Len() == 1 {
+		if arr.IsNull(0) {
+			return nil
+		}
+
+		return arr.Value(0)
+	}
+
+	v := make([]interface{}, arr.Len())
+	for i := 0; i < arr.Len(); i++ {
+		if arr.IsNull(i) {
+			continue
+		}
+		v[i] = arr.Value(i)
+	}
+	return gosnowflake.Array(&v)
+}
+
+// convMarshal stringifies a column that the snowflake driver has no
+// native binding for, as a scalar for single-row binds or as a
+// gosnowflake.Array otherwise.
+func convMarshal(arr arrow.Array) interface{} {
+	if arr.Len() == 1 {
+		if arr.IsNull(0) {
+			return nil
+		}
+		return arr.ValueStr(0)
+	}
+
+	v := make([]interface{}, arr.Len())
+	for i := 0; i < arr.Len(); i++ {
+		if arr.IsNull(i) {
+			continue
+		}
+		v[i] = arr.ValueStr(i)
+	}
+	return gosnowflake.Array(&v)
+}
+
+// The snowflake driver's parameter bindings only support a specific set
+// of types (int/int32/int64/float64/float32/bool/string/byte/time), so
+// we have to cast anything else appropriately.
+func convToSlice[T, O constraints.Integer | constraints.Float](arr arrow.Array, vals []T) interface{} {
+	if arr.Len() == 1 {
+		if arr.IsNull(0) {
+			return nil
+		}
+
+		return vals[0]
+	}
+
+	out := make([]interface{}, arr.Len())
+	for i, v := range vals {
+		if arr.IsNull(i) {
+			continue
+		}
+		out[i] = O(v)
+	}
+	return gosnowflake.Array(&out)
+}
+
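+// getQueryArg converts a bound Arrow column into a value the snowflake
+// driver can bind: a scalar for single-row binds, widening integer and
+// float slices as needed, or a gosnowflake.Array otherwise.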
+func getQueryArg(arr arrow.Array) interface{} {
+	switch arr := arr.(type) {
+	case *array.Int8:
+		v := arr.Int8Values()
+		return convToSlice[int8, int32](arr, v)
+	case *array.Uint8:
+		v := arr.Uint8Values()
+		return convToSlice[uint8, int32](arr, v)
+	case *array.Int16:
+		v := arr.Int16Values()
+		return convToSlice[int16, int32](arr, v)
+	case *array.Uint16:
+		v := arr.Uint16Values()
+		return convToSlice[uint16, int32](arr, v)
+	case *array.Int32:
+		v := arr.Int32Values()
+		return convToSlice[int32, int32](arr, v)
+	case *array.Uint32:
+		v := arr.Uint32Values()
+		return convToSlice[uint32, int64](arr, v)
+	case *array.Int64:
+		v := arr.Int64Values()
+		return convToSlice[int64, int64](arr, v)
+	case *array.Uint64:
+		v := arr.Uint64Values()
+		return convToSlice[uint64, int64](arr, v)
+	case *array.Float32:
+		v := arr.Float32Values()
+		return convToSlice[float32, float64](arr, v)
+	case *array.Float64:
+		v := arr.Float64Values()
+		return convToSlice[float64, float64](arr, v)
+	case *array.LargeBinary:
+		return convToArr[[]byte](arr)
+	case *array.Binary:
+		return convToArr[[]byte](arr)
+	case *array.LargeString:
+		return convToArr[string](arr)
+	case *array.String:
+		return convToArr[string](arr)
+	default:
+		// default convert to array of strings and pass to snowflake driver
+		// not the most efficient, but snowflake doesn't really give a better
+		// route currently short of writing everything out to a Parquet file
+		// and then uploading that (which might be preferable)
+		return convMarshal(arr)
+	}
+}
+
+func (st *statement) executeIngest(ctx context.Context) (int64, error) {
+	if st.streamBind == nil && st.bound == nil {
+		return -1, adbc.Error{
+			Msg:  "must call Bind before bulk ingestion",
+			Code: adbc.StatusInvalidState,
+		}
+	}
+
+	insertQuery, err := st.initIngest(ctx)
+	if err != nil {
+		return -1, err
+	}
+
+	// if the ingestion is large enough it might make more sense to
+	// write this out to a temporary file / stage / etc. and use
+	// the snowflake bulk loader that way.
+	//
+	// on the other hand, according to the documentation,
+	// https://pkg.go.dev/github.com/snowflakedb/gosnowflake#hdr-Batch_Inserts_and_Binding_Parameters
+	// the snowflake driver should already be doing this internally.
+
+	var n int64
+	exec := func(rec arrow.Record, args []driver.NamedValue) error {
+		for i, c := range rec.Columns() {
+			args[i].Ordinal = i
+			args[i].Value = getQueryArg(c)
+		}
+
+		r, err := st.cnxn.cn.ExecContext(ctx, insertQuery, args)
+		if err != nil {
+			return errToAdbcErr(adbc.StatusInternal, err)
+		}
+
+		rows, err := r.RowsAffected()
+		if err == nil {
+			n += rows
+		}
+		return nil
+	}
+
+	if st.bound != nil {
+		defer func() {
+			st.bound.Release()
+			st.bound = nil
+		}()
+		args := make([]driver.NamedValue, len(st.bound.Schema().Fields()))
+		return n, exec(st.bound, args)
+	}
+
+	defer func() {
+		st.streamBind.Release()
+		st.streamBind = nil
+	}()
+	args := make([]driver.NamedValue, len(st.streamBind.Schema().Fields()))
+	for st.streamBind.Next() {
+		rec := st.streamBind.Record()
+		if err := exec(rec, args); err != nil {
+			return n, err
+		}
+	}
+
+	return n, nil
+}
+
+// ExecuteQuery executes the current query or prepared statement
+// and returns a RecordReader for the results along with the number
+// of rows affected, if known; otherwise the row count will be -1.
+//
+// This invalidates any prior result sets on this statement.
+func (st *statement) ExecuteQuery(ctx context.Context) (array.RecordReader, int64, error) {
+	if st.targetTable != "" {
+		n, err := st.executeIngest(ctx)
+		return nil, n, err
+	}
+
+	if st.query == "" {
+		return nil, -1, adbc.Error{
+			Msg:  "cannot execute without a query",
+			Code: adbc.StatusInvalidState,
+		}
+	}
+
+	// for a bound stream reader we'd need to implement something to
+	// concatenate RecordReaders, which doesn't exist yet; let's put
+	// that off for now.
+	if st.streamBind != nil || st.bound != nil {
+		return nil, -1, adbc.Error{
+			Msg:  "executing non-bulk ingest with bound params not yet implemented",
+			Code: adbc.StatusNotImplemented,
+		}
+	}
+
+	loader, err := st.cnxn.cn.QueryArrowStream(ctx, st.query)
+	if err != nil {
+		return nil, -1, errToAdbcErr(adbc.StatusInternal, err)
+	}
+
+	rdr, err := newRecordReader(ctx, st.alloc, loader, st.queueSize)
+	nrec := loader.TotalRows()
+	return rdr, nrec, err
+}
+
+// ExecuteUpdate executes a statement that does not generate a result
+// set. It returns the number of rows affected if known, otherwise -1.
+func (st *statement) ExecuteUpdate(ctx context.Context) (int64, error) {
+	if st.targetTable != "" {
+		return st.executeIngest(ctx)
+	}
+
+	if st.query == "" {
+		return -1, adbc.Error{
+			Msg:  "cannot execute without a query",
+			Code: adbc.StatusInvalidState,
+		}
+	}
+
+	r, err := st.cnxn.cn.ExecContext(ctx, st.query, nil)
+	if err != nil {
+		return -1, errToAdbcErr(adbc.StatusIO, err)
+	}
+
+	n, err := r.RowsAffected()
+	if err != nil {
+		n = -1
+	}
+
+	return n, nil
+}
+
+// Prepare turns this statement into a prepared statement to be executed
+// multiple times. This invalidates any prior result sets.
+func (st *statement) Prepare(_ context.Context) error {
+	if st.query == "" {
+		return adbc.Error{
+			Code: adbc.StatusInvalidState,
+			Msg:  "cannot prepare statement with no query",
+		}
+	}
+	// snowflake doesn't provide a "Prepare" api, this is a no-op
+	return nil
+}
+
+// SetSubstraitPlan allows setting a serialized Substrait execution
+// plan into the query or for querying Substrait-related metadata.
+//
+// Drivers are not required to support both SQL and Substrait semantics.
+// If they do, it may be via converting between representations internally.
+//
+// Like SetSqlQuery, after this is called the query can be executed
+// using any of the Execute methods. If the query is expected to be
+// executed repeatedly, Prepare should be called first on the statement.
+func (st *statement) SetSubstraitPlan(plan []byte) error {
+	return adbc.Error{
+		Msg:  "Snowflake does not support Substrait plans",
+		Code: adbc.StatusNotImplemented,
+	}
+}
+
+// Bind uses an arrow record batch to bind parameters to the query.
+//
+// This can be used for bulk inserts or for prepared statements.
+// The driver will call release on the passed in Record when it is done,
+// but it may not do this until the statement is closed or another
+// record is bound.
+func (st *statement) Bind(_ context.Context, values arrow.Record) error {
+	if st.streamBind != nil {
+		st.streamBind.Release()
+		st.streamBind = nil
+	} else if st.bound != nil {
+		st.bound.Release()
+		st.bound = nil
+	}
+
+	st.bound = values
+	if st.bound != nil {
+		st.bound.Retain()
+	}
+	return nil
+}
+
+// BindStream uses a record batch stream to bind parameters for this
+// query. This can be used for bulk inserts or prepared statements.
+//
+// The driver will call Release on the record reader, but may not do this
+// until Close is called.
+func (st *statement) BindStream(_ context.Context, stream array.RecordReader) error {
+	if st.streamBind != nil {
+		st.streamBind.Release()
+		st.streamBind = nil
+	} else if st.bound != nil {
+		st.bound.Release()
+		st.bound = nil
+	}
+
+	st.streamBind = stream
+	if st.streamBind != nil {
+		st.streamBind.Retain()
+	}
+	return nil
+}
+
+// GetParameterSchema returns an Arrow schema representation of
+// the expected parameters to be bound.
+//
+// This retrieves an Arrow Schema describing the number, names, and
+// types of the parameters in a parameterized statement. The fields
+// of the schema should be in order of the ordinal position of the
+// parameters; named parameters should appear only once.
+//
+// If the parameter does not have a name, or a name cannot be determined,
+// the name of the corresponding field in the schema will be an empty
+// string. If the type cannot be determined, the type of the corresponding
+// field will be NA (NullType).
+//
+// This should be called only after calling Prepare.
+//
+// This should return an error with StatusNotImplemented if the schema
+// cannot be determined.
+func (st *statement) GetParameterSchema() (*arrow.Schema, error) {
+	// snowflake's API does not provide any way to determine the schema
+	return nil, adbc.Error{
+		Msg:  "GetParameterSchema is not supported by the Snowflake driver",
+		Code: adbc.StatusNotImplemented,
+	}
+}
+
+// ExecutePartitions executes the current statement and gets the results
+// as a partitioned result set.
+//
+// It returns the Schema of the result set, the collection of partition
+// descriptors and the number of rows affected, if known. If unknown,
+// the number of rows affected will be -1.
+//
+// If the driver does not support partitioned results, this will return
+// an error with a StatusNotImplemented code.
+func (st *statement) ExecutePartitions(ctx context.Context) (*arrow.Schema, adbc.Partitions, int64, error) {
+	if st.query == "" {
+		return nil, adbc.Partitions{}, -1, adbc.Error{
+			Msg:  "cannot execute without a query",
+			Code: adbc.StatusInvalidState,
+		}
+	}
+
+	// snowflake partitioned results are not currently portable enough to
+	// satisfy the requirements of this function. At least not what is
+	// returned from the snowflake driver.
+	return nil, adbc.Partitions{}, -1, adbc.Error{
+		Msg:  "ExecutePartitions not implemented for Snowflake",
+		Code: adbc.StatusNotImplemented,
+	}
+}
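
To show how the statement pieces fit together, here is a minimal
bulk-ingest sketch (illustrative, not part of the patch). It assumes a
reachable Snowflake account whose DSN is supplied via a hypothetical
SNOWFLAKE_URI environment variable, and uses an illustrative table name.

    package main

    import (
    	"context"
    	"log"
    	"os"

    	"github.com/apache/arrow-adbc/go/adbc"
    	"github.com/apache/arrow-adbc/go/adbc/driver/snowflake"
    	"github.com/apache/arrow/go/v12/arrow"
    	"github.com/apache/arrow/go/v12/arrow/array"
    	"github.com/apache/arrow/go/v12/arrow/memory"
    )

    func main() {
    	ctx := context.Background()

    	drv := snowflake.Driver{Alloc: memory.DefaultAllocator}
    	db, err := drv.NewDatabase(map[string]string{
    		// the driver parses this as a gosnowflake DSN
    		adbc.OptionKeyURI: os.Getenv("SNOWFLAKE_URI"),
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	cnxn, err := db.Open(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cnxn.Close()

    	stmt, err := cnxn.NewStatement()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer stmt.Close()

    	// build a single-column record to ingest
    	bldr := array.NewInt64Builder(memory.DefaultAllocator)
    	defer bldr.Release()
    	bldr.AppendValues([]int64{1, 2, 3}, nil)
    	col := bldr.NewArray()
    	defer col.Release()

    	sc := arrow.NewSchema([]arrow.Field{{Name: "ID", Type: arrow.PrimitiveTypes.Int64}}, nil)
    	rec := array.NewRecord(sc, []arrow.Array{col}, 3)
    	defer rec.Release()

    	// setting the target table routes execution through executeIngest,
    	// which creates the table (create mode) and INSERTs the bound rows
    	if err := stmt.SetOption(adbc.OptionKeyIngestTargetTable, "ADBC_DEMO"); err != nil {
    		log.Fatal(err)
    	}
    	if err := stmt.Bind(ctx, rec); err != nil {
    		log.Fatal(err)
    	}

    	n, err := stmt.ExecuteUpdate(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("ingested %d rows", n)
    }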
diff --git a/go/adbc/go.mod b/go/adbc/go.mod
index b428162..6116097 100644
--- a/go/adbc/go.mod
+++ b/go/adbc/go.mod
@@ -22,51 +22,85 @@ go 1.18
 require (
 	github.com/apache/arrow/go/v12 v12.0.0-20230421000340-388f3a88c647
 	github.com/bluele/gcache v0.0.2
-	github.com/stretchr/testify v1.8.1
-	golang.org/x/exp v0.0.0-20230206171751-46f607a40771
+	github.com/snowflakedb/gosnowflake v1.6.21-0.20230427202326-79f2d00be7ac
+	github.com/stretchr/testify v1.8.2
+	golang.org/x/exp v0.0.0-20230420155640-133eef4313cb
 	golang.org/x/sync v0.1.0
-	golang.org/x/tools v0.6.0
-	google.golang.org/grpc v1.53.0
-	google.golang.org/protobuf v1.28.1
+	golang.org/x/tools v0.8.0
+	google.golang.org/grpc v1.54.0
+	google.golang.org/protobuf v1.30.0
 )
 
 require (
-	github.com/andybalholm/brotli v1.0.4 // indirect
+	github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 // indirect
+	github.com/99designs/keyring v1.2.2 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/azcore v1.5.0 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0 // indirect
+	github.com/JohnCGriffin/overflow v0.0.0-20211019200055-46fa312c352c // indirect
+	github.com/andybalholm/brotli v1.0.5 // indirect
+	github.com/apache/arrow/go/v11 v11.0.0 // indirect
 	github.com/apache/thrift v0.17.0 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.17.8 // indirect
+	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 // indirect
+	github.com/aws/aws-sdk-go-v2/credentials v1.13.20 // indirect
+	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.63 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.32 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.26 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.24 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.11 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.27 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.26 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.1 // indirect
+	github.com/aws/aws-sdk-go-v2/service/s3 v1.32.0 // indirect
+	github.com/aws/smithy-go v1.13.5 // indirect
+	github.com/danieljoos/wincred v1.1.2 // indirect
 	github.com/davecgh/go-spew v1.1.1 // indirect
 	github.com/dustin/go-humanize v1.0.1 // indirect
-	github.com/goccy/go-json v0.10.0 // indirect
-	github.com/golang/protobuf v1.5.2 // indirect
+	github.com/dvsekhvalnov/jose2go v1.5.0 // indirect
+	github.com/form3tech-oss/jwt-go v3.2.5+incompatible // indirect
+	github.com/gabriel-vasile/mimetype v1.4.2 // indirect
+	github.com/goccy/go-json v0.10.2 // indirect
+	github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 // indirect
+	github.com/golang/protobuf v1.5.3 // indirect
 	github.com/golang/snappy v0.0.4 // indirect
-	github.com/google/flatbuffers v23.1.21+incompatible // indirect
+	github.com/google/flatbuffers v23.3.3+incompatible // indirect
 	github.com/google/uuid v1.3.0 // indirect
+	github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c // indirect
+	github.com/jmespath/go-jmespath v0.4.0 // indirect
 	github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
 	github.com/klauspost/asmfmt v1.3.2 // indirect
-	github.com/klauspost/compress v1.15.15 // indirect
-	github.com/klauspost/cpuid/v2 v2.2.3 // indirect
+	github.com/klauspost/compress v1.16.5 // indirect
+	github.com/klauspost/cpuid/v2 v2.2.4 // indirect
 	github.com/kr/text v0.2.0 // indirect
-	github.com/mattn/go-isatty v0.0.17 // indirect
+	github.com/mattn/go-isatty v0.0.18 // indirect
 	github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8 // indirect
 	github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3 // indirect
+	github.com/mtibben/percent v0.2.1 // indirect
 	github.com/pierrec/lz4/v4 v4.1.17 // indirect
+	github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
 	github.com/pmezard/go-difflib v1.0.0 // indirect
 	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
+	github.com/sirupsen/logrus v1.9.0 // indirect
 	github.com/zeebo/xxh3 v1.0.2 // indirect
-	golang.org/x/mod v0.8.0 // indirect
-	golang.org/x/net v0.7.0 // indirect
-	golang.org/x/sys v0.5.0 // indirect
-	golang.org/x/text v0.7.0 // indirect
+	golang.org/x/crypto v0.8.0 // indirect
+	golang.org/x/mod v0.10.0 // indirect
+	golang.org/x/net v0.9.0 // indirect
+	golang.org/x/sys v0.7.0 // indirect
+	golang.org/x/term v0.7.0 // indirect
+	golang.org/x/text v0.9.0 // indirect
 	golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
-	google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc // indirect
+	gonum.org/v1/gonum v0.12.0 // indirect
+	google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
-	lukechampine.com/uint128 v1.2.0 // indirect
+	lukechampine.com/uint128 v1.3.0 // indirect
 	modernc.org/cc/v3 v3.40.0 // indirect
 	modernc.org/ccgo/v3 v3.16.13 // indirect
-	modernc.org/libc v1.22.2 // indirect
+	modernc.org/libc v1.22.4 // indirect
 	modernc.org/mathutil v1.5.0 // indirect
 	modernc.org/memory v1.5.0 // indirect
 	modernc.org/opt v0.1.3 // indirect
-	modernc.org/sqlite v1.20.4 // indirect
+	modernc.org/sqlite v1.21.2 // indirect
 	modernc.org/strutil v1.1.3 // indirect
 	modernc.org/token v1.1.0 // indirect
 )
diff --git a/go/adbc/go.sum b/go/adbc/go.sum
index 3fd31dd..3284b02 100644
--- a/go/adbc/go.sum
+++ b/go/adbc/go.sum
@@ -1,123 +1,219 @@
+github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 h1:/vQbFIOMbk2FiG/kXiLl8BRyzTWDw7gX/Hz7Dd5eDMs=
+github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4/go.mod h1:hN7oaIRCjzsZ2dE+yG5k+rsdt3qcwykqK6HVGcKwsw4=
+github.com/99designs/keyring v1.2.2 h1:pZd3neh/EmUzWONb35LxQfvuY7kiSXAq3HQd97+XBn0=
+github.com/99designs/keyring v1.2.2/go.mod h1:wes/FrByc8j7lFOAGLGSNEg8f/PaI3cgTBqhFkHUrPk=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.5.0 h1:xGLAFFd9D3iLGxYiUGPdITSzsFmU1K8VtfuUHWAoN7M=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.5.0/go.mod h1:bjGvMhVMb+EEm3VRNQawDMUyMMjo+S5ewNjflkep/0Q=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0 h1:QkAcEIAKbNL4KoFr4SathZPhDhF4mVwpBMFlYjyAqy8=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 h1:sXr+ck84g/ZlZUOZiNELInmMgOsuGwdjjVkEIde0OtY=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0 h1:u/LLAOFgsMv7HmNL4Qufg58y+qElGOt5qv0z1mURkRY=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0/go.mod h1:2e8rMJtl2+2j+HXbTBwnyGpm5Nou7KhvSfxOq8JpTag=
+github.com/AzureAD/microsoft-authentication-library-for-go v0.5.1 h1:BWe8a+f/t+7KY7zH2mqygeUD0t8hNFXe08p1Pb3/jKE=
 github.com/JohnCGriffin/overflow v0.0.0-20211019200055-46fa312c352c h1:RGWPOewvKIROun94nF7v2cua9qP+thov/7M50KEoeSU=
-github.com/andybalholm/brotli v1.0.4 h1:V7DdXeJtZscaqfNuAdSRuRFzuiKlHSC/Zh3zl9qY3JY=
-github.com/andybalholm/brotli v1.0.4/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
+github.com/JohnCGriffin/overflow v0.0.0-20211019200055-46fa312c352c/go.mod h1:X0CRv0ky0k6m906ixxpzmDRLvX58TFUKS2eePweuyxk=
+github.com/andybalholm/brotli v1.0.5 h1:8uQZIdzKmjc/iuPu7O2ioW48L81FgatrcpfFmiq/cCs=
+github.com/andybalholm/brotli v1.0.5/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
+github.com/apache/arrow/go/v11 v11.0.0 h1:hqauxvFQxww+0mEU/2XHG6LT7eZternCZq+A5Yly2uM=
+github.com/apache/arrow/go/v11 v11.0.0/go.mod h1:Eg5OsL5H+e299f7u5ssuXsuHQVEGC4xei5aX110hRiI=
 github.com/apache/arrow/go/v12 v12.0.0-20230421000340-388f3a88c647 h1:qsBSonbDQRwj8HyUeD/NSaA0e2bT4f3kgcqkSqVZzdo=
 github.com/apache/arrow/go/v12 v12.0.0-20230421000340-388f3a88c647/go.mod h1:d+tV/eHZZ7Dz7RPrFKtPK02tpr+c9/PEd/zm8mDS9Vg=
 github.com/apache/thrift v0.17.0 h1:cMd2aj52n+8VoAtvSvLn4kDC3aZ6IAkBuqWQ2IDu7wo=
 github.com/apache/thrift v0.17.0/go.mod h1:OLxhMRJxomX+1I/KUw03qoV3mMz16BwaKI+d4fPBx7Q=
+github.com/aws/aws-sdk-go-v2 v1.17.8 h1:GMupCNNI7FARX27L7GjCJM8NgivWbRgpjNI/hOQjFS8=
+github.com/aws/aws-sdk-go-v2 v1.17.8/go.mod h1:uzbQtefpm44goOPmdKyAlXSNcwlRgF3ePWVW6EtJvvw=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 h1:dK82zF6kkPeCo8J1e+tGx4JdvDIQzj7ygIoLg8WMuGs=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10/go.mod h1:VeTZetY5KRJLuD/7fkQXMU6Mw7H5m/KP2J5Iy9osMno=
+github.com/aws/aws-sdk-go-v2/config v1.18.21 h1:ENTXWKwE8b9YXgQCsruGLhvA9bhg+RqAsL9XEMEsa2c=
+github.com/aws/aws-sdk-go-v2/config v1.18.21/go.mod h1:+jPQiVPz1diRnjj6VGqWcLK6EzNmQ42l7J3OqGTLsSY=
+github.com/aws/aws-sdk-go-v2/credentials v1.13.20 h1:oZCEFcrMppP/CNiS8myzv9JgOzq2s0d3v3MXYil/mxQ=
+github.com/aws/aws-sdk-go-v2/credentials v1.13.20/go.mod h1:xtZnXErtbZ8YGXC3+8WfajpMBn5Ga/3ojZdxHq6iI8o=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.2 h1:jOzQAesnBFDmz93feqKnsTHsXrlwWORNZMFHMV+WLFU=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.2/go.mod h1:cDh1p6XkSGSwSRIArWRc6+UqAQ7x4alQ0QfpVR6f+co=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.63 h1:3TZm9tzOXOX/aPAxBh3+LtMZtjHMzNO/wQ41+HFUzIA=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.63/go.mod h1:M1piIHmVL5lJ6OomRJdZtemge4TeGw6sPPsQzIIjHWw=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.32 h1:dpbVNUjczQ8Ae3QKHbpHBpfvaVkRdesxpTOe9pTouhU=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.32/go.mod h1:RudqOgadTWdcS3t/erPQo24pcVEoYyqj/kKW5Vya21I=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.26 h1:QH2kOS3Ht7x+u0gHCh06CXL/h6G8LQJFpZfFBYBNboo=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.26/go.mod h1:vq86l7956VgFr0/FWQ2BWnK07QC3WYsepKzy33qqY5U=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.3.33 h1:HbH1VjUgrCdLJ+4lnnuLI4iVNRvBbBELGaJ5f69ClA8=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.3.33/go.mod h1:zG2FcwjQarWaqXSCGpgcr3RSjZ6dHGguZSppUL0XR7Q=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.24 h1:zsg+5ouVLLbePknVZlUMm1ptwyQLkjjLMWnN+kVs5dA=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.24/go.mod h1:+fFaIjycTmpV6hjmPTbyU9Kp5MI/lA+bbibcAtmlhYA=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.11 h1:y2+VQzC6Zh2ojtV2LoC0MNwHWc6qXv/j2vrQtlftkdA=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.11/go.mod h1:iV4q2hsqtNECrfmlXyord9u4zyuFEJX9eLgLpSPzWA8=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.27 h1:qIw7Hg5eJEc1uSxg3hRwAthPAO7NeOd4dPxhaTi0yB0=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.27/go.mod h1:Zz0kvhcSlu3NX4XJkaGgdjaa+u7a9LYuy8JKxA5v3RM=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.26 h1:uUt4XctZLhl9wBE1L8lobU3bVN8SNUP7T+olb0bWBO4=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.26/go.mod h1:Bd4C/4PkVGubtNe5iMXu5BNnaBi/9t/UsFspPt4ram8=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.1 h1:lRWp3bNu5wy0X3a8GS42JvZFlv++AKsMdzEnoiVJrkg=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.1/go.mod h1:VXBHSxdN46bsJrkniN68psSwbyBKsazQfU2yX/iSDso=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.32.0 h1:NAc8WQsVQ3+kz3rU619mlz8NcbpZI6FVJHQfH33QK0g=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.32.0/go.mod h1:aSl9/LJltSz1cVusiR/Mu8tvI4Sv/5w/WWrJmmkNii0=
+github.com/aws/aws-sdk-go-v2/service/sso v1.12.8 h1:5cb3D6xb006bPTqEfCNaEA6PPEfBXxxy4NNeX/44kGk=
+github.com/aws/aws-sdk-go-v2/service/sso v1.12.8/go.mod h1:GNIveDnP+aE3jujyUSH5aZ/rktsTM5EvtKnCqBZawdw=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.8 h1:NZaj0ngZMzsubWZbrEFSB4rgSQRbFq38Sd6KBxHuOIU=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.8/go.mod h1:44qFP1g7pfd+U+sQHLPalAPKnyfTZjJsYR4xIwsJy5o=
+github.com/aws/aws-sdk-go-v2/service/sts v1.18.9 h1:Qf1aWwnsNkyAoqDqmdM3nHwN78XQjec27LjM6b9vyfI=
+github.com/aws/aws-sdk-go-v2/service/sts v1.18.9/go.mod h1:yyW88BEPXA2fGFyI2KCcZC3dNpiT0CZAHaF+i656/tQ=
+github.com/aws/smithy-go v1.13.5 h1:hgz0X/DX0dGqTYpGALqXJoRKRj5oQ7150i5FdTePzO8=
+github.com/aws/smithy-go v1.13.5/go.mod h1:Tg+OJXh4MB2R/uN61Ko2f6hTZwB/ZYGOtib8J3gBHzA=
 github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
 github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
 github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
+github.com/danieljoos/wincred v1.1.2 h1:QLdCxFs1/Yl4zduvBdcHB8goaYk9RARS2SgLLRuAyr0=
+github.com/danieljoos/wincred v1.1.2/go.mod h1:GijpziifJoIBfYh+S7BbkdUTU4LfM+QnGqR5Vl2tAx0=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dnaeon/go-vcr v1.1.0 h1:ReYa/UBrRyQdant9B4fNHGoCNKw6qh6P0fsdGmZpR7c=
 github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
 github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
-github.com/goccy/go-json v0.10.0 h1:mXKd9Qw4NuzShiRlOXKews24ufknHO7gx30lsDyokKA=
-github.com/goccy/go-json v0.10.0/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
+github.com/dvsekhvalnov/jose2go v1.5.0 h1:3j8ya4Z4kMCwT5nXIKFSV84YS+HdqSSO0VsTQxaLAeM=
+github.com/dvsekhvalnov/jose2go v1.5.0/go.mod h1:QsHjhyTlD/lAVqn/NSbVZmSCGeDehTB/mPZadG+mhXU=
+github.com/form3tech-oss/jwt-go v3.2.5+incompatible h1:/l4kBbb4/vGSsdtB5nUe8L7B9mImVMaBPw9L/0TBHU8=
+github.com/form3tech-oss/jwt-go v3.2.5+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
+github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
+github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
+github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
+github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
+github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 h1:ZpnhV/YsD2/4cESfV5+Hoeu/iUR3ruzNvZ+yQfO03a0=
+github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
+github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=
 github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
-github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
+github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
 github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
 github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/google/flatbuffers v23.1.21+incompatible h1:bUqzx/MXCDxuS0hRJL2EfjyZL3uQrPbMocUa8zGqsTA=
-github.com/google/flatbuffers v23.1.21+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
+github.com/google/flatbuffers v23.3.3+incompatible h1:5PJI/WbJkaMTvpGxsHVKG/LurN/KnWXNyGpwSCDgen0=
+github.com/google/flatbuffers v23.3.3+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
 github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
 github.com/google/pprof v0.0.0-20221118152302-e6195bd50e26 h1:Xim43kblpZXfIBQsbuBVKCudVG457BR2GZFIz3uw3hQ=
 github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
 github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c h1:6rhixN/i8ZofjG1Y75iExal34USq5p+wiN1tpie8IrU=
+github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c/go.mod h1:NMPJylDgVpX0MLRlPy15sqSwOFv/U1GZ2m21JhFfek0=
+github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
+github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
 github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
 github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
 github.com/klauspost/asmfmt v1.3.2 h1:4Ri7ox3EwapiOjCki+hw14RyKk201CN4rzyCJRFLpK4=
 github.com/klauspost/asmfmt v1.3.2/go.mod h1:AG8TuvYojzulgDAMCnYn50l/5QV3Bs/tp6j0HLHbNSE=
-github.com/klauspost/compress v1.15.15 h1:EF27CXIuDsYJ6mmvtBRlEuB2UVOqHG1tAXgZ7yIO+lw=
-github.com/klauspost/compress v1.15.15/go.mod h1:ZcK2JAFqKOpnBlxcLsJzYfrS9X1akm9fHZNnD9+Vo/4=
-github.com/klauspost/cpuid/v2 v2.2.3 h1:sxCkb+qR91z4vsqw4vGGZlDgPz3G7gjaLyK3V8y70BU=
-github.com/klauspost/cpuid/v2 v2.2.3/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
-github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
+github.com/klauspost/compress v1.16.5 h1:IFV2oUNUzZaz+XyusxpLzpzS8Pt5rh0Z16For/djlyI=
+github.com/klauspost/compress v1.16.5/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
+github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
+github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
+github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
-github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
-github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
-github.com/mattn/go-sqlite3 v1.14.15 h1:vfoHhTN1af61xCRSWzFIWzx2YskyMTwHLrExkBOjvxI=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/mattn/go-isatty v0.0.18 h1:DOKFKCQ7FNG2L1rbrmstDN4QVRdS89Nkh85u68Uwp98=
+github.com/mattn/go-isatty v0.0.18/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-sqlite3 v1.14.16 h1:yOQRA0RpS5PFz/oikGwBEqvAWhWg5ufRz4ETLjwpU1Y=
 github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8 h1:AMFGa4R4MiIpspGNG7Z948v4n35fFGB3RR3G/ry4FWs=
 github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcsrzbxHt8iiaC+zU4b1ylILSosueou12R++wfY=
 github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3 h1:+n/aFZefKZp7spd8DFdX7uMikMLXX4oubIzJF4kv/wI=
 github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3/go.mod h1:RagcQ7I8IeTMnF8JTXieKnO4Z6JCsikNEzj0DwauVzE=
+github.com/mtibben/percent v0.2.1 h1:5gssi8Nqo8QU/r2pynCm+hBQHpkB/uNK7BJCFogWdzs=
+github.com/mtibben/percent v0.2.1/go.mod h1:KG9uO+SZkUp+VkRHsCdYQV3XSZrrSpR3O9ibNBTZrns=
+github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
 github.com/pierrec/lz4/v4 v4.1.17 h1:kV4Ip+/hUBC+8T6+2EgburRtkE9ef4nbY3f4dFhGjMc=
 github.com/pierrec/lz4/v4 v4.1.17/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
+github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 h1:KoWmjvw+nsYOo29YJK9vDA65RGE3NrOnUtO7a+RF9HU=
+github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
 github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
+github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
+github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/snowflakedb/gosnowflake v1.6.21-0.20230427202326-79f2d00be7ac h1:6oD4r5fULxuF32shPaEyZQH3ey+Nhh/Z8s3MGrsmB70=
+github.com/snowflakedb/gosnowflake v1.6.21-0.20230427202326-79f2d00be7ac/go.mod h1:BD4pgch4dCUCi29r6japI5W9plp38ys+k6Hm2ujHdAI=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
 github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
 github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
-github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
-github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
+github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
 github.com/zeebo/assert v1.3.0 h1:g7C04CbJuIDKNPFHmsk4hwZDO5O+kntRxzaUoNXj+IQ=
 github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
 github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
-golang.org/x/exp v0.0.0-20230206171751-46f607a40771 h1:xP7rWLUr1e1n2xkK5YB4LI0hPEy3LJC6Wk+D4pGlOJg=
-golang.org/x/exp v0.0.0-20230206171751-46f607a40771/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
-golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8=
-golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
-golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
-golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ=
+golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
+golang.org/x/exp v0.0.0-20230420155640-133eef4313cb h1:rhjz/8Mbfa8xROFiH+MQphmAmgqRM0bOMnytznhWEXk=
+golang.org/x/exp v0.0.0-20230420155640-133eef4313cb/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
+golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
+golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/net v0.9.0 h1:aWJ/m6xSmxWBx+V0XRHTlrYrPG56jKsLdTFmsSsCzOM=
+golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
 golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210819135213-f52c844e1c1c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
-golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
-golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-golang.org/x/tools v0.6.0 h1:BOw41kyTf3PuCW1pVQf8+Cyg8pMlkYB1oo9iJ6D/lKM=
-golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
+golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU=
+golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/term v0.7.0 h1:BEvjmm5fURWqcfbSKTdpkDXYBrUS1c0m8agp14W48vQ=
+golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
+golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
+golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/tools v0.8.0 h1:vSDcovVPld282ceKgDimkRSC8kpaH1dgyc9UMzlt84Y=
+golang.org/x/tools v0.8.0/go.mod h1:JxBZ99ISMI5ViVkT1tr6tdNmXeTrcpVSD3vZ1RsRdN4=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
 golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
-gonum.org/v1/gonum v0.11.0 h1:f1IJhK4Km5tBJmaiJXtk/PkL4cdVX6J+tGiM187uT5E=
-google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc h1:ijGwO+0vL2hJt5gaygqP2j6PfflOBrRot0IczKbmtio=
-google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM=
-google.golang.org/grpc v1.53.0 h1:LAv2ds7cmFV/XTS3XG1NneeENYrXGmorPxsBbptIjNc=
-google.golang.org/grpc v1.53.0/go.mod h1:OnIrk0ipVdj4N5d9IUoFUx72/VlD7+jUsHwZgwSMQpw=
+gonum.org/v1/gonum v0.12.0 h1:xKuo6hzt+gMav00meVPUlXwSdoEJP46BR+wdxQEFK2o=
+gonum.org/v1/gonum v0.12.0/go.mod h1:73TDxJfAAHeA8Mk9mf8NlIppyhQNo5GLTcYeqgo2lvY=
+google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A=
+google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU=
+google.golang.org/grpc v1.54.0 h1:EhTqbhiYeixwWQtAEZAxmV9MGqcjEU2mFx52xCzNyag=
+google.golang.org/grpc v1.54.0/go.mod h1:PUSEXI6iWghWaB6lXM4knEgpJNu2qUcKfDtNci3EC2g=
 google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
 google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
-google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
-google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
+google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-lukechampine.com/uint128 v1.2.0 h1:mBi/5l91vocEN8otkC5bDLhi2KdCticRiwbdB0O+rjI=
-lukechampine.com/uint128 v1.2.0/go.mod h1:c4eWIwlEGaxC/+H1VguhU4PHXNWDCDMUlWdIWl2j1gk=
+lukechampine.com/uint128 v1.3.0 h1:cDdUVfRwDUDovz610ABgFD17nXD4/uDgVHl2sC3+sbo=
+lukechampine.com/uint128 v1.3.0/go.mod h1:c4eWIwlEGaxC/+H1VguhU4PHXNWDCDMUlWdIWl2j1gk=
 modernc.org/cc/v3 v3.40.0 h1:P3g79IUS/93SYhtoeaHW+kRCIrYaxJ27MFPv+7kaTOw=
 modernc.org/cc/v3 v3.40.0/go.mod h1:/bTg4dnWkSXowUO6ssQKnOV0yMVxDYNIsIrzqTFDGH0=
 modernc.org/ccgo/v3 v3.16.13 h1:Mkgdzl46i5F/CNR/Kj80Ri59hC8TKAhZrYSaqvkwzUw=
 modernc.org/ccgo/v3 v3.16.13/go.mod h1:2Quk+5YgpImhPjv2Qsob1DnZ/4som1lJTodubIcoUkY=
 modernc.org/ccorpus v1.11.6 h1:J16RXiiqiCgua6+ZvQot4yUuUy8zxgqbqEEUuGPlISk=
 modernc.org/httpfs v1.0.6 h1:AAgIpFZRXuYnkjftxTAZwMIiwEqAfk8aVB2/oA6nAeM=
-modernc.org/libc v1.22.2 h1:4U7v51GyhlWqQmwCHj28Rdq2Yzwk55ovjFrdPjs8Hb0=
-modernc.org/libc v1.22.2/go.mod h1:uvQavJ1pZ0hIoC/jfqNoMLURIMhKzINIWypNM17puug=
+modernc.org/libc v1.22.4 h1:wymSbZb0AlrjdAVX3cjreCHTPCpPARbQXNz6BHPzdwQ=
+modernc.org/libc v1.22.4/go.mod h1:jj+Z7dTNX8fBScMVNRAYZ/jF91K8fdT2hYMThc3YjBY=
 modernc.org/mathutil v1.5.0 h1:rV0Ko/6SfM+8G+yKiyI830l3Wuz1zRutdslNoQ0kfiQ=
 modernc.org/mathutil v1.5.0/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
 modernc.org/memory v1.5.0 h1:N+/8c5rE6EqugZwHii4IFsaJ7MUhoWX07J5tC/iI5Ds=
 modernc.org/memory v1.5.0/go.mod h1:PkUhL0Mugw21sHPeskwZW4D6VscE/GQJOnIpCnW6pSU=
 modernc.org/opt v0.1.3 h1:3XOZf2yznlhC+ibLltsDGzABUGVx8J6pnFMS3E4dcq4=
 modernc.org/opt v0.1.3/go.mod h1:WdSiB5evDcignE70guQKxYUl14mgWtbClRi5wmkkTX0=
-modernc.org/sqlite v1.20.4 h1:J8+m2trkN+KKoE7jglyHYYYiaq5xmz2HoHJIiBlRzbE=
-modernc.org/sqlite v1.20.4/go.mod h1:zKcGyrICaxNTMEHSr1HQ2GUraP0j+845GYw37+EyT6A=
+modernc.org/sqlite v1.21.2 h1:ixuUG0QS413Vfzyx6FWx6PYTmHaOegTY+hjzhn7L+a0=
+modernc.org/sqlite v1.21.2/go.mod h1:cxbLkB5WS32DnQqeH4h4o1B0eMr8W/y8/RGuxQ3JsC0=
 modernc.org/strutil v1.1.3 h1:fNMm+oJklMGYfU9Ylcywl0CO5O6nTfaowNsh2wpPjzY=
 modernc.org/strutil v1.1.3/go.mod h1:MEHNA7PdEnEwLvspRMtWTNnp2nnyvMfkimT1NKNAGbw=
-modernc.org/tcl v1.15.0 h1:oY+JeD11qVVSgVvodMJsu7Edf8tr5E/7tuhF5cNYz34=
+modernc.org/tcl v1.15.1 h1:mOQwiEK4p7HruMZcwKTZPw/aqtGM4aY00uzWhlKKYws=
 modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
 modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
 modernc.org/z v1.7.0 h1:xkDw/KepgEjeizO2sNco+hqYkU12taxQFqPEmgm1GWE=
diff --git a/go/adbc/pkg/Makefile b/go/adbc/pkg/Makefile
index b7a41d5..8b04489 100644
--- a/go/adbc/pkg/Makefile
+++ b/go/adbc/pkg/Makefile
@@ -22,7 +22,8 @@ else
 endif
 
 DRIVERS := \
-	libadbc_driver_flightsql.$(SUFFIX)
+	libadbc_driver_flightsql.$(SUFFIX) \
+	libadbc_driver_snowflake.$(SUFFIX)
 
 .PHONY: all
 all: $(DRIVERS)
diff --git a/go/adbc/pkg/doc.go b/go/adbc/pkg/doc.go
index d5dd143..8aa95d7 100644
--- a/go/adbc/pkg/doc.go
+++ b/go/adbc/pkg/doc.go
@@ -29,3 +29,4 @@
 package pkg
 
 //go:generate go run ./gen -prefix "FlightSQL" -driver ../driver/flightsql -o flightsql
+//go:generate go run ./gen -prefix "Snowflake" -driver ../driver/snowflake -o snowflake
diff --git a/go/adbc/pkg/gen/main.go b/go/adbc/pkg/gen/main.go
index 575eef1..24ae7d2 100644
--- a/go/adbc/pkg/gen/main.go
+++ b/go/adbc/pkg/gen/main.go
@@ -22,7 +22,6 @@ import (
 	"errors"
 	"flag"
 	"fmt"
-	"io/ioutil"
 	"log"
 	"os"
 	"os/exec"
@@ -130,7 +129,7 @@ func main() {
 }
 
 func mustReadAll(path string) []byte {
-	data, err := ioutil.ReadFile(path)
+	data, err := os.ReadFile(path)
 	if err != nil {
 		log.Fatal(err)
 	}
@@ -177,7 +176,7 @@ func process(data interface{}, specs []pathSpec) {
 				log.Fatalf("error formatting '%s': %s", spec.in, err)
 			}
 		}
-		if err := ioutil.WriteFile(spec.out, generated, fileMode(spec.in)); err != nil {
+		if err := os.WriteFile(spec.out, generated, fileMode(spec.in)); err != nil {
 			log.Fatal(err)
 		}
 	}
diff --git a/go/adbc/pkg/snowflake/driver.go b/go/adbc/pkg/snowflake/driver.go
new file mode 100644
index 0000000..296c671
--- /dev/null
+++ b/go/adbc/pkg/snowflake/driver.go
@@ -0,0 +1,728 @@
+// Code generated by _tmpl/driver.go.tmpl. DO NOT EDIT.
+
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+//go:build driverlib
+
+package main
+
+// #cgo CXXFLAGS: -std=c++11
+// #include "../../drivermgr/adbc.h"
+// #include "utils.h"
+// #include <stdint.h>
+// #include <string.h>
+//
+// typedef const char cchar_t;
+// typedef const uint8_t cuint8_t;
+//
+// void releasePartitions(struct AdbcPartitions* partitions);
+//
+import "C"
+import (
+	"context"
+	"errors"
+	"fmt"
+	"runtime"
+	"runtime/cgo"
+	"unsafe"
+
+	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow-adbc/go/adbc/driver/snowflake"
+	"github.com/apache/arrow/go/v12/arrow/array"
+	"github.com/apache/arrow/go/v12/arrow/cdata"
+	"github.com/apache/arrow/go/v12/arrow/memory/mallocator"
+)
+
+var drv = snowflake.Driver{Alloc: mallocator.NewMallocator()}
+
+const errPrefix = "[Snowflake] "
+
+func setErr(err *C.struct_AdbcError, format string, vals ...interface{}) {
+	if err == nil {
+		return
+	}
+
+	if err.release != nil {
+		C.SnowflakeerrRelease(err)
+	}
+
+	msg := errPrefix + fmt.Sprintf(format, vals...)
+	err.message = C.CString(msg)
+	err.release = (*[0]byte)(C.Snowflake_release_error)
+}
+
+func errToAdbcErr(adbcerr *C.struct_AdbcError, err error) adbc.Status {
+	if adbcerr == nil || err == nil {
+		return adbc.StatusOK
+	}
+
+	var adbcError adbc.Error
+	if errors.As(err, &adbcError) {
+		setErr(adbcerr, adbcError.Msg)
+		return adbcError.Code
+	}
+
+	setErr(adbcerr, err.Error())
+	return adbc.StatusUnknown
+}
+
+// Allocate a new cgo.Handle and store its address in a heap-allocated
+// uintptr_t.  Experimentally, this was found to be necessary, else
+// something (the Go runtime?) would corrupt (garbage-collect?) the
+// handle.
+func createHandle(hndl cgo.Handle) unsafe.Pointer {
+	// uintptr_t* hptr = malloc(sizeof(uintptr_t));
+	hptr := (*C.uintptr_t)(C.malloc(C.sizeof_uintptr_t))
+	// *hptr = (uintptr)hndl;
+	*hptr = C.uintptr_t(uintptr(hndl))
+	return unsafe.Pointer(hptr)
+}
+
+func getFromHandle[T any](ptr unsafe.Pointer) *T {
+	// uintptr_t* hptr = (uintptr_t*)ptr;
+	hptr := (*C.uintptr_t)(ptr)
+	return cgo.Handle((uintptr)(*hptr)).Value().(*T)
+}
+
+func checkDBAlloc(db *C.struct_AdbcDatabase, err *C.struct_AdbcError, fname string) bool {
+	if db == nil {
+		setErr(err, "%s: database not allocated", fname)
+		return false
+	}
+	if db.private_data == nil {
+		setErr(err, "%s: database not allocated", fname)
+		return false
+	}
+	return true
+}
+
+func checkDBInit(db *C.struct_AdbcDatabase, err *C.struct_AdbcError, fname string) *cDatabase {
+	if !checkDBAlloc(db, err, fname) {
+		return nil
+	}
+	cdb := getFromHandle[cDatabase](db.private_data)
+	if cdb.db == nil {
+		setErr(err, "%s: database not initialized", fname)
+		return nil
+	}
+
+	return cdb
+}
+
+type cDatabase struct {
+	opts map[string]string
+	db   adbc.Database
+}
+
+//export SnowflakeDatabaseNew
+func SnowflakeDatabaseNew(db *C.struct_AdbcDatabase, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if db.private_data != nil {
+		setErr(err, "AdbcDatabaseNew: database already allocated")
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	dbobj := &cDatabase{opts: make(map[string]string)}
+	hndl := cgo.NewHandle(dbobj)
+	db.private_data = createHandle(hndl)
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeDatabaseSetOption
+func SnowflakeDatabaseSetOption(db *C.struct_AdbcDatabase, key, value *C.cchar_t, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if !checkDBAlloc(db, err, "AdbcDatabaseSetOption") {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	cdb := getFromHandle[cDatabase](db.private_data)
+
+	k, v := C.GoString(key), C.GoString(value)
+	cdb.opts[k] = v
+
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeDatabaseInit
+func SnowflakeDatabaseInit(db *C.struct_AdbcDatabase, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if !checkDBAlloc(db, err, "AdbcDatabaseInit") {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	cdb := getFromHandle[cDatabase](db.private_data)
+
+	if cdb.db != nil {
+		setErr(err, "AdbcDatabaseInit: database already initialized")
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	adb, aerr := drv.NewDatabase(cdb.opts)
+	if aerr != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, aerr))
+	}
+
+	cdb.db = adb
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeDatabaseRelease
+func SnowflakeDatabaseRelease(db *C.struct_AdbcDatabase, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if !checkDBAlloc(db, err, "AdbcDatabaseRelease") {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	h := (*(*cgo.Handle)(db.private_data))
+
+	cdb := h.Value().(*cDatabase)
+	cdb.db = nil
+	cdb.opts = nil
+	C.free(unsafe.Pointer(db.private_data))
+	db.private_data = nil
+	h.Delete()
+	// manually trigger GC for two reasons:
+	//  1. ASAN expects the release callback to be called before
+	//     the process ends, but GC is not deterministic. So by manually
+	//     triggering the GC we ensure the release callback gets called.
+	//  2. It makes GC behavior deterministic by having every Release
+	//     function trigger a garbage collection
+	runtime.GC()
+	return C.ADBC_STATUS_OK
+}
+
+type cConn struct {
+	cnxn adbc.Connection
+}
+
+func checkConnAlloc(cnxn *C.struct_AdbcConnection, err *C.struct_AdbcError, fname string) bool {
+	if cnxn == nil {
+		setErr(err, "%s: connection not allocated", fname)
+		return false
+	}
+	if cnxn.private_data == nil {
+		setErr(err, "%s: connection not allocated", fname)
+		return false
+	}
+	return true
+}
+
+func checkConnInit(cnxn *C.struct_AdbcConnection, err *C.struct_AdbcError, fname string) *cConn {
+	if !checkConnAlloc(cnxn, err, fname) {
+		return nil
+	}
+	conn := getFromHandle[cConn](cnxn.private_data)
+	if conn.cnxn == nil {
+		setErr(err, "%s: connection not initialized", fname)
+		return nil
+	}
+
+	return conn
+}
+
+//export SnowflakeConnectionNew
+func SnowflakeConnectionNew(cnxn *C.struct_AdbcConnection, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if cnxn.private_data != nil {
+		setErr(err, "AdbcConnectionNew: connection already allocated")
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	hndl := cgo.NewHandle(&cConn{})
+	cnxn.private_data = createHandle(hndl)
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeConnectionSetOption
+func SnowflakeConnectionSetOption(cnxn *C.struct_AdbcConnection, key, val *C.cchar_t, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if !checkConnAlloc(cnxn, err, "AdbcConnectionSetOption") {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	conn := getFromHandle[cConn](cnxn.private_data)
+
+	code := errToAdbcErr(err, conn.cnxn.(adbc.PostInitOptions).SetOption(C.GoString(key), C.GoString(val)))
+	return C.AdbcStatusCode(code)
+}
+
+//export SnowflakeConnectionInit
+func SnowflakeConnectionInit(cnxn *C.struct_AdbcConnection, db *C.struct_AdbcDatabase, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if !checkConnAlloc(cnxn, err, "AdbcConnectionInit") {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	conn := getFromHandle[cConn](cnxn.private_data)
+	if conn.cnxn != nil {
+		setErr(err, "AdbcConnectionInit: connection already initialized")
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	cdb := checkDBInit(db, err, "AdbcConnectionInit")
+	if cdb == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	c, e := cdb.db.Open(context.Background())
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+
+	conn.cnxn = c
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeConnectionRelease
+func SnowflakeConnectionRelease(cnxn *C.struct_AdbcConnection, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if !checkConnAlloc(cnxn, err, "AdbcConnectionRelease") {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+	h := (*(*cgo.Handle)(cnxn.private_data))
+
+	conn := h.Value().(*cConn)
+	defer func() {
+		conn.cnxn = nil
+		C.free(unsafe.Pointer(cnxn.private_data))
+		cnxn.private_data = nil
+		h.Delete()
+		// manually trigger GC for two reasons:
+		//  1. ASAN expects the release callback to be called before
+		//     the process ends, but GC is not deterministic. So by manually
+		//     triggering the GC we ensure the release callback gets called.
+		//  2. It makes GC behavior deterministic by having every Release
+		//     function trigger a garbage collection
+		runtime.GC()
+	}()
+	if conn.cnxn == nil {
+		return C.ADBC_STATUS_OK
+	}
+	return C.AdbcStatusCode(errToAdbcErr(err, conn.cnxn.Close()))
+}
+
+func fromCArr[T, CType any](ptr *CType, sz int) []T {
+	if ptr == nil || sz == 0 {
+		return nil
+	}
+
+	return unsafe.Slice((*T)(unsafe.Pointer(ptr)), sz)
+}
+
+func toCdataStream(ptr *C.struct_ArrowArrayStream) *cdata.CArrowArrayStream {
+	return (*cdata.CArrowArrayStream)(unsafe.Pointer(ptr))
+}
+
+func toCdataSchema(ptr *C.struct_ArrowSchema) *cdata.CArrowSchema {
+	return (*cdata.CArrowSchema)(unsafe.Pointer(ptr))
+}
+
+func toCdataArray(ptr *C.struct_ArrowArray) *cdata.CArrowArray {
+	return (*cdata.CArrowArray)(unsafe.Pointer(ptr))
+}
+
+//export SnowflakeConnectionGetInfo
+func SnowflakeConnectionGetInfo(cnxn *C.struct_AdbcConnection, codes *C.uint32_t, len C.size_t, out *C.struct_ArrowArrayStream, err *C.struct_AdbcError) C.AdbcStatusCode {
+	conn := checkConnInit(cnxn, err, "AdbcConnectionGetInfo")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	infoCodes := fromCArr[adbc.InfoCode](codes, int(len))
+	rdr, e := conn.cnxn.GetInfo(context.Background(), infoCodes)
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+
+	cdata.ExportRecordReader(rdr, toCdataStream(out))
+	return C.ADBC_STATUS_OK
+}
+
+func toStrPtr(in *C.cchar_t) *string {
+	if in == nil {
+		return nil
+	}
+
+	out := C.GoString((*C.char)(in))
+	return &out
+}
+
+func toStrSlice(in **C.cchar_t) []string {
+	if in == nil {
+		return nil
+	}
+
+	sz := unsafe.Sizeof(*in)
+
+	out := make([]string, 0, 1)
+	for *in != nil {
+		out = append(out, C.GoString(*in))
+		in = (**C.cchar_t)(unsafe.Add(unsafe.Pointer(in), sz))
+	}
+	return out
+}
+
+//export SnowflakeConnectionGetObjects
+func SnowflakeConnectionGetObjects(cnxn *C.struct_AdbcConnection, depth C.int, catalog, dbSchema, tableName *C.cchar_t, tableType **C.cchar_t, columnName *C.cchar_t,
+	out *C.struct_ArrowArrayStream, err *C.struct_AdbcError) C.AdbcStatusCode {
+
+	conn := checkConnInit(cnxn, err, "AdbcConnectionGetObjects")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	rdr, e := conn.cnxn.GetObjects(context.Background(), adbc.ObjectDepth(depth), toStrPtr(catalog), toStrPtr(dbSchema), toStrPtr(tableName), toStrPtr(columnName), toStrSlice(tableType))
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+	cdata.ExportRecordReader(rdr, toCdataStream(out))
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeConnectionGetTableSchema
+func SnowflakeConnectionGetTableSchema(cnxn *C.struct_AdbcConnection, catalog, dbSchema, tableName *C.cchar_t, schema *C.struct_ArrowSchema, err *C.struct_AdbcError) C.AdbcStatusCode {
+	conn := checkConnInit(cnxn, err, "AdbcConnectionGetTableSchema")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	sc, e := conn.cnxn.GetTableSchema(context.Background(), toStrPtr(catalog), toStrPtr(dbSchema), C.GoString(tableName))
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+	cdata.ExportArrowSchema(sc, toCdataSchema(schema))
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeConnectionGetTableTypes
+func SnowflakeConnectionGetTableTypes(cnxn *C.struct_AdbcConnection, out *C.struct_ArrowArrayStream, err *C.struct_AdbcError) C.AdbcStatusCode {
+	conn := checkConnInit(cnxn, err, "AdbcConnectionGetTableTypes")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	rdr, e := conn.cnxn.GetTableTypes(context.Background())
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+	cdata.ExportRecordReader(rdr, toCdataStream(out))
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeConnectionReadPartition
+func SnowflakeConnectionReadPartition(cnxn *C.struct_AdbcConnection, serialized *C.cuint8_t, serializedLen C.size_t, out *C.struct_ArrowArrayStream, err *C.struct_AdbcError) C.AdbcStatusCode {
+	conn := checkConnInit(cnxn, err, "AdbcConnectionReadPartition")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	rdr, e := conn.cnxn.ReadPartition(context.Background(), fromCArr[byte](serialized, int(serializedLen)))
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+	cdata.ExportRecordReader(rdr, toCdataStream(out))
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeConnectionCommit
+func SnowflakeConnectionCommit(cnxn *C.struct_AdbcConnection, err *C.struct_AdbcError) C.AdbcStatusCode {
+	conn := checkConnInit(cnxn, err, "AdbcConnectionCommit")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	return C.AdbcStatusCode(errToAdbcErr(err, conn.cnxn.Commit(context.Background())))
+}
+
+//export SnowflakeConnectionRollback
+func SnowflakeConnectionRollback(cnxn *C.struct_AdbcConnection, err *C.struct_AdbcError) C.AdbcStatusCode {
+	conn := checkConnInit(cnxn, err, "AdbcConnectionRollback")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	return C.AdbcStatusCode(errToAdbcErr(err, conn.cnxn.Rollback(context.Background())))
+}
+
+func checkStmtInit(stmt *C.struct_AdbcStatement, err *C.struct_AdbcError, fname string) adbc.Statement {
+	if stmt == nil {
+		setErr(err, "%s: statement not allocated", fname)
+		return nil
+	}
+
+	if stmt.private_data == nil {
+		setErr(err, "%s: statement not initialized", fname)
+		return nil
+	}
+
+	return (*(*cgo.Handle)(stmt.private_data)).Value().(adbc.Statement)
+}
+
+//export SnowflakeStatementNew
+func SnowflakeStatementNew(cnxn *C.struct_AdbcConnection, stmt *C.struct_AdbcStatement, err *C.struct_AdbcError) C.AdbcStatusCode {
+	conn := checkConnInit(cnxn, err, "AdbcStatementNew")
+	if conn == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	st, e := conn.cnxn.NewStatement()
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+
+	h := cgo.NewHandle(st)
+	stmt.private_data = createHandle(h)
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeStatementRelease
+func SnowflakeStatementRelease(stmt *C.struct_AdbcStatement, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if stmt == nil {
+		setErr(err, "AdbcStatementRelease: statement not allocated")
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	if stmt.private_data == nil {
+		setErr(err, "AdbcStatementRelease: statement not initialized")
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	h := (*(*cgo.Handle)(stmt.private_data))
+	st := h.Value().(adbc.Statement)
+	C.free(stmt.private_data)
+	stmt.private_data = nil
+
+	e := st.Close()
+	h.Delete()
+	// manually trigger GC for two reasons:
+	//  1. ASAN expects the release callback to be called before
+	//     the process ends, but GC is not deterministic. So by manually
+	//     triggering the GC we ensure the release callback gets called.
+	//  2. It makes GC behavior deterministic by having every Release
+	//     function trigger a garbage collection
+	runtime.GC()
+	return C.AdbcStatusCode(errToAdbcErr(err, e))
+}
+
+//export SnowflakeStatementPrepare
+func SnowflakeStatementPrepare(stmt *C.struct_AdbcStatement, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementPrepare")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	return C.AdbcStatusCode(errToAdbcErr(err, st.Prepare(context.Background())))
+}
+
+//export SnowflakeStatementExecuteQuery
+func SnowflakeStatementExecuteQuery(stmt *C.struct_AdbcStatement, out *C.struct_ArrowArrayStream, affected *C.int64_t, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementExecuteQuery")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	if out == nil {
+		n, e := st.ExecuteUpdate(context.Background())
+		if e != nil {
+			return C.AdbcStatusCode(errToAdbcErr(err, e))
+		}
+
+		if affected != nil {
+			*affected = C.int64_t(n)
+		}
+	} else {
+		rdr, n, e := st.ExecuteQuery(context.Background())
+		if e != nil {
+			return C.AdbcStatusCode(errToAdbcErr(err, e))
+		}
+
+		if affected != nil {
+			*affected = C.int64_t(n)
+		}
+
+		cdata.ExportRecordReader(rdr, toCdataStream(out))
+	}
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeStatementSetSqlQuery
+func SnowflakeStatementSetSqlQuery(stmt *C.struct_AdbcStatement, query *C.cchar_t, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementSetSqlQuery")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	return C.AdbcStatusCode(errToAdbcErr(err, st.SetSqlQuery(C.GoString(query))))
+}
+
+//export SnowflakeStatementSetSubstraitPlan
+func SnowflakeStatementSetSubstraitPlan(stmt *C.struct_AdbcStatement, plan *C.cuint8_t, length C.size_t, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementSetSubstraitPlan")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	return C.AdbcStatusCode(errToAdbcErr(err, st.SetSubstraitPlan(fromCArr[byte](plan, int(length)))))
+}
+
+//export SnowflakeStatementBind
+func SnowflakeStatementBind(stmt *C.struct_AdbcStatement, values *C.struct_ArrowArray, schema *C.struct_ArrowSchema, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementBind")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	rec, e := cdata.ImportCRecordBatch(toCdataArray(values), toCdataSchema(schema))
+	if e != nil {
+		// if there was an error, we need to manually release the input
+		cdata.ReleaseCArrowArray(toCdataArray(values))
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+	defer rec.Release()
+
+	return C.AdbcStatusCode(errToAdbcErr(err, st.Bind(context.Background(), rec)))
+}
+
+//export SnowflakeStatementBindStream
+func SnowflakeStatementBindStream(stmt *C.struct_AdbcStatement, stream *C.struct_ArrowArrayStream, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementBindStream")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	rdr := cdata.ImportCArrayStream(toCdataStream(stream), nil)
+	return C.AdbcStatusCode(errToAdbcErr(err, st.BindStream(context.Background(), rdr.(array.RecordReader))))
+}
+
+//export SnowflakeStatementGetParameterSchema
+func SnowflakeStatementGetParameterSchema(stmt *C.struct_AdbcStatement, schema *C.struct_ArrowSchema, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementGetParameterSchema")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	sc, e := st.GetParameterSchema()
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+
+	cdata.ExportArrowSchema(sc, toCdataSchema(schema))
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeStatementSetOption
+func SnowflakeStatementSetOption(stmt *C.struct_AdbcStatement, key, value *C.cchar_t, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementSetOption")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	return C.AdbcStatusCode(errToAdbcErr(err, st.SetOption(C.GoString(key), C.GoString(value))))
+}
+
+//export releasePartitions
+func releasePartitions(partitions *C.struct_AdbcPartitions) {
+	if partitions.private_data == nil {
+		return
+	}
+
+	C.free(unsafe.Pointer(partitions.partitions))
+	C.free(unsafe.Pointer(partitions.partition_lengths))
+	C.free(partitions.private_data)
+	partitions.partitions = nil
+	partitions.partition_lengths = nil
+	partitions.private_data = nil
+}
+
+//export SnowflakeStatementExecutePartitions
+func SnowflakeStatementExecutePartitions(stmt *C.struct_AdbcStatement, schema *C.struct_ArrowSchema, partitions *C.struct_AdbcPartitions, affected *C.int64_t, err *C.struct_AdbcError) C.AdbcStatusCode {
+	st := checkStmtInit(stmt, err, "AdbcStatementExecutePartitions")
+	if st == nil {
+		return C.ADBC_STATUS_INVALID_STATE
+	}
+
+	sc, part, n, e := st.ExecutePartitions(context.Background())
+	if e != nil {
+		return C.AdbcStatusCode(errToAdbcErr(err, e))
+	}
+
+	if partitions == nil {
+		setErr(err, "AdbcStatementExecutePartitions: partitions output struct is null")
+		return C.ADBC_STATUS_INVALID_ARGUMENT
+	}
+
+	if affected != nil {
+		*affected = C.int64_t(n)
+	}
+
+	if sc != nil && schema != nil {
+		cdata.ExportArrowSchema(sc, toCdataSchema(schema))
+	}
+
+	partitions.num_partitions = C.size_t(part.NumPartitions)
+	partitions.partitions = (**C.cuint8_t)(C.malloc(C.size_t(unsafe.Sizeof((*C.uint8_t)(nil)) * uintptr(part.NumPartitions))))
+	partitions.partition_lengths = (*C.size_t)(C.malloc(C.size_t(unsafe.Sizeof(C.size_t(0)) * uintptr(part.NumPartitions))))
+
+	// Copy into C-allocated memory to avoid violating CGO rules
+	totalLen := 0
+	for _, p := range part.PartitionIDs {
+		totalLen += len(p)
+	}
+	partitions.private_data = C.malloc(C.size_t(totalLen))
+	dst := fromCArr[byte]((*byte)(partitions.private_data), totalLen)
+
+	partIDs := fromCArr[*C.cuint8_t](partitions.partitions, int(partitions.num_partitions))
+	partLens := fromCArr[C.size_t](partitions.partition_lengths, int(partitions.num_partitions))
+	for i, p := range part.PartitionIDs {
+		partIDs[i] = (*C.cuint8_t)(&dst[0])
+		copy(dst, p)
+		dst = dst[len(p):]
+		partLens[i] = C.size_t(len(p))
+	}
+
+	partitions.release = (*[0]byte)(C.releasePartitions)
+	return C.ADBC_STATUS_OK
+}
+
+//export SnowflakeDriverInit
+func SnowflakeDriverInit(version C.int, rawDriver *C.void, err *C.struct_AdbcError) C.AdbcStatusCode {
+	if version != C.ADBC_VERSION_1_0_0 {
+		setErr(err, "Only version %d supported, got %d", int(C.ADBC_VERSION_1_0_0), int(version))
+		return C.ADBC_STATUS_NOT_IMPLEMENTED
+	}
+
+	driver := (*C.struct_AdbcDriver)(unsafe.Pointer(rawDriver))
+	C.memset(unsafe.Pointer(driver), 0, C.sizeof_struct_AdbcDriver)
+	driver.DatabaseInit = (*[0]byte)(C.SnowflakeDatabaseInit)
+	driver.DatabaseNew = (*[0]byte)(C.SnowflakeDatabaseNew)
+	driver.DatabaseRelease = (*[0]byte)(C.SnowflakeDatabaseRelease)
+	driver.DatabaseSetOption = (*[0]byte)(C.SnowflakeDatabaseSetOption)
+
+	driver.ConnectionNew = (*[0]byte)(C.SnowflakeConnectionNew)
+	driver.ConnectionInit = (*[0]byte)(C.SnowflakeConnectionInit)
+	driver.ConnectionRelease = (*[0]byte)(C.SnowflakeConnectionRelease)
+	driver.ConnectionSetOption = (*[0]byte)(C.SnowflakeConnectionSetOption)
+	driver.ConnectionGetInfo = (*[0]byte)(C.SnowflakeConnectionGetInfo)
+	driver.ConnectionGetObjects = (*[0]byte)(C.SnowflakeConnectionGetObjects)
+	driver.ConnectionGetTableSchema = (*[0]byte)(C.SnowflakeConnectionGetTableSchema)
+	driver.ConnectionGetTableTypes = (*[0]byte)(C.SnowflakeConnectionGetTableTypes)
+	driver.ConnectionReadPartition = (*[0]byte)(C.SnowflakeConnectionReadPartition)
+	driver.ConnectionCommit = (*[0]byte)(C.SnowflakeConnectionCommit)
+	driver.ConnectionRollback = (*[0]byte)(C.SnowflakeConnectionRollback)
+
+	driver.StatementNew = (*[0]byte)(C.SnowflakeStatementNew)
+	driver.StatementRelease = (*[0]byte)(C.SnowflakeStatementRelease)
+	driver.StatementSetOption = (*[0]byte)(C.SnowflakeStatementSetOption)
+	driver.StatementSetSqlQuery = (*[0]byte)(C.SnowflakeStatementSetSqlQuery)
+	driver.StatementSetSubstraitPlan = (*[0]byte)(C.SnowflakeStatementSetSubstraitPlan)
+	driver.StatementBind = (*[0]byte)(C.SnowflakeStatementBind)
+	driver.StatementBindStream = (*[0]byte)(C.SnowflakeStatementBindStream)
+	driver.StatementExecuteQuery = (*[0]byte)(C.SnowflakeStatementExecuteQuery)
+	driver.StatementExecutePartitions = (*[0]byte)(C.SnowflakeStatementExecutePartitions)
+	driver.StatementGetParameterSchema = (*[0]byte)(C.SnowflakeStatementGetParameterSchema)
+	driver.StatementPrepare = (*[0]byte)(C.SnowflakeStatementPrepare)
+
+	return C.ADBC_STATUS_OK
+}
+
+func main() {}
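
The createHandle/getFromHandle pair above is the load-bearing trick in this
file: cgo rules forbid C code from retaining Go pointers, so the wrapper boxes
a cgo.Handle inside C-allocated memory and hands C only that stable malloc'd
address. A stripped-down sketch of the same pattern (the helper names here are
illustrative, not part of the generated code):

    package main

    /*
    #include <stdint.h>
    #include <stdlib.h>
    */
    import "C"
    import (
        "runtime/cgo"
        "unsafe"
    )

    // box pins a Go value behind a cgo.Handle and stores the handle's integer
    // value in C-allocated memory, so C never sees a Go pointer.
    func box(v interface{}) unsafe.Pointer {
        h := cgo.NewHandle(v)
        p := (*C.uintptr_t)(C.malloc(C.sizeof_uintptr_t))
        *p = C.uintptr_t(uintptr(h))
        return unsafe.Pointer(p)
    }

    // unbox recovers the boxed value without consuming the handle.
    func unbox(p unsafe.Pointer) interface{} {
        return cgo.Handle(uintptr(*(*C.uintptr_t)(p))).Value()
    }

    // unboxFree releases the handle and the C allocation, mirroring what the
    // Release callbacks above do before triggering a GC.
    func unboxFree(p unsafe.Pointer) {
        cgo.Handle(uintptr(*(*C.uintptr_t)(p))).Delete()
        C.free(p)
    }

    func main() {}
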
diff --git a/go/adbc/pkg/snowflake/utils.c b/go/adbc/pkg/snowflake/utils.c
new file mode 100644
index 0000000..8c360b0
--- /dev/null
+++ b/go/adbc/pkg/snowflake/utils.c
@@ -0,0 +1,200 @@
+// Code generated by _tmpl/utils.c.tmpl. DO NOT EDIT.
+
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+// clang-format off
+//go:build driverlib
+//  clang-format on
+
+#include "utils.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void Snowflake_release_error(struct AdbcError* error) {
+  free(error->message);
+  error->message = NULL;
+  error->release = NULL;
+}
+
+AdbcStatusCode AdbcDatabaseNew(struct AdbcDatabase* database, struct AdbcError* error) {
+  return SnowflakeDatabaseNew(database, error);
+}
+
+AdbcStatusCode AdbcDatabaseSetOption(struct AdbcDatabase* database, const char* key,
+                                     const char* value, struct AdbcError* error) {
+  return SnowflakeDatabaseSetOption(database, key, value, error);
+}
+
+AdbcStatusCode AdbcDatabaseInit(struct AdbcDatabase* database, struct AdbcError* error) {
+  return SnowflakeDatabaseInit(database, error);
+}
+
+AdbcStatusCode AdbcDatabaseRelease(struct AdbcDatabase* database,
+                                   struct AdbcError* error) {
+  return SnowflakeDatabaseRelease(database, error);
+}
+
+AdbcStatusCode AdbcConnectionNew(struct AdbcConnection* connection,
+                                 struct AdbcError* error) {
+  return SnowflakeConnectionNew(connection, error);
+}
+
+AdbcStatusCode AdbcConnectionSetOption(struct AdbcConnection* connection, const char* key,
+                                       const char* value, struct AdbcError* error) {
+  return SnowflakeConnectionSetOption(connection, key, value, error);
+}
+
+AdbcStatusCode AdbcConnectionInit(struct AdbcConnection* connection,
+                                  struct AdbcDatabase* database,
+                                  struct AdbcError* error) {
+  return SnowflakeConnectionInit(connection, database, error);
+}
+
+AdbcStatusCode AdbcConnectionRelease(struct AdbcConnection* connection,
+                                     struct AdbcError* error) {
+  return SnowflakeConnectionRelease(connection, error);
+}
+
+AdbcStatusCode AdbcConnectionGetInfo(struct AdbcConnection* connection,
+                                     uint32_t* info_codes, size_t info_codes_length,
+                                     struct ArrowArrayStream* out,
+                                     struct AdbcError* error) {
+  return SnowflakeConnectionGetInfo(connection, info_codes, info_codes_length, out,
+                                    error);
+}
+
+AdbcStatusCode AdbcConnectionGetObjects(struct AdbcConnection* connection, int depth,
+                                        const char* catalog, const char* db_schema,
+                                        const char* table_name, const char** table_type,
+                                        const char* column_name,
+                                        struct ArrowArrayStream* out,
+                                        struct AdbcError* error) {
+  return SnowflakeConnectionGetObjects(connection, depth, catalog, db_schema, table_name,
+                                       table_type, column_name, out, error);
+}
+
+AdbcStatusCode AdbcConnectionGetTableSchema(struct AdbcConnection* connection,
+                                            const char* catalog, const char* db_schema,
+                                            const char* table_name,
+                                            struct ArrowSchema* schema,
+                                            struct AdbcError* error) {
+  return SnowflakeConnectionGetTableSchema(connection, catalog, db_schema, table_name,
+                                           schema, error);
+}
+
+AdbcStatusCode AdbcConnectionGetTableTypes(struct AdbcConnection* connection,
+                                           struct ArrowArrayStream* out,
+                                           struct AdbcError* error) {
+  return SnowflakeConnectionGetTableTypes(connection, out, error);
+}
+
+AdbcStatusCode AdbcConnectionReadPartition(struct AdbcConnection* connection,
+                                           const uint8_t* serialized_partition,
+                                           size_t serialized_length,
+                                           struct ArrowArrayStream* out,
+                                           struct AdbcError* error) {
+  return SnowflakeConnectionReadPartition(connection, serialized_partition,
+                                          serialized_length, out, error);
+}
+
+AdbcStatusCode AdbcConnectionCommit(struct AdbcConnection* connection,
+                                    struct AdbcError* error) {
+  return SnowflakeConnectionCommit(connection, error);
+}
+
+AdbcStatusCode AdbcConnectionRollback(struct AdbcConnection* connection,
+                                      struct AdbcError* error) {
+  return SnowflakeConnectionRollback(connection, error);
+}
+
+AdbcStatusCode AdbcStatementNew(struct AdbcConnection* connection,
+                                struct AdbcStatement* statement,
+                                struct AdbcError* error) {
+  return SnowflakeStatementNew(connection, statement, error);
+}
+
+AdbcStatusCode AdbcStatementRelease(struct AdbcStatement* statement,
+                                    struct AdbcError* error) {
+  return SnowflakeStatementRelease(statement, error);
+}
+
+AdbcStatusCode AdbcStatementExecuteQuery(struct AdbcStatement* statement,
+                                         struct ArrowArrayStream* out,
+                                         int64_t* rows_affected,
+                                         struct AdbcError* error) {
+  return SnowflakeStatementExecuteQuery(statement, out, rows_affected, error);
+}
+
+AdbcStatusCode AdbcStatementPrepare(struct AdbcStatement* statement,
+                                    struct AdbcError* error) {
+  return SnowflakeStatementPrepare(statement, error);
+}
+
+AdbcStatusCode AdbcStatementSetSqlQuery(struct AdbcStatement* statement,
+                                        const char* query, struct AdbcError* error) {
+  return SnowflakeStatementSetSqlQuery(statement, query, error);
+}
+
+AdbcStatusCode AdbcStatementSetSubstraitPlan(struct AdbcStatement* statement,
+                                             const uint8_t* plan, size_t length,
+                                             struct AdbcError* error) {
+  return SnowflakeStatementSetSubstraitPlan(statement, plan, length, error);
+}
+
+AdbcStatusCode AdbcStatementBind(struct AdbcStatement* statement,
+                                 struct ArrowArray* values, struct ArrowSchema* schema,
+                                 struct AdbcError* error) {
+  return SnowflakeStatementBind(statement, values, schema, error);
+}
+
+AdbcStatusCode AdbcStatementBindStream(struct AdbcStatement* statement,
+                                       struct ArrowArrayStream* stream,
+                                       struct AdbcError* error) {
+  return SnowflakeStatementBindStream(statement, stream, error);
+}
+
+AdbcStatusCode AdbcStatementGetParameterSchema(struct AdbcStatement* statement,
+                                               struct ArrowSchema* schema,
+                                               struct AdbcError* error) {
+  return SnowflakeStatementGetParameterSchema(statement, schema, error);
+}
+
+AdbcStatusCode AdbcStatementSetOption(struct AdbcStatement* statement, const char* key,
+                                      const char* value, struct AdbcError* error) {
+  return SnowflakeStatementSetOption(statement, key, value, error);
+}
+
+AdbcStatusCode AdbcStatementExecutePartitions(struct AdbcStatement* statement,
+                                              struct ArrowSchema* schema,
+                                              struct AdbcPartitions* partitions,
+                                              int64_t* rows_affected,
+                                              struct AdbcError* error) {
+  return SnowflakeStatementExecutePartitions(statement, schema, partitions, rows_affected,
+                                             error);
+}
+
+ADBC_EXPORT
+AdbcStatusCode AdbcDriverInit(int version, void* driver, struct AdbcError* error) {
+  return SnowflakeDriverInit(version, driver, error);
+}
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/go/adbc/pkg/snowflake/utils.h b/go/adbc/pkg/snowflake/utils.h
new file mode 100644
index 0000000..453d7a0
--- /dev/null
+++ b/go/adbc/pkg/snowflake/utils.h
@@ -0,0 +1,97 @@
+// Code generated by _tmpl/utils.h.tmpl. DO NOT EDIT.
+
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+// clang-format off
+//go:build driverlib
+//  clang-format on
+
+#pragma once
+
+#include <stdlib.h>
+#include "../../drivermgr/adbc.h"
+
+AdbcStatusCode SnowflakeDatabaseNew(struct AdbcDatabase* db, struct AdbcError* err);
+AdbcStatusCode SnowflakeDatabaseSetOption(struct AdbcDatabase* db, const char* key,
+                                          const char* value, struct AdbcError* err);
+AdbcStatusCode SnowflakeDatabaseInit(struct AdbcDatabase* db, struct AdbcError* err);
+AdbcStatusCode SnowflakeDatabaseRelease(struct AdbcDatabase* db, struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionNew(struct AdbcConnection* cnxn, struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionSetOption(struct AdbcConnection* cnxn, const char* key,
+                                            const char* val, struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionInit(struct AdbcConnection* cnxn,
+                                       struct AdbcDatabase* db, struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionRelease(struct AdbcConnection* cnxn,
+                                          struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionGetInfo(struct AdbcConnection* cnxn, uint32_t* codes,
+                                          size_t len, struct ArrowArrayStream* out,
+                                          struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionGetObjects(
+    struct AdbcConnection* cnxn, int depth, const char* catalog, const char* dbSchema,
+    const char* tableName, const char** tableType, const char* columnName,
+    struct ArrowArrayStream* out, struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionGetTableSchema(
+    struct AdbcConnection* cnxn, const char* catalog, const char* dbSchema,
+    const char* tableName, struct ArrowSchema* schema, struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionGetTableTypes(struct AdbcConnection* cnxn,
+                                                struct ArrowArrayStream* out,
+                                                struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionReadPartition(struct AdbcConnection* cnxn,
+                                                const uint8_t* serialized,
+                                                size_t serializedLen,
+                                                struct ArrowArrayStream* out,
+                                                struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionCommit(struct AdbcConnection* cnxn,
+                                         struct AdbcError* err);
+AdbcStatusCode SnowflakeConnectionRollback(struct AdbcConnection* cnxn,
+                                           struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementNew(struct AdbcConnection* cnxn,
+                                     struct AdbcStatement* stmt, struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementRelease(struct AdbcStatement* stmt,
+                                         struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementPrepare(struct AdbcStatement* stmt,
+                                         struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementExecuteQuery(struct AdbcStatement* stmt,
+                                              struct ArrowArrayStream* out,
+                                              int64_t* affected, struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementSetSqlQuery(struct AdbcStatement* stmt,
+                                             const char* query, struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementSetSubstraitPlan(struct AdbcStatement* stmt,
+                                                  const uint8_t* plan, size_t length,
+                                                  struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementBind(struct AdbcStatement* stmt,
+                                      struct ArrowArray* values,
+                                      struct ArrowSchema* schema, struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementBindStream(struct AdbcStatement* stmt,
+                                            struct ArrowArrayStream* stream,
+                                            struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementGetParameterSchema(struct AdbcStatement* stmt,
+                                                    struct ArrowSchema* schema,
+                                                    struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementSetOption(struct AdbcStatement* stmt, const char* key,
+                                           const char* value, struct AdbcError* err);
+AdbcStatusCode SnowflakeStatementExecutePartitions(struct AdbcStatement* stmt,
+                                                   struct ArrowSchema* schema,
+                                                   struct AdbcPartitions* partitions,
+                                                   int64_t* affected,
+                                                   struct AdbcError* err);
+AdbcStatusCode SnowflakeDriverInit(int version, void* rawDriver, struct AdbcError* err);
+
+static inline void SnowflakeerrRelease(struct AdbcError* error) { error->release(error); }
+
+void Snowflake_release_error(struct AdbcError* error);
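
utils.c and utils.h form the C shim that gives the built shared library the
standard Adbc* entry points, each forwarding to the corresponding exported Go
function, so any ADBC driver manager can dlopen the result. A hedged sketch of
loading it through the Go driver-manager wrapper in go/adbc/drivermgr (the
library name and DSN below are placeholders):

    package main

    import (
        "context"
        "fmt"

        "github.com/apache/arrow-adbc/go/adbc"
        "github.com/apache/arrow-adbc/go/adbc/drivermgr"
    )

    func main() {
        var drv drivermgr.Driver
        db, err := drv.NewDatabase(map[string]string{
            "driver":          "adbc_driver_snowflake",          // resolved on the loader path
            adbc.OptionKeyURI: "user:password@account/database", // placeholder DSN
        })
        if err != nil {
            panic(err)
        }
        cnxn, err := db.Open(context.Background())
        if err != nil {
            panic(err)
        }
        defer cnxn.Close()
        fmt.Println("connection opened through the driver manager")
    }
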
diff --git a/go/adbc/standard_schemas.go b/go/adbc/standard_schemas.go
index 6bf6c52..eb3d61f 100644
--- a/go/adbc/standard_schemas.go
+++ b/go/adbc/standard_schemas.go
@@ -91,4 +91,12 @@ var (
 		{Name: "catalog_name", Type: arrow.BinaryTypes.String, Nullable: true},
 		{Name: "catalog_db_schemas", Type: arrow.ListOf(DBSchemaSchema), Nullable: true},
 	}, nil)
+
+	GetTableSchemaSchema = arrow.NewSchema([]arrow.Field{
+		{Name: "catalog_name", Type: arrow.BinaryTypes.String, Nullable: true},
+		{Name: "db_schema_name", Type: arrow.BinaryTypes.String, Nullable: true},
+		{Name: "table_name", Type: arrow.BinaryTypes.String},
+		{Name: "table_type", Type: arrow.BinaryTypes.String},
+		{Name: "table_schema", Type: arrow.BinaryTypes.Binary},
+	}, nil)
 )
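
The new GetTableSchemaSchema mirrors the table-metadata layout Flight SQL uses
for GetTables with included schemas: the binary table_schema column carries an
IPC-serialized Arrow schema. A sketch of filling one row, with illustrative
names and flight.SerializeSchema producing the serialized bytes:

    package main

    import (
        "fmt"

        "github.com/apache/arrow-adbc/go/adbc"
        "github.com/apache/arrow/go/v12/arrow"
        "github.com/apache/arrow/go/v12/arrow/array"
        "github.com/apache/arrow/go/v12/arrow/flight"
        "github.com/apache/arrow/go/v12/arrow/memory"
    )

    func main() {
        mem := memory.DefaultAllocator
        bldr := array.NewRecordBuilder(mem, adbc.GetTableSchemaSchema)
        defer bldr.Release()

        // the schema being described, serialized into the table_schema column
        tblSchema := arrow.NewSchema([]arrow.Field{
            {Name: "id", Type: arrow.PrimitiveTypes.Int64},
        }, nil)

        bldr.Field(0).(*array.StringBuilder).Append("my_catalog")
        bldr.Field(1).(*array.StringBuilder).Append("public")
        bldr.Field(2).(*array.StringBuilder).Append("my_table")
        bldr.Field(3).(*array.StringBuilder).Append("BASE TABLE")
        bldr.Field(4).(*array.BinaryBuilder).Append(flight.SerializeSchema(tblSchema, mem))

        rec := bldr.NewRecord()
        defer rec.Release()
        fmt.Println(rec)
    }
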
diff --git a/go/adbc/utils/utils.go b/go/adbc/utils/utils.go
new file mode 100644
index 0000000..84ba0f0
--- /dev/null
+++ b/go/adbc/utils/utils.go
@@ -0,0 +1,67 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package utils
+
+import "github.com/apache/arrow/go/v12/arrow"
+
+func RemoveSchemaMetadata(schema *arrow.Schema) *arrow.Schema {
+	fields := make([]arrow.Field, len(schema.Fields()))
+	for i, field := range schema.Fields() {
+		fields[i] = removeFieldMetadata(&field)
+	}
+	return arrow.NewSchema(fields, nil)
+}
+
+func removeFieldMetadata(field *arrow.Field) arrow.Field {
+	fieldType := field.Type
+
+	if nestedType, ok := field.Type.(arrow.NestedType); ok {
+		childFields := make([]arrow.Field, len(nestedType.Fields()))
+		for i, field := range nestedType.Fields() {
+			childFields[i] = removeFieldMetadata(&field)
+		}
+
+		switch ty := field.Type.(type) {
+		case *arrow.DenseUnionType:
+			fieldType = arrow.DenseUnionOf(childFields, ty.TypeCodes())
+		case *arrow.FixedSizeListType:
+			fieldType = arrow.FixedSizeListOfField(ty.Len(), childFields[0])
+		case *arrow.ListType:
+			fieldType = arrow.ListOfField(childFields[0])
+		case *arrow.LargeListType:
+			fieldType = arrow.LargeListOfField(childFields[0])
+		case *arrow.MapType:
+			mapType := arrow.MapOf(childFields[0].Type, childFields[1].Type)
+			mapType.KeysSorted = ty.KeysSorted
+			fieldType = mapType
+		case *arrow.SparseUnionType:
+			fieldType = arrow.SparseUnionOf(childFields, ty.TypeCodes())
+		case *arrow.StructType:
+			fieldType = arrow.StructOf(childFields...)
+		default:
+			// XXX: ignore it
+		}
+	}
+
+	return arrow.Field{
+		Name:     field.Name,
+		Type:     fieldType,
+		Nullable: field.Nullable,
+		Metadata: arrow.Metadata{},
+	}
+}
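
RemoveSchemaMetadata lets the validation suite compare schemas structurally:
drivers attach backend-specific field metadata (precision, scale, table name,
and so on), and arrow's Field.Equal is metadata-sensitive, so a direct
Schema.Equal would fail. A small usage sketch:

    package main

    import (
        "fmt"

        "github.com/apache/arrow-adbc/go/adbc/utils"
        "github.com/apache/arrow/go/v12/arrow"
    )

    func main() {
        withMD := arrow.NewSchema([]arrow.Field{{
            Name: "ints", Type: arrow.PrimitiveTypes.Int64, Nullable: true,
            Metadata: arrow.MetadataFrom(map[string]string{"PRECISION": "10"}),
        }}, nil)
        bare := arrow.NewSchema([]arrow.Field{{
            Name: "ints", Type: arrow.PrimitiveTypes.Int64, Nullable: true,
        }}, nil)

        fmt.Println(bare.Equal(withMD))                             // false: field metadata differs
        fmt.Println(bare.Equal(utils.RemoveSchemaMetadata(withMD))) // true
    }
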
diff --git a/go/adbc/validation/validation.go b/go/adbc/validation/validation.go
index c31c377..5540a9c 100644
--- a/go/adbc/validation/validation.go
+++ b/go/adbc/validation/validation.go
@@ -27,9 +27,9 @@ import (
 	"testing"
 
 	"github.com/apache/arrow-adbc/go/adbc"
+	"github.com/apache/arrow-adbc/go/adbc/utils"
 	"github.com/apache/arrow/go/v12/arrow"
 	"github.com/apache/arrow/go/v12/arrow/array"
-	"github.com/apache/arrow/go/v12/arrow/flight/flightsql"
 	"github.com/apache/arrow/go/v12/arrow/memory"
 	"github.com/stretchr/testify/suite"
 )
@@ -58,6 +58,12 @@ type DriverQuirks interface {
 	GetMetadata(adbc.InfoCode) interface{}
 	// Create a sample table from an arrow record
 	CreateSampleTable(tableName string, r arrow.Record) error
+	// Field metadata expected on the sample table's columns, for comparison
+	SampleTableSchemaMetadata(tblName string, dt arrow.DataType) arrow.Metadata
+	// Whether the driver supports bulk ingest
+	SupportsBulkIngest() bool
+	// Have the driver drop a table using the backend's SQL syntax
+	DropTable(adbc.Connection, string) error
 
 	Alloc() memory.Allocator
 }
@@ -213,8 +219,13 @@ func (c *ConnectionTests) TestMetadataGetInfo() {
 			code := codeCol.Value(i)
 
 			child := valUnion.Field(valUnion.ChildID(i))
-			// currently we only define utf8 values for metadata
-			c.Equal(c.Quirks.GetMetadata(adbc.InfoCode(code)), child.(*array.String).Value(i), adbc.InfoCode(code).String())
+			if child.IsNull(i) {
+				exp := c.Quirks.GetMetadata(adbc.InfoCode(code))
+				c.Nilf(exp, "got nil for info %s, expected: %s", adbc.InfoCode(code), exp)
+			} else {
+				// currently we only define utf8 values for metadata
+				c.Equal(c.Quirks.GetMetadata(adbc.InfoCode(code)), child.(*array.String).Value(i), adbc.InfoCode(code).String())
+			}
 		}
 	}
 }
@@ -243,13 +254,9 @@ func (c *ConnectionTests) TestMetadataGetTableSchema() {
 
 	expectedSchema := arrow.NewSchema([]arrow.Field{
 		{Name: "ints", Type: arrow.PrimitiveTypes.Int64, Nullable: true,
-			Metadata: arrow.MetadataFrom(map[string]string{
-				flightsql.ScaleKey: "15", flightsql.IsReadOnlyKey: "0", flightsql.IsAutoIncrementKey: "0",
-				flightsql.TableNameKey: "sample_test", flightsql.PrecisionKey: "10"})},
+			Metadata: c.Quirks.SampleTableSchemaMetadata("sample_test", arrow.PrimitiveTypes.Int64)},
 		{Name: "strings", Type: arrow.BinaryTypes.String, Nullable: true,
-			Metadata: arrow.MetadataFrom(map[string]string{
-				flightsql.ScaleKey: "15", flightsql.IsReadOnlyKey: "0", flightsql.IsAutoIncrementKey: "0",
-				flightsql.TableNameKey: "sample_test"})},
+			Metadata: c.Quirks.SampleTableSchemaMetadata("sample_test", arrow.BinaryTypes.String)},
 	}, nil)
 
 	c.Truef(expectedSchema.Equal(sc), "expected: %s\ngot: %s", expectedSchema, sc)
@@ -268,6 +275,88 @@ func (c *ConnectionTests) TestMetadataGetTableTypes() {
 	c.True(rdr.Next())
 }
 
+func (c *ConnectionTests) TestMetadataGetObjectsColumns() {
+	ctx := context.Background()
+	cnxn, _ := c.DB.Open(ctx)
+	defer cnxn.Close()
+
+	c.Require().NoError(c.Quirks.DropTable(cnxn, "bulk_ingest"))
+	rec, _, err := array.RecordFromJSON(c.Quirks.Alloc(), arrow.NewSchema(
+		[]arrow.Field{
+			{Name: "int64s", Type: arrow.PrimitiveTypes.Int64, Nullable: true},
+			{Name: "strings", Type: arrow.BinaryTypes.String, Nullable: true},
+		}, nil), strings.NewReader(`[
+			{"int64s": 42, "strings": "foo"},
+			{"int64s": -42, "strings": null},
+			{"int64s": null, "strings": ""}
+		]`))
+	c.Require().NoError(err)
+	defer rec.Release()
+
+	c.Require().NoError(c.Quirks.CreateSampleTable("bulk_ingest", rec))
+
+	filter := "in%"
+	tests := []struct {
+		name      string
+		filter    *string
+		colnames  []string
+		positions []int32
+	}{
+		{"no filter", nil, []string{"int64s", "strings"}, []int32{1, 2}},
+		{"filter: in%", &filter, []string{"int64s"}, []int32{1}},
+	}
+
+	for _, tt := range tests {
+		c.Run(tt.name, func() {
+			rdr, err := cnxn.GetObjects(ctx, adbc.ObjectDepthColumns, nil, nil, nil, tt.filter, nil)
+			c.Require().NoError(err)
+			defer rdr.Release()
+
+			c.Truef(adbc.GetObjectsSchema.Equal(rdr.Schema()), "expected: %s\ngot: %s", adbc.GetObjectsSchema, rdr.Schema())
+			c.True(rdr.Next())
+			rec := rdr.Record()
+			c.Greater(rec.NumRows(), int64(0))
+			var (
+				foundExpected        = false
+				catalogDbSchemasList = rec.Column(1).(*array.List)
+				catalogDbSchemas     = catalogDbSchemasList.ListValues().(*array.Struct)
+				dbSchemaTablesList   = catalogDbSchemas.Field(1).(*array.List)
+				dbSchemaTables       = dbSchemaTablesList.ListValues().(*array.Struct)
+				tableColumnsList     = dbSchemaTables.Field(2).(*array.List)
+				tableColumns         = tableColumnsList.ListValues().(*array.Struct)
+
+				colnames  = make([]string, 0)
+				positions = make([]int32, 0)
+			)
+			for row := 0; row < int(rec.NumRows()); row++ {
+				dbSchemaIdxStart, dbSchemaIdxEnd := catalogDbSchemasList.ValueOffsets(row)
+				for dbSchemaIdx := dbSchemaIdxStart; dbSchemaIdx < dbSchemaIdxEnd; dbSchemaIdx++ {
+					tblIdxStart, tblIdxEnd := dbSchemaTablesList.ValueOffsets(int(dbSchemaIdx))
+					for tblIdx := tblIdxStart; tblIdx < tblIdxEnd; tblIdx++ {
+						tableName := dbSchemaTables.Field(0).(*array.String).Value(int(tblIdx))
+
+						if strings.EqualFold("bulk_ingest", tableName) {
+							foundExpected = true
+
+							colIdxStart, colIdxEnd := tableColumnsList.ValueOffsets(int(tblIdx))
+							for colIdx := colIdxStart; colIdx < colIdxEnd; colIdx++ {
+								name := tableColumns.Field(0).(*array.String).Value(int(colIdx))
+								colnames = append(colnames, strings.ToLower(name))
+								positions = append(positions, tableColumns.Field(1).(*array.Int32).Value(int(colIdx)))
+							}
+						}
+					}
+				}
+			}
+
+			c.False(rdr.Next())
+			c.True(foundExpected)
+			c.Equal(tt.colnames, colnames)
+			c.Equal(tt.positions, positions)
+		})
+	}
+}
+
 type StatementTests struct {
 	suite.Suite
 
@@ -501,3 +590,213 @@ func (s *StatementTests) TestSqlPrepareErrorParamCountMismatch() {
 	_, _, err = stmt.ExecuteQuery(s.ctx)
 	s.Error(err)
 }
+
+func (s *StatementTests) TestSqlIngestInts() {
+	if !s.Quirks.SupportsBulkIngest() {
+		s.T().SkipNow()
+	}
+
+	s.Require().NoError(s.Quirks.DropTable(s.Cnxn, "bulk_ingest"))
+
+	schema := arrow.NewSchema([]arrow.Field{{
+		Name: "int64s", Type: arrow.PrimitiveTypes.Int64, Nullable: true}}, nil)
+
+	batchbldr := array.NewRecordBuilder(s.Quirks.Alloc(), schema)
+	defer batchbldr.Release()
+	bldr := batchbldr.Field(0).(*array.Int64Builder)
+	bldr.AppendValues([]int64{42, -42, 0}, []bool{true, true, false})
+	batch := batchbldr.NewRecord()
+	defer batch.Release()
+
+	stmt, err := s.Cnxn.NewStatement()
+	s.Require().NoError(err)
+	defer stmt.Close()
+
+	s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"))
+	s.Require().NoError(stmt.Bind(s.ctx, batch))
+
+	affected, err := stmt.ExecuteUpdate(s.ctx)
+	s.Require().NoError(err)
+	if affected != -1 && affected != 3 {
+		s.FailNowf("invalid number of affected rows", "should be -1 or 3, got: %d", affected)
+	}
+
+	// use an ORDER BY clause to ensure we get the same order as the input batch
+	s.Require().NoError(stmt.SetSqlQuery(`SELECT * FROM bulk_ingest ORDER BY "int64s" DESC NULLS LAST`))
+	rdr, rows, err := stmt.ExecuteQuery(s.ctx)
+	s.Require().NoError(err)
+	if rows != -1 && rows != 3 {
+		s.FailNowf("invalid number of returned rows", "should be -1 or 3, got: %d", rows)
+	}
+	defer rdr.Release()
+
+	s.Truef(schema.Equal(utils.RemoveSchemaMetadata(rdr.Schema())), "expected: %s\n got: %s", schema, rdr.Schema())
+	s.Require().True(rdr.Next())
+	rec := rdr.Record()
+	s.EqualValues(3, rec.NumRows())
+	s.EqualValues(1, rec.NumCols())
+
+	s.Truef(array.Equal(rec.Column(0), batch.Column(0)), "expected: %s\ngot: %s", batch.Column(0), rec.Column(0))
+
+	s.Require().False(rdr.Next())
+	s.Require().NoError(rdr.Err())
+}
+
+func (s *StatementTests) TestSqlIngestAppend() {
+	if !s.Quirks.SupportsBulkIngest() {
+		s.T().SkipNow()
+	}
+
+	s.Require().NoError(s.Quirks.DropTable(s.Cnxn, "bulk_ingest"))
+
+	schema := arrow.NewSchema([]arrow.Field{{
+		Name: "int64s", Type: arrow.PrimitiveTypes.Int64, Nullable: true}}, nil)
+
+	batchbldr := array.NewRecordBuilder(s.Quirks.Alloc(), schema)
+	defer batchbldr.Release()
+	bldr := batchbldr.Field(0).(*array.Int64Builder)
+	bldr.AppendValues([]int64{42}, []bool{true})
+	batch := batchbldr.NewRecord()
+	defer batch.Release()
+
+	// ingest and create table
+	stmt, err := s.Cnxn.NewStatement()
+	s.Require().NoError(err)
+	defer stmt.Close()
+
+	s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"))
+	s.Require().NoError(stmt.Bind(s.ctx, batch))
+
+	affected, err := stmt.ExecuteUpdate(s.ctx)
+	s.Require().NoError(err)
+	if affected != -1 && affected != 1 {
+		s.FailNowf("invalid number of affected rows", "should be -1 or 1, got: %d", affected)
+	}
+
+	// now append
+	bldr.AppendValues([]int64{-42, 0}, []bool{true, false})
+	batch2 := batchbldr.NewRecord()
+	defer batch2.Release()
+
+	s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"))
+	s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestMode, adbc.OptionValueIngestModeAppend))
+	s.Require().NoError(stmt.Bind(s.ctx, batch2))
+
+	affected, err = stmt.ExecuteUpdate(s.ctx)
+	s.Require().NoError(err)
+	if affected != -1 && affected != 2 {
+		s.FailNowf("invalid number of affected rows", "should be -1 or 2, got: %d", affected)
+	}
+
+	// use an ORDER BY clause to ensure we get the same order as the input batch
+	s.Require().NoError(stmt.SetSqlQuery(`SELECT * FROM bulk_ingest ORDER BY "int64s" DESC NULLS LAST`))
+	rdr, rows, err := stmt.ExecuteQuery(s.ctx)
+	s.Require().NoError(err)
+	if rows != -1 && rows != 3 {
+		s.FailNowf("invalid number of returned rows", "should be -1 or 3, got: %d", rows)
+	}
+	defer rdr.Release()
+
+	s.Truef(schema.Equal(utils.RemoveSchemaMetadata(rdr.Schema())), "expected: %s\n got: %s", schema, rdr.Schema())
+	s.Require().True(rdr.Next())
+	rec := rdr.Record()
+	s.EqualValues(3, rec.NumRows())
+	s.EqualValues(1, rec.NumCols())
+
+	exp, err := array.Concatenate([]arrow.Array{batch.Column(0), batch2.Column(0)}, s.Quirks.Alloc())
+	s.Require().NoError(err)
+	defer exp.Release()
+	s.Truef(array.Equal(rec.Column(0), exp), "expected: %s\ngot: %s", exp, rec.Column(0))
+
+	s.Require().False(rdr.Next())
+	s.Require().NoError(rdr.Err())
+}
+
+func (s *StatementTests) TestSqlIngestErrors() {
+	if !s.Quirks.SupportsBulkIngest() {
+		s.T().SkipNow()
+	}
+
+	stmt, err := s.Cnxn.NewStatement()
+	s.Require().NoError(err)
+	defer stmt.Close()
+
+	s.Run("ingest without bind", func() {
+		var e adbc.Error
+		s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"))
+
+		_, _, err := stmt.ExecuteQuery(s.ctx)
+		s.ErrorAs(err, &e)
+		s.Equal(adbc.StatusInvalidState, e.Code)
+	})
+
+	s.Run("append to nonexistent table", func() {
+		s.Require().NoError(s.Quirks.DropTable(s.Cnxn, "bulk_ingest"))
+		schema := arrow.NewSchema([]arrow.Field{{
+			Name: "int64s", Type: arrow.PrimitiveTypes.Int64, Nullable: true}}, nil)
+
+		batchbldr := array.NewRecordBuilder(s.Quirks.Alloc(), schema)
+		defer batchbldr.Release()
+		bldr := batchbldr.Field(0).(*array.Int64Builder)
+		bldr.AppendValues([]int64{42, -42, 0}, []bool{true, true, false})
+		batch := batchbldr.NewRecord()
+		defer batch.Release()
+
+		s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"))
+		s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestMode, adbc.OptionValueIngestModeAppend))
+		s.Require().NoError(stmt.Bind(s.ctx, batch))
+
+		var e adbc.Error
+		_, _, err := stmt.ExecuteQuery(s.ctx)
+		s.ErrorAs(err, &e)
+		s.NotEqual(adbc.StatusOK, e.Code)
+		// SQLSTATE 42S02 == table or view not found
+		s.Equal([5]byte{'4', '2', 'S', '0', '2'}, e.SqlState)
+	})
+
+	s.Run("overwrite and incompatible schema", func() {
+		s.Require().NoError(s.Quirks.DropTable(s.Cnxn, "bulk_ingest"))
+		schema := arrow.NewSchema([]arrow.Field{{
+			Name: "int64s", Type: arrow.PrimitiveTypes.Int64, Nullable: true}}, nil)
+
+		batchbldr := array.NewRecordBuilder(s.Quirks.Alloc(), schema)
+		defer batchbldr.Release()
+		bldr := batchbldr.Field(0).(*array.Int64Builder)
+		bldr.AppendValues([]int64{42, -42, 0}, []bool{true, true, false})
+		batch := batchbldr.NewRecord()
+		defer batch.Release()
+
+		s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"))
+		s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestMode, adbc.OptionValueIngestModeCreate))
+		s.Require().NoError(stmt.Bind(s.ctx, batch))
+
+		// create it
+		_, err := stmt.ExecuteUpdate(s.ctx)
+		s.Require().NoError(err)
+
+		// error if we try to create again
+		s.Require().NoError(stmt.Bind(s.ctx, batch))
+
+		var e adbc.Error
+		_, err = stmt.ExecuteUpdate(s.ctx)
+		s.ErrorAs(err, &e)
+		s.Equal(adbc.StatusInternal, e.Code)
+
+		// try to append an incompatible schema
+		schema, _ = schema.AddField(1, arrow.Field{Name: "coltwo", Type: arrow.PrimitiveTypes.Int64, Nullable: true})
+		batchbldr = array.NewRecordBuilder(s.Quirks.Alloc(), schema)
+		defer batchbldr.Release()
+		batchbldr.Field(0).AppendNull()
+		batchbldr.Field(1).AppendNull()
+		batch = batchbldr.NewRecord()
+		defer batch.Release()
+
+		s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestTargetTable, "bulk_ingest"))
+		s.Require().NoError(stmt.SetOption(adbc.OptionKeyIngestMode, adbc.OptionValueIngestModeAppend))
+		s.Require().NoError(stmt.Bind(s.ctx, batch))
+
+		_, err = stmt.ExecuteUpdate(s.ctx)
+		s.ErrorAs(err, &e)
+		s.NotEqual(adbc.StatusOK, e.Code)
+	})
+}
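
The ingest tests above all drive the same statement-level protocol: set
OptionKeyIngestTargetTable (plus OptionKeyIngestMode for create/append
semantics), Bind an Arrow record, then ExecuteUpdate. Condensed into a helper,
assuming an already-open connection:

    package example

    import (
        "context"

        "github.com/apache/arrow-adbc/go/adbc"
        "github.com/apache/arrow/go/v12/arrow"
    )

    // IngestCreate creates table tbl from rec via ADBC bulk ingestion. Drivers
    // may report -1 when the affected-row count is unknown, which is why the
    // tests accept either -1 or the exact count.
    func IngestCreate(ctx context.Context, cnxn adbc.Connection, tbl string, rec arrow.Record) (int64, error) {
        stmt, err := cnxn.NewStatement()
        if err != nil {
            return -1, err
        }
        defer stmt.Close()

        if err := stmt.SetOption(adbc.OptionKeyIngestTargetTable, tbl); err != nil {
            return -1, err
        }
        if err := stmt.SetOption(adbc.OptionKeyIngestMode, adbc.OptionValueIngestModeCreate); err != nil {
            return -1, err
        }
        if err := stmt.Bind(ctx, rec); err != nil {
            return -1, err
        }
        return stmt.ExecuteUpdate(ctx)
    }
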
diff --git a/license.tpl b/license.tpl
new file mode 100644
index 0000000..00a484b
--- /dev/null
+++ b/license.tpl
@@ -0,0 +1,321 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+--------------------------------------------------------------------------------
+
+This project includes code from Apache Arrow Nanoarrow.
+
+* c/vendor/nanoarrow is the source of nanoarrow
+
+Copyright: 2022 The Apache Software Foundation.
+Home page: https://arrow.apache.org/
+License: http://www.apache.org/licenses/LICENSE-2.0
+
+--------------------------------------------------------------------------------
+
+The files python/*/*/_version.py and python/*/*/_static_version.py
+contain code from
+
+https://github.com/jbweston/miniver
+
+which is made available under the Creative Commons CC0 license.
+
+--------------------------------------------------------------------------------
+
+The files under ci/conda/.ci-support have the following license
+
+BSD 3-clause license
+Copyright (c) 2015-2022, conda-forge
+All rights reserved.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency Go is statically linked in certain binary distributions,
+like the Python wheels. The Go project is under the BSD 3-clause license +
+PATENTS weak patent termination clause
+(https://github.com/golang/go/blob/master/PATENTS).
+
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency libpq is statically linked in certain binary
+distributions, like the Python wheels. libpq has the following license:
+
+Portions Copyright © 1996-2022, The PostgreSQL Global Development Group
+
+Portions Copyright © 1994, The Regents of the University of California
+
+Permission to use, copy, modify, and distribute this software and its
+documentation for any purpose, without fee, and without a written
+agreement is hereby granted, provided that the above copyright notice
+and this paragraph and the following two paragraphs appear in all
+copies.
+
+IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY
+FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES,
+INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND
+ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN
+ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,
+INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE
+PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF
+CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT,
+UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency OpenSSL is statically linked in certain binary
+distributions, like the Python wheels. OpenSSL version 3 has the
+following license:
+
+Copyright 1995-2021 The OpenSSL Project Authors. All Rights Reserved.
+
+Licensed under the Apache License 2.0 (the "License").  You may not use
+this file except in compliance with the License.  You can obtain a copy
+in the file LICENSE in the source distribution or at
+https://www.openssl.org/source/license.html
+
+--------------------------------------------------------------------------------
+
+3rdparty dependency SQLite is statically linked in certain binary
+distributions, like the Python wheels. SQLite is public domain.
+
+{{ range .}}
+--------------------------------------------------------------------------------
+
+3rdparty dependency {{ .Name }}
+is statically linked in certain binary distributions, like the Python wheels.
+{{ .Name }} is under the {{ .LicenseName }} license.
+{{ if ne .LicenseName "Apache-2.0" -}}
+{{.LicenseText}}
+{{- end }}
+{{- end }}
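
The trailing {{ range . }} block makes this a Go text/template rather than a
static license file: the packaging tooling presumably executes it with one
entry per vendored dependency, emitting the full license text only for
non-Apache-2.0 licenses. A hypothetical rendering sketch (the depInfo struct
name and sample values are assumptions; only the Name, LicenseName, and
LicenseText fields are dictated by the template itself):

    package main

    import (
        "os"
        "text/template"
    )

    // depInfo mirrors the fields the template dereferences; the struct used
    // by the actual release tooling may be named and populated differently.
    type depInfo struct {
        Name        string
        LicenseName string
        LicenseText string
    }

    func main() {
        tpl := template.Must(template.ParseFiles("license.tpl"))
        deps := []depInfo{
            {Name: "example.com/somedep", LicenseName: "MIT", LicenseText: "sample license text"},
        }
        if err := tpl.Execute(os.Stdout, deps); err != nil {
            panic(err)
        }
    }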
diff --git a/python/adbc_driver_flightsql/adbc_driver_flightsql/__init__.py b/python/adbc_driver_flightsql/adbc_driver_flightsql/__init__.py
index f4e9ddf..898d8e1 100644
--- a/python/adbc_driver_flightsql/adbc_driver_flightsql/__init__.py
+++ b/python/adbc_driver_flightsql/adbc_driver_flightsql/__init__.py
@@ -103,7 +103,7 @@ class StatementOptions(enum.Enum):
     #: The number of batches to queue per partition. Defaults to 5.
     #:
     #: This controls how much we read ahead on result sets.
-    QUEUE_SIZE = "adbc.flight.sql.rpc.queue_size"
+    QUEUE_SIZE = "adbc.rpc.result_queue_size"
     #: Add an arbitrary header to all outgoing requests.
     #:
     #: This option should prefix the name of the header to add
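
The rename drops the Flight SQL-specific prefix in favor of a
driver-agnostic key, presumably so the Snowflake driver's record reader can
honor the same read-ahead option. A hedged Go-side sketch (the raw string is
taken verbatim from the diff; a named constant for it may exist in the Go
packages, and "10" is an arbitrary illustrative value against the documented
default of 5):

    package example

    import "github.com/apache/arrow-adbc/go/adbc"

    // setQueueSize tunes how many batches are read ahead per partition by
    // setting the driver-agnostic result queue option on a statement.
    func setQueueSize(stmt adbc.Statement) error {
        return stmt.SetOption("adbc.rpc.result_queue_size", "10")
    }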