Posted to commits@arrow.apache.org by ko...@apache.org on 2018/12/06 03:26:33 UTC

[arrow] branch master updated: ARROW-3209: [C++] Rename libarrow_gpu to libarrow_cuda

This is an automated email from the ASF dual-hosted git repository.

kou pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/arrow.git


The following commit(s) were added to refs/heads/master by this push:
     new 898e06c  ARROW-3209: [C++] Rename libarrow_gpu to libarrow_cuda
898e06c is described below

commit 898e06c957096009f0470c1bb9441ecb1915745f
Author: Kouhei Sutou <ko...@clear-code.com>
AuthorDate: Thu Dec 6 12:26:21 2018 +0900

    ARROW-3209: [C++] Rename libarrow_gpu to libarrow_cuda
    
    Also rename the arrow::gpu namespace to arrow::cuda.
    
    Author: Kouhei Sutou <ko...@clear-code.com>
    Author: Antoine Pitrou <an...@python.org>
    
    Closes #3088 from pitrou/ARROW-3209-rename-arrow-gpu-to-cuda and squashes the following commits:
    
    d16eb713 <Kouhei Sutou>  Support arrow-cuda.pc in build directory
    32381216 <Kouhei Sutou>  Fix path
    219ffa28 <Kouhei Sutou>  Rename GPU to CUDA
    f8683cac <Kouhei Sutou>  Rename GPU to CUDA
    c4f7f381 <Kouhei Sutou>  Rename red-arrow-gpu to red-arrow-cuda
    267de1b8 <Kouhei Sutou>  Rename libarrow-gpu-glib to libarrow-cuda-glib
    9687346a <Antoine Pitrou> ARROW-3209:  Rename libarrow_gpu to libarrow_cuda
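
    For downstream code the change is a mechanical rename. As an
    illustrative sketch (not part of this commit), a hypothetical
    migration helper could apply the same substitutions to user source;
    the mapping below is inferred from the renames in the diff:

    ```python
    # Hypothetical migration helper (not shipped with Arrow): applies the
    # commit's renames to downstream source text. The mapping is inferred
    # from the diff; extend it as needed for your own code base.
    RENAMES = [
        ("arrow::gpu", "arrow::cuda"),          # C++ namespace
        ("libarrow_gpu", "libarrow_cuda"),      # C++ library name
        ("arrow-gpu-glib", "arrow-cuda-glib"),  # GLib library / pkg-config name
        ("GArrowGPU", "GArrowCUDA"),            # GObject type prefix
        ("garrow_gpu_", "garrow_cuda_"),        # C symbol prefix
    ]

    def migrate(source: str) -> str:
        """Rewrite old GPU-era identifiers to their CUDA equivalents."""
        for old, new in RENAMES:
            source = source.replace(old, new)
        return source

    print(migrate("#include <arrow-gpu-glib/cuda.h>"))
    # -> #include <arrow-cuda-glib/cuda.h>
    ```

    A plain string replacement like this is enough here because the old
    names do not overlap with the new ones.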
---
 c_glib/.gitignore                                  |   2 +-
 c_glib/Makefile.am                                 |   2 +-
 .../Makefile.am                                    |  80 +-
 .../arrow-cuda-glib.h}                             |   2 +-
 .../arrow-cuda-glib.hpp}                           |   2 +-
 .../arrow-cuda-glib.pc.in}                         |   8 +-
 c_glib/arrow-cuda-glib/cuda.cpp                    | 942 +++++++++++++++++++++
 c_glib/arrow-cuda-glib/cuda.h                      | 182 ++++
 c_glib/arrow-cuda-glib/cuda.hpp                    |  54 ++
 c_glib/arrow-cuda-glib/meson.build                 |  79 ++
 c_glib/arrow-gpu-glib/cuda.cpp                     | 942 ---------------------
 c_glib/arrow-gpu-glib/cuda.h                       | 183 ----
 c_glib/arrow-gpu-glib/cuda.hpp                     |  54 --
 c_glib/arrow-gpu-glib/meson.build                  |  79 --
 c_glib/configure.ac                                |  50 +-
 c_glib/doc/arrow-glib/Makefile.am                  |  10 +-
 c_glib/doc/arrow-glib/meson.build                  |   8 +-
 c_glib/doc/plasma-glib/Makefile.am                 |  10 +-
 c_glib/doc/plasma-glib/meson.build                 |   4 +-
 c_glib/meson.build                                 |  14 +-
 c_glib/plasma-glib/Makefile.am                     |  67 +-
 c_glib/plasma-glib/client.cpp                      |  32 +-
 c_glib/plasma-glib/meson.build                     |  14 +-
 c_glib/test/plasma/test-plasma-client.rb           |   2 +-
 c_glib/test/run-test.rb                            |   2 +-
 c_glib/test/run-test.sh                            |   2 +-
 c_glib/test/{test-gpu-cuda.rb => test-cuda.rb}     |  32 +-
 cpp/CMakeLists.txt                                 |   4 +-
 cpp/README.md                                      |  10 +-
 cpp/cmake_modules/FindArrowCuda.cmake              |  12 +-
 cpp/src/arrow/CMakeLists.txt                       |   4 +-
 cpp/src/arrow/gpu/CMakeLists.txt                   |  34 +-
 .../gpu/{arrow-gpu.pc.in => arrow-cuda.pc.in}      |   6 +-
 cpp/src/arrow/gpu/cuda-benchmark.cc                |   4 +-
 cpp/src/arrow/gpu/cuda-test.cc                     |   6 +-
 cpp/src/arrow/gpu/cuda_arrow_ipc.cc                |   4 +-
 cpp/src/arrow/gpu/cuda_arrow_ipc.h                 |   4 +-
 cpp/src/arrow/gpu/cuda_common.h                    |   4 +-
 cpp/src/arrow/gpu/cuda_context.cc                  |   5 +-
 cpp/src/arrow/gpu/cuda_context.h                   |   4 +-
 cpp/src/arrow/gpu/cuda_memory.cc                   |   4 +-
 cpp/src/arrow/gpu/cuda_memory.h                    |   4 +-
 cpp/src/plasma/CMakeLists.txt                      |   8 +-
 cpp/src/plasma/client.cc                           |  24 +-
 cpp/src/plasma/common.h                            |   6 +-
 cpp/src/plasma/plasma.h                            |   6 +-
 cpp/src/plasma/protocol.cc                         |  14 +-
 cpp/src/plasma/store.cc                            |  18 +-
 cpp/src/plasma/store.h                             |   4 +-
 cpp/src/plasma/test/client_tests.cc                |  10 +-
 dev/release/rat_exclude_files.txt                  |  10 +-
 dev/tasks/linux-packages/debian/control            |  38 +-
 .../debian/gir1.2-arrow-cuda-1.0.install           |   1 +
 .../debian/gir1.2-arrow-gpu-1.0.install            |   1 -
 .../debian/libarrow-cuda-dev.install               |   3 +
 .../debian/libarrow-cuda-glib-dev.install          |   5 +
 .../debian/libarrow-cuda-glib12.install            |   1 +
 .../linux-packages/debian/libarrow-cuda12.install  |   1 +
 .../linux-packages/debian/libarrow-gpu-dev.install |   3 -
 .../debian/libarrow-gpu-glib-dev.install           |   5 -
 .../debian/libarrow-gpu-glib12.install             |   1 -
 .../linux-packages/debian/libarrow-gpu12.install   |   1 -
 dev/tasks/linux-packages/debian/rules              |   2 +-
 dev/tasks/tasks.yml                                |  44 +-
 python/CMakeLists.txt                              |   9 +-
 python/pyarrow/includes/libarrow_cuda.pxd          |  25 +-
 ruby/README.md                                     |  10 +-
 ruby/{red-arrow-gpu => red-arrow-cuda}/.gitignore  |   2 +-
 ruby/{red-arrow-gpu => red-arrow-cuda}/Gemfile     |   0
 ruby/{red-arrow-gpu => red-arrow-cuda}/LICENSE.txt |   0
 ruby/{red-arrow-gpu => red-arrow-cuda}/NOTICE.txt  |   0
 ruby/red-arrow-cuda/README.md                      |  62 ++
 ruby/{red-arrow-gpu => red-arrow-cuda}/Rakefile    |   0
 .../dependency-check/Rakefile                      |   6 +-
 .../lib/arrow-cuda.rb}                             |   6 +-
 .../lib/arrow-cuda/device-manager.rb}              |   4 +-
 .../lib/arrow-cuda}/loader.rb                      |   6 +-
 .../red-arrow-cuda.gemspec}                        |  12 +-
 .../test/helper.rb                                 |   2 +-
 .../test/run-test.rb                               |   0
 .../test/test-cuda.rb                              |   6 +-
 ruby/{red-arrow-gpu => red-arrow-cuda}/version.rb  |   6 +-
 ruby/red-arrow-gpu/README.md                       |  62 --
 83 files changed, 1699 insertions(+), 1692 deletions(-)
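
 Most of the diffstat above consists of pairwise renames written in
 git's `{old => new}` shorthand. As an illustrative sketch (not part of
 the commit), a few lines of Python can expand that notation back into
 full before/after path pairs:

 ```python
 import re

 def expand_rename(path: str) -> tuple[str, str]:
     """Expand git's '{old => new}' diffstat notation into a
     (before, after) path pair; paths without braces are unchanged."""
     m = re.search(r"\{(.*) => (.*)\}", path)
     if m is None:
         return path, path
     old = path[:m.start()] + m.group(1) + path[m.end():]
     new = path[:m.start()] + m.group(2) + path[m.end():]
     # Git collapses the doubled '/' left by an empty brace side.
     return old.replace("//", "/"), new.replace("//", "/")

 print(expand_rename("c_glib/{arrow-gpu-glib => arrow-cuda-glib}/Makefile.am"))
 # -> ('c_glib/arrow-gpu-glib/Makefile.am', 'c_glib/arrow-cuda-glib/Makefile.am')
 ```

 Note that git also truncates long paths with a leading `...` in the
 diffstat, so some entries above show only the tail of the renamed path.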

diff --git a/c_glib/.gitignore b/c_glib/.gitignore
index cc7a193..18f952e 100644
--- a/c_glib/.gitignore
+++ b/c_glib/.gitignore
@@ -51,12 +51,12 @@ Makefile.in
 /libtool
 /m4/
 /stamp-h1
+/arrow-cuda-glib/*.pc
 /arrow-glib/enums.c
 /arrow-glib/enums.h
 /arrow-glib/stamp-*
 /arrow-glib/version.h
 /arrow-glib/*.pc
-/arrow-gpu-glib/*.pc
 /gandiva-glib/*.pc
 /parquet-glib/*.pc
 /plasma-glib/*.pc
diff --git a/c_glib/Makefile.am b/c_glib/Makefile.am
index d21555e..149894c 100644
--- a/c_glib/Makefile.am
+++ b/c_glib/Makefile.am
@@ -19,7 +19,7 @@ ACLOCAL_AMFLAGS = -I m4 ${ACLOCAL_FLAGS}
 
 SUBDIRS =					\
 	arrow-glib				\
-	arrow-gpu-glib				\
+	arrow-cuda-glib				\
 	gandiva-glib				\
 	parquet-glib				\
 	plasma-glib				\
diff --git a/c_glib/arrow-gpu-glib/Makefile.am b/c_glib/arrow-cuda-glib/Makefile.am
similarity index 64%
rename from c_glib/arrow-gpu-glib/Makefile.am
rename to c_glib/arrow-cuda-glib/Makefile.am
index a124903..2e3848d 100644
--- a/c_glib/arrow-gpu-glib/Makefile.am
+++ b/c_glib/arrow-cuda-glib/Makefile.am
@@ -24,51 +24,51 @@ AM_CPPFLAGS =					\
 	-I$(top_builddir)			\
 	-I$(top_srcdir)
 
-if HAVE_ARROW_GPU
+if HAVE_ARROW_CUDA
 lib_LTLIBRARIES =				\
-	libarrow-gpu-glib.la
+	libarrow-cuda-glib.la
 
-libarrow_gpu_glib_la_CXXFLAGS =			\
+libarrow_cuda_glib_la_CXXFLAGS =		\
 	$(GLIB_CFLAGS)				\
 	$(ARROW_CFLAGS)				\
-	$(ARROW_GPU_CFLAGS)			\
+	$(ARROW_CUDA_CFLAGS)			\
 	$(GARROW_CXXFLAGS)
 
-libarrow_gpu_glib_la_LDFLAGS =			\
+libarrow_cuda_glib_la_LDFLAGS =			\
 	-version-info $(LT_VERSION_INFO)	\
 	-no-undefined
 
-libarrow_gpu_glib_la_LIBADD =			\
+libarrow_cuda_glib_la_LIBADD =			\
 	$(GLIB_LIBS)				\
 	$(ARROW_LIBS)				\
-	$(ARROW_GPU_LIBS)			\
+	$(ARROW_CUDA_LIBS)			\
 	../arrow-glib/libarrow-glib.la
 
-libarrow_gpu_glib_la_headers =			\
-	arrow-gpu-glib.h			\
+libarrow_cuda_glib_la_headers =			\
+	arrow-cuda-glib.h			\
 	cuda.h
 
-libarrow_gpu_glib_la_sources =			\
+libarrow_cuda_glib_la_sources =			\
 	cuda.cpp				\
-	$(libarrow_gpu_glib_la_headers)
+	$(libarrow_cuda_glib_la_headers)
 
-libarrow_gpu_glib_la_cpp_headers =		\
-	arrow-gpu-glib.hpp			\
+libarrow_cuda_glib_la_cpp_headers =		\
+	arrow-cuda-glib.hpp			\
 	cuda.hpp
 
-libarrow_gpu_glib_la_SOURCES =			\
-	$(libarrow_gpu_glib_la_sources)		\
-	$(libarrow_gpu_glib_la_cpp_headers)
+libarrow_cuda_glib_la_SOURCES =			\
+	$(libarrow_cuda_glib_la_sources)	\
+	$(libarrow_cuda_glib_la_cpp_headers)
 
-arrow_gpu_glib_includedir =			\
-	$(includedir)/arrow-gpu-glib
-arrow_gpu_glib_include_HEADERS =		\
-	$(libarrow_gpu_glib_la_headers)		\
-	$(libarrow_gpu_glib_la_cpp_headers)
+arrow_cuda_glib_includedir =			\
+	$(includedir)/arrow-cuda-glib
+arrow_cuda_glib_include_HEADERS =		\
+	$(libarrow_cuda_glib_la_headers)	\
+	$(libarrow_cuda_glib_la_cpp_headers)
 
 pkgconfigdir = $(libdir)/pkgconfig
 pkgconfig_DATA =				\
-	arrow-gpu-glib.pc
+	arrow-cuda-glib.pc
 
 if HAVE_INTROSPECTION
 -include $(INTROSPECTION_MAKEFILE)
@@ -85,39 +85,39 @@ endif
 INTROSPECTION_COMPILER_ARGS =			\
 	--includedir=$(abs_builddir)/../arrow-glib
 
-ArrowGPU-1.0.gir: libarrow-gpu-glib.la
-ArrowGPU_1_0_gir_PACKAGES =			\
+ArrowCUDA-1.0.gir: libarrow-cuda-glib.la
+ArrowCUDA_1_0_gir_PACKAGES =			\
 	arrow-glib
-ArrowGPU_1_0_gir_EXPORT_PACKAGES =		\
-	arrow-gpu-glib
-ArrowGPU_1_0_gir_INCLUDES =			\
+ArrowCUDA_1_0_gir_EXPORT_PACKAGES =		\
+	arrow-cuda-glib
+ArrowCUDA_1_0_gir_INCLUDES =			\
 	Arrow-1.0
-ArrowGPU_1_0_gir_CFLAGS =			\
+ArrowCUDA_1_0_gir_CFLAGS =			\
 	$(AM_CPPFLAGS)
-ArrowGPU_1_0_gir_LIBS =
-ArrowGPU_1_0_gir_FILES =			\
-	$(libarrow_gpu_glib_la_sources)
-ArrowGPU_1_0_gir_SCANNERFLAGS =					\
+ArrowCUDA_1_0_gir_LIBS =
+ArrowCUDA_1_0_gir_FILES =			\
+	$(libarrow_cuda_glib_la_sources)
+ArrowCUDA_1_0_gir_SCANNERFLAGS =				\
 	--library-path=$(ARROW_LIB_DIR)				\
 	--warn-all						\
 	--add-include-path=$(abs_builddir)/../arrow-glib	\
-	--identifier-prefix=GArrowGPU				\
-	--symbol-prefix=garrow_gpu
+	--identifier-prefix=GArrowCUDA				\
+	--symbol-prefix=garrow_cuda
 if OS_MACOS
-ArrowGPU_1_0_gir_LIBS +=			\
+ArrowCUDA_1_0_gir_LIBS +=			\
 	 arrow-glib				\
-	 arrow-gpu-glib
-ArrowGPU_1_0_gir_SCANNERFLAGS +=				\
+	 arrow-cuda-glib
+ArrowCUDA_1_0_gir_SCANNERFLAGS +=				\
 	--no-libtool						\
 	--library-path=$(abs_builddir)/../arrow-glib/.libs	\
 	--library-path=$(abs_builddir)/.libs
 else
-ArrowGPU_1_0_gir_LIBS +=				\
+ArrowCUDA_1_0_gir_LIBS +=				\
 	$(abs_builddir)/../arrow-glib/libarrow-glib.la	\
-	libarrow-gpu-glib.la
+	libarrow-cuda-glib.la
 endif
 
-INTROSPECTION_GIRS += ArrowGPU-1.0.gir
+INTROSPECTION_GIRS += ArrowCUDA-1.0.gir
 
 girdir = $(datadir)/gir-1.0
 gir_DATA = $(INTROSPECTION_GIRS)
diff --git a/c_glib/arrow-gpu-glib/arrow-gpu-glib.h b/c_glib/arrow-cuda-glib/arrow-cuda-glib.h
similarity index 96%
rename from c_glib/arrow-gpu-glib/arrow-gpu-glib.h
rename to c_glib/arrow-cuda-glib/arrow-cuda-glib.h
index 1538c9a..b3c7f21 100644
--- a/c_glib/arrow-gpu-glib/arrow-gpu-glib.h
+++ b/c_glib/arrow-cuda-glib/arrow-cuda-glib.h
@@ -21,4 +21,4 @@
 
 #include <arrow-glib/arrow-glib.h>
 
-#include <arrow-gpu-glib/cuda.h>
+#include <arrow-cuda-glib/cuda.h>
diff --git a/c_glib/arrow-gpu-glib/arrow-gpu-glib.hpp b/c_glib/arrow-cuda-glib/arrow-cuda-glib.hpp
similarity index 95%
rename from c_glib/arrow-gpu-glib/arrow-gpu-glib.hpp
rename to c_glib/arrow-cuda-glib/arrow-cuda-glib.hpp
index 92017d8..e79b43a 100644
--- a/c_glib/arrow-gpu-glib/arrow-gpu-glib.hpp
+++ b/c_glib/arrow-cuda-glib/arrow-cuda-glib.hpp
@@ -21,4 +21,4 @@
 
 #include <arrow-glib/arrow-glib.hpp>
 
-#include <arrow-gpu-glib/cuda.hpp>
+#include <arrow-cuda-glib/cuda.hpp>
diff --git a/c_glib/arrow-gpu-glib/arrow-gpu-glib.pc.in b/c_glib/arrow-cuda-glib/arrow-cuda-glib.pc.in
similarity index 85%
rename from c_glib/arrow-gpu-glib/arrow-gpu-glib.pc.in
rename to c_glib/arrow-cuda-glib/arrow-cuda-glib.pc.in
index 38a6bae..de0ce97 100644
--- a/c_glib/arrow-gpu-glib/arrow-gpu-glib.pc.in
+++ b/c_glib/arrow-cuda-glib/arrow-cuda-glib.pc.in
@@ -20,9 +20,9 @@ exec_prefix=@exec_prefix@
 libdir=@libdir@
 includedir=@includedir@
 
-Name: Apache Arrow GPU GLib
-Description: C API for Apache Arrow GPU based on GLib
+Name: Apache Arrow CUDA GLib
+Description: C API for Apache Arrow CUDA based on GLib
 Version: @VERSION@
-Libs: -L${libdir} -larrow-gpu-glib
+Libs: -L${libdir} -larrow-cuda-glib
 Cflags: -I${includedir}
-Requires: arrow-glib
+Requires: arrow-glib arrow-cuda
diff --git a/c_glib/arrow-cuda-glib/cuda.cpp b/c_glib/arrow-cuda-glib/cuda.cpp
new file mode 100644
index 0000000..3f82f8f
--- /dev/null
+++ b/c_glib/arrow-cuda-glib/cuda.cpp
@@ -0,0 +1,942 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifdef HAVE_CONFIG_H
+#  include <config.h>
+#endif
+
+#include <arrow-glib/buffer.hpp>
+#include <arrow-glib/error.hpp>
+#include <arrow-glib/input-stream.hpp>
+#include <arrow-glib/output-stream.hpp>
+#include <arrow-glib/readable.hpp>
+#include <arrow-glib/record-batch.hpp>
+#include <arrow-glib/schema.hpp>
+
+#include <arrow-cuda-glib/cuda.hpp>
+
+G_BEGIN_DECLS
+
+/**
+ * SECTION: cuda
+ * @section_id: cuda-classes
+ * @title: CUDA related classes
+ * @include: arrow-cuda-glib/arrow-cuda-glib.h
+ *
+ * The following classes provide CUDA support for Apache Arrow data.
+ *
+ * #GArrowCUDADeviceManager is the starting point. You need at
+ * least one #GArrowCUDAContext to process Apache Arrow data on
+ * NVIDIA GPU.
+ *
+ * #GArrowCUDAContext is a class to keep context for one GPU. You
+ * need to create #GArrowCUDAContext for each GPU that you want to
+ * use. You can create #GArrowCUDAContext by
+ * garrow_cuda_device_manager_get_context().
+ *
+ * #GArrowCUDABuffer is a class for data on GPU. You can copy data
+ * on GPU to/from CPU by garrow_cuda_buffer_copy_to_host() and
+ * garrow_cuda_buffer_copy_from_host(). You can share data on GPU
+ * with other processes by garrow_cuda_buffer_export() and
+ * garrow_cuda_buffer_new_ipc().
+ *
+ * #GArrowCUDAHostBuffer is a class for data on CPU that is
+ * directly accessible from GPU.
+ *
+ * #GArrowCUDAIPCMemoryHandle is a class to share data on GPU with
+ * other processes. You can export your data on GPU to other processes
+ * by garrow_cuda_buffer_export() and
+ * garrow_cuda_ipc_memory_handle_new(). You can import other
+ * process data on GPU by garrow_cuda_ipc_memory_handle_new() and
+ * garrow_cuda_buffer_new_ipc().
+ *
+ * #GArrowCUDABufferInputStream is a class to read data in
+ * #GArrowCUDABuffer.
+ *
+ * #GArrowCUDABufferOutputStream is a class to write data into
+ * #GArrowCUDABuffer.
+ */
+
+G_DEFINE_TYPE(GArrowCUDADeviceManager,
+              garrow_cuda_device_manager,
+              G_TYPE_OBJECT)
+
+static void
+garrow_cuda_device_manager_init(GArrowCUDADeviceManager *object)
+{
+}
+
+static void
+garrow_cuda_device_manager_class_init(GArrowCUDADeviceManagerClass *klass)
+{
+}
+
+/**
+ * garrow_cuda_device_manager_new:
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: A newly created #GArrowCUDADeviceManager on success,
+ *   %NULL on error.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDADeviceManager *
+garrow_cuda_device_manager_new(GError **error)
+{
+  arrow::cuda::CudaDeviceManager *manager;
+  auto status = arrow::cuda::CudaDeviceManager::GetInstance(&manager);
+  if (garrow_error_check(error, status, "[cuda][device-manager][new]")) {
+    auto manager = g_object_new(GARROW_CUDA_TYPE_DEVICE_MANAGER,
+                                NULL);
+    return GARROW_CUDA_DEVICE_MANAGER(manager);
+  } else {
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_device_manager_get_context:
+ * @manager: A #GArrowCUDADeviceManager.
+ * @gpu_number: A GPU device number for the target context.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created #GArrowCUDAContext on
+ *   success, %NULL on error. Contexts for the same GPU device number
+ *   share the same data internally.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDAContext *
+garrow_cuda_device_manager_get_context(GArrowCUDADeviceManager *manager,
+                                       gint gpu_number,
+                                       GError **error)
+{
+  arrow::cuda::CudaDeviceManager *arrow_manager;
+  arrow::cuda::CudaDeviceManager::GetInstance(&arrow_manager);
+  std::shared_ptr<arrow::cuda::CudaContext> context;
+  auto status = arrow_manager->GetContext(gpu_number, &context);
+  if (garrow_error_check(error, status,
+                         "[cuda][device-manager][get-context]]")) {
+    return garrow_cuda_context_new_raw(&context);
+  } else {
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_device_manager_get_n_devices:
+ * @manager: A #GArrowCUDADeviceManager.
+ *
+ * Returns: The number of GPU devices.
+ *
+ * Since: 0.8.0
+ */
+gsize
+garrow_cuda_device_manager_get_n_devices(GArrowCUDADeviceManager *manager)
+{
+  arrow::cuda::CudaDeviceManager *arrow_manager;
+  arrow::cuda::CudaDeviceManager::GetInstance(&arrow_manager);
+  return arrow_manager->num_devices();
+}
+
+
+typedef struct GArrowCUDAContextPrivate_ {
+  std::shared_ptr<arrow::cuda::CudaContext> context;
+} GArrowCUDAContextPrivate;
+
+enum {
+  PROP_CONTEXT = 1
+};
+
+G_DEFINE_TYPE_WITH_PRIVATE(GArrowCUDAContext,
+                           garrow_cuda_context,
+                           G_TYPE_OBJECT)
+
+#define GARROW_CUDA_CONTEXT_GET_PRIVATE(object) \
+  static_cast<GArrowCUDAContextPrivate *>(      \
+    garrow_cuda_context_get_instance_private(   \
+      GARROW_CUDA_CONTEXT(object)))
+
+static void
+garrow_cuda_context_finalize(GObject *object)
+{
+  auto priv = GARROW_CUDA_CONTEXT_GET_PRIVATE(object);
+
+  priv->context = nullptr;
+
+  G_OBJECT_CLASS(garrow_cuda_context_parent_class)->finalize(object);
+}
+
+static void
+garrow_cuda_context_set_property(GObject *object,
+                                 guint prop_id,
+                                 const GValue *value,
+                                 GParamSpec *pspec)
+{
+  auto priv = GARROW_CUDA_CONTEXT_GET_PRIVATE(object);
+
+  switch (prop_id) {
+  case PROP_CONTEXT:
+    priv->context =
+      *static_cast<std::shared_ptr<arrow::cuda::CudaContext> *>(g_value_get_pointer(value));
+    break;
+  default:
+    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
+    break;
+  }
+}
+
+static void
+garrow_cuda_context_get_property(GObject *object,
+                                 guint prop_id,
+                                 GValue *value,
+                                 GParamSpec *pspec)
+{
+  switch (prop_id) {
+  default:
+    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
+    break;
+  }
+}
+
+static void
+garrow_cuda_context_init(GArrowCUDAContext *object)
+{
+}
+
+static void
+garrow_cuda_context_class_init(GArrowCUDAContextClass *klass)
+{
+  GParamSpec *spec;
+
+  auto gobject_class = G_OBJECT_CLASS(klass);
+
+  gobject_class->finalize     = garrow_cuda_context_finalize;
+  gobject_class->set_property = garrow_cuda_context_set_property;
+  gobject_class->get_property = garrow_cuda_context_get_property;
+
+  /**
+   * GArrowCUDAContext:context:
+   *
+   * Since: 0.8.0
+   */
+  spec = g_param_spec_pointer("context",
+                              "Context",
+                              "The raw std::shared_ptr<arrow::cuda::CudaContext>",
+                              static_cast<GParamFlags>(G_PARAM_WRITABLE |
+                                                       G_PARAM_CONSTRUCT_ONLY));
+  g_object_class_install_property(gobject_class, PROP_CONTEXT, spec);
+}
+
+/**
+ * garrow_cuda_context_get_allocated_size:
+ * @context: A #GArrowCUDAContext.
+ *
+ * Returns: The allocated memory by this context in bytes.
+ *
+ * Since: 0.8.0
+ */
+gint64
+garrow_cuda_context_get_allocated_size(GArrowCUDAContext *context)
+{
+  auto arrow_context = garrow_cuda_context_get_raw(context);
+  return arrow_context->bytes_allocated();
+}
+
+
+G_DEFINE_TYPE(GArrowCUDABuffer,
+              garrow_cuda_buffer,
+              GARROW_TYPE_BUFFER)
+
+static void
+garrow_cuda_buffer_init(GArrowCUDABuffer *object)
+{
+}
+
+static void
+garrow_cuda_buffer_class_init(GArrowCUDABufferClass *klass)
+{
+}
+
+/**
+ * garrow_cuda_buffer_new:
+ * @context: A #GArrowCUDAContext.
+ * @size: The number of bytes to be allocated on GPU device for this context.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created #GArrowCUDABuffer on
+ *   success, %NULL on error.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDABuffer *
+garrow_cuda_buffer_new(GArrowCUDAContext *context,
+                       gint64 size,
+                       GError **error)
+{
+  auto arrow_context = garrow_cuda_context_get_raw(context);
+  std::shared_ptr<arrow::cuda::CudaBuffer> arrow_buffer;
+  auto status = arrow_context->Allocate(size, &arrow_buffer);
+  if (garrow_error_check(error, status, "[cuda][buffer][new]")) {
+    return garrow_cuda_buffer_new_raw(&arrow_buffer);
+  } else {
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_buffer_new_ipc:
+ * @context: A #GArrowCUDAContext.
+ * @handle: A #GArrowCUDAIPCMemoryHandle to be communicated.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created #GArrowCUDABuffer on
+ *   success, %NULL on error. The buffer has data from the IPC target.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDABuffer *
+garrow_cuda_buffer_new_ipc(GArrowCUDAContext *context,
+                           GArrowCUDAIPCMemoryHandle *handle,
+                           GError **error)
+{
+  auto arrow_context = garrow_cuda_context_get_raw(context);
+  auto arrow_handle = garrow_cuda_ipc_memory_handle_get_raw(handle);
+  std::shared_ptr<arrow::cuda::CudaBuffer> arrow_buffer;
+  auto status = arrow_context->OpenIpcBuffer(*arrow_handle, &arrow_buffer);
+  if (garrow_error_check(error, status,
+                         "[cuda][buffer][new-ipc]")) {
+    return garrow_cuda_buffer_new_raw(&arrow_buffer);
+  } else {
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_buffer_new_record_batch:
+ * @context: A #GArrowCUDAContext.
+ * @record_batch: A #GArrowRecordBatch to be serialized.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created #GArrowCUDABuffer on
+ *   success, %NULL on error. The buffer has serialized record batch
+ *   data.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDABuffer *
+garrow_cuda_buffer_new_record_batch(GArrowCUDAContext *context,
+                                    GArrowRecordBatch *record_batch,
+                                    GError **error)
+{
+  auto arrow_context = garrow_cuda_context_get_raw(context);
+  auto arrow_record_batch = garrow_record_batch_get_raw(record_batch);
+  std::shared_ptr<arrow::cuda::CudaBuffer> arrow_buffer;
+  auto status = arrow::cuda::SerializeRecordBatch(*arrow_record_batch,
+                                                  arrow_context.get(),
+                                                  &arrow_buffer);
+  if (garrow_error_check(error, status,
+                         "[cuda][buffer][new-record-batch]")) {
+    return garrow_cuda_buffer_new_raw(&arrow_buffer);
+  } else {
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_buffer_copy_to_host:
+ * @buffer: A #GArrowCUDABuffer.
+ * @position: The offset of memory on GPU device to be copied.
+ * @size: The size of memory on GPU device to be copied in bytes.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A #GBytes that have copied memory on CPU
+ *   host on success, %NULL on error.
+ *
+ * Since: 0.8.0
+ */
+GBytes *
+garrow_cuda_buffer_copy_to_host(GArrowCUDABuffer *buffer,
+                                gint64 position,
+                                gint64 size,
+                                GError **error)
+{
+  auto arrow_buffer = garrow_cuda_buffer_get_raw(buffer);
+  auto data = static_cast<uint8_t *>(g_malloc(size));
+  auto status = arrow_buffer->CopyToHost(position, size, data);
+  if (garrow_error_check(error, status, "[cuda][buffer][copy-to-host]")) {
+    return g_bytes_new_take(data, size);
+  } else {
+    g_free(data);
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_buffer_copy_from_host:
+ * @buffer: A #GArrowCUDABuffer.
+ * @data: (array length=size): Data on CPU host to be copied.
+ * @size: The size of data on CPU host to be copied in bytes.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: %TRUE on success, %FALSE if there was an error.
+ *
+ * Since: 0.8.0
+ */
+gboolean
+garrow_cuda_buffer_copy_from_host(GArrowCUDABuffer *buffer,
+                                  const guint8 *data,
+                                  gint64 size,
+                                  GError **error)
+{
+  auto arrow_buffer = garrow_cuda_buffer_get_raw(buffer);
+  auto status = arrow_buffer->CopyFromHost(0, data, size);
+  return garrow_error_check(error,
+                            status,
+                            "[cuda][buffer][copy-from-host]");
+}
+
+/**
+ * garrow_cuda_buffer_export:
+ * @buffer: A #GArrowCUDABuffer.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created
+ *   #GArrowCUDAIPCMemoryHandle to handle the exported buffer on
+ *   success, %NULL on error
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDAIPCMemoryHandle *
+garrow_cuda_buffer_export(GArrowCUDABuffer *buffer, GError **error)
+{
+  auto arrow_buffer = garrow_cuda_buffer_get_raw(buffer);
+  std::shared_ptr<arrow::cuda::CudaIpcMemHandle> arrow_handle;
+  auto status = arrow_buffer->ExportForIpc(&arrow_handle);
+  if (garrow_error_check(error, status, "[cuda][buffer][export-for-ipc]")) {
+    return garrow_cuda_ipc_memory_handle_new_raw(&arrow_handle);
+  } else {
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_buffer_get_context:
+ * @buffer: A #GArrowCUDABuffer.
+ *
+ * Returns: (transfer full): A newly created #GArrowCUDAContext for the
+ *   buffer. Contexts for the same buffer share the same data internally.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDAContext *
+garrow_cuda_buffer_get_context(GArrowCUDABuffer *buffer)
+{
+  auto arrow_buffer = garrow_cuda_buffer_get_raw(buffer);
+  auto arrow_context = arrow_buffer->context();
+  return garrow_cuda_context_new_raw(&arrow_context);
+}
+
+/**
+ * garrow_cuda_buffer_read_record_batch:
+ * @buffer: A #GArrowCUDABuffer.
+ * @schema: A #GArrowSchema for record batch.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created #GArrowRecordBatch on
+ *   success, %NULL on error. The record batch data is located on GPU.
+ *
+ * Since: 0.8.0
+ */
+GArrowRecordBatch *
+garrow_cuda_buffer_read_record_batch(GArrowCUDABuffer *buffer,
+                                     GArrowSchema *schema,
+                                     GError **error)
+{
+  auto arrow_buffer = garrow_cuda_buffer_get_raw(buffer);
+  auto arrow_schema = garrow_schema_get_raw(schema);
+  auto pool = arrow::default_memory_pool();
+  std::shared_ptr<arrow::RecordBatch> arrow_record_batch;
+  auto status = arrow::cuda::ReadRecordBatch(arrow_schema,
+                                             arrow_buffer,
+                                             pool,
+                                             &arrow_record_batch);
+  if (garrow_error_check(error, status,
+                         "[cuda][buffer][read-record-batch]")) {
+    return garrow_record_batch_new_raw(&arrow_record_batch);
+  } else {
+    return NULL;
+  }
+}
+
+
+G_DEFINE_TYPE(GArrowCUDAHostBuffer,
+              garrow_cuda_host_buffer,
+              GARROW_TYPE_MUTABLE_BUFFER)
+
+static void
+garrow_cuda_host_buffer_init(GArrowCUDAHostBuffer *object)
+{
+}
+
+static void
+garrow_cuda_host_buffer_class_init(GArrowCUDAHostBufferClass *klass)
+{
+}
+
+/**
+ * garrow_cuda_host_buffer_new:
+ * @gpu_number: A GPU device number for the target context.
+ * @size: The number of bytes to be allocated on CPU host.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: A newly created #GArrowCUDAHostBuffer on success,
+ *   %NULL on error. The allocated memory is accessible from GPU
+ *   device for the @context.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDAHostBuffer *
+garrow_cuda_host_buffer_new(gint gpu_number, gint64 size, GError **error)
+{
+  arrow::cuda::CudaDeviceManager *manager;
+  auto status = arrow::cuda::CudaDeviceManager::GetInstance(&manager);
+  std::shared_ptr<arrow::cuda::CudaHostBuffer> arrow_buffer;
+  status = manager->AllocateHost(gpu_number, size, &arrow_buffer);
+  if (garrow_error_check(error, status, "[cuda][host-buffer][new]")) {
+    return garrow_cuda_host_buffer_new_raw(&arrow_buffer);
+  } else {
+    return NULL;
+  }
+}
+
+
+typedef struct GArrowCUDAIPCMemoryHandlePrivate_ {
+  std::shared_ptr<arrow::cuda::CudaIpcMemHandle> ipc_memory_handle;
+} GArrowCUDAIPCMemoryHandlePrivate;
+
+enum {
+  PROP_IPC_MEMORY_HANDLE = 1
+};
+
+G_DEFINE_TYPE_WITH_PRIVATE(GArrowCUDAIPCMemoryHandle,
+                           garrow_cuda_ipc_memory_handle,
+                           G_TYPE_OBJECT)
+
+#define GARROW_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(object)       \
+  static_cast<GArrowCUDAIPCMemoryHandlePrivate *>(              \
+    garrow_cuda_ipc_memory_handle_get_instance_private(         \
+      GARROW_CUDA_IPC_MEMORY_HANDLE(object)))
+
+static void
+garrow_cuda_ipc_memory_handle_finalize(GObject *object)
+{
+  auto priv = GARROW_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(object);
+
+  priv->ipc_memory_handle = nullptr;
+
+  G_OBJECT_CLASS(garrow_cuda_ipc_memory_handle_parent_class)->finalize(object);
+}
+
+static void
+garrow_cuda_ipc_memory_handle_set_property(GObject *object,
+                                           guint prop_id,
+                                           const GValue *value,
+                                           GParamSpec *pspec)
+{
+  auto priv = GARROW_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(object);
+
+  switch (prop_id) {
+  case PROP_IPC_MEMORY_HANDLE:
+    priv->ipc_memory_handle =
+      *static_cast<std::shared_ptr<arrow::cuda::CudaIpcMemHandle> *>(g_value_get_pointer(value));
+    break;
+  default:
+    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
+    break;
+  }
+}
+
+static void
+garrow_cuda_ipc_memory_handle_get_property(GObject *object,
+                                           guint prop_id,
+                                           GValue *value,
+                                           GParamSpec *pspec)
+{
+  switch (prop_id) {
+  default:
+    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
+    break;
+  }
+}
+
+static void
+garrow_cuda_ipc_memory_handle_init(GArrowCUDAIPCMemoryHandle *object)
+{
+}
+
+static void
+garrow_cuda_ipc_memory_handle_class_init(GArrowCUDAIPCMemoryHandleClass *klass)
+{
+  GParamSpec *spec;
+
+  auto gobject_class = G_OBJECT_CLASS(klass);
+
+  gobject_class->finalize     = garrow_cuda_ipc_memory_handle_finalize;
+  gobject_class->set_property = garrow_cuda_ipc_memory_handle_set_property;
+  gobject_class->get_property = garrow_cuda_ipc_memory_handle_get_property;
+
+  /**
+   * GArrowCUDAIPCMemoryHandle:ipc-memory-handle:
+   *
+   * Since: 0.8.0
+   */
+  spec = g_param_spec_pointer("ipc-memory-handle",
+                              "IPC Memory Handle",
+                              "The raw std::shared_ptr<arrow::cuda::CudaIpcMemHandle>",
+                              static_cast<GParamFlags>(G_PARAM_WRITABLE |
+                                                       G_PARAM_CONSTRUCT_ONLY));
+  g_object_class_install_property(gobject_class, PROP_IPC_MEMORY_HANDLE, spec);
+}
+
+/**
+ * garrow_cuda_ipc_memory_handle_new:
+ * @data: (array length=size): A serialized #GArrowCUDAIPCMemoryHandle.
+ * @size: The size of data.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created #GArrowCUDAIPCMemoryHandle
+ *   on success, %NULL on error.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDAIPCMemoryHandle *
+garrow_cuda_ipc_memory_handle_new(const guint8 *data,
+                                  gsize size,
+                                  GError **error)
+{
+  std::shared_ptr<arrow::cuda::CudaIpcMemHandle> arrow_handle;
+  auto status = arrow::cuda::CudaIpcMemHandle::FromBuffer(data, &arrow_handle);
+  if (garrow_error_check(error, status,
+                         "[cuda][ipc-memory-handle][new]")) {
+    return garrow_cuda_ipc_memory_handle_new_raw(&arrow_handle);
+  } else {
+    return NULL;
+  }
+}
+
+/**
+ * garrow_cuda_ipc_memory_handle_serialize:
+ * @handle: A #GArrowCUDAIPCMemoryHandle.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Returns: (transfer full): A newly created #GArrowBuffer on success,
+ *   %NULL on error. The buffer contains the serialized @handle. The
+ *   serialized @handle can be deserialized by
+ *   garrow_cuda_ipc_memory_handle_new() in another process.
+ *
+ * Since: 0.8.0
+ */
+GArrowBuffer *
+garrow_cuda_ipc_memory_handle_serialize(GArrowCUDAIPCMemoryHandle *handle,
+                                        GError **error)
+{
+  auto arrow_handle = garrow_cuda_ipc_memory_handle_get_raw(handle);
+  std::shared_ptr<arrow::Buffer> arrow_buffer;
+  auto status = arrow_handle->Serialize(arrow::default_memory_pool(),
+                                        &arrow_buffer);
+  if (garrow_error_check(error, status,
+                         "[cuda][ipc-memory-handle][serialize]")) {
+    return garrow_buffer_new_raw(&arrow_buffer);
+  } else {
+    return NULL;
+  }
+}
+
+static GArrowBuffer *
+garrow_cuda_buffer_input_stream_new_raw_readable_interface(std::shared_ptr<arrow::Buffer> *arrow_buffer)
+{
+  auto buffer = GARROW_BUFFER(g_object_new(GARROW_CUDA_TYPE_BUFFER,
+                                           "buffer", arrow_buffer,
+                                           NULL));
+  return buffer;
+}
+
+static std::shared_ptr<arrow::io::Readable>
+garrow_cuda_buffer_input_stream_get_raw_readable_interface(GArrowReadable *readable)
+{
+  auto input_stream = GARROW_INPUT_STREAM(readable);
+  auto arrow_input_stream = garrow_input_stream_get_raw(input_stream);
+  return arrow_input_stream;
+}
+
+static void
+garrow_cuda_buffer_input_stream_readable_interface_init(GArrowReadableInterface *iface)
+{
+  iface->new_raw =
+    garrow_cuda_buffer_input_stream_new_raw_readable_interface;
+  iface->get_raw =
+    garrow_cuda_buffer_input_stream_get_raw_readable_interface;
+}
+
+G_DEFINE_TYPE_WITH_CODE(
+  GArrowCUDABufferInputStream,
+  garrow_cuda_buffer_input_stream,
+  GARROW_TYPE_BUFFER_INPUT_STREAM,
+  G_IMPLEMENT_INTERFACE(
+    GARROW_TYPE_READABLE,
+    garrow_cuda_buffer_input_stream_readable_interface_init))
+
+static void
+garrow_cuda_buffer_input_stream_init(GArrowCUDABufferInputStream *object)
+{
+}
+
+static void
+garrow_cuda_buffer_input_stream_class_init(GArrowCUDABufferInputStreamClass *klass)
+{
+}
+
+/**
+ * garrow_cuda_buffer_input_stream_new:
+ * @buffer: A #GArrowCUDABuffer.
+ *
+ * Returns: (transfer full): A newly created
+ *   #GArrowCUDABufferInputStream.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDABufferInputStream *
+garrow_cuda_buffer_input_stream_new(GArrowCUDABuffer *buffer)
+{
+  auto arrow_buffer = garrow_cuda_buffer_get_raw(buffer);
+  auto arrow_reader =
+    std::make_shared<arrow::cuda::CudaBufferReader>(arrow_buffer);
+  return garrow_cuda_buffer_input_stream_new_raw(&arrow_reader);
+}
+
+
+G_DEFINE_TYPE(GArrowCUDABufferOutputStream,
+              garrow_cuda_buffer_output_stream,
+              GARROW_TYPE_OUTPUT_STREAM)
+
+static void
+garrow_cuda_buffer_output_stream_init(GArrowCUDABufferOutputStream *object)
+{
+}
+
+static void
+garrow_cuda_buffer_output_stream_class_init(GArrowCUDABufferOutputStreamClass *klass)
+{
+}
+
+/**
+ * garrow_cuda_buffer_output_stream_new:
+ * @buffer: A #GArrowCUDABuffer.
+ *
+ * Returns: (transfer full): A newly created
+ *   #GArrowCUDABufferOutputStream.
+ *
+ * Since: 0.8.0
+ */
+GArrowCUDABufferOutputStream *
+garrow_cuda_buffer_output_stream_new(GArrowCUDABuffer *buffer)
+{
+  auto arrow_buffer = garrow_cuda_buffer_get_raw(buffer);
+  auto arrow_writer =
+    std::make_shared<arrow::cuda::CudaBufferWriter>(arrow_buffer);
+  return garrow_cuda_buffer_output_stream_new_raw(&arrow_writer);
+}
+
+/**
+ * garrow_cuda_buffer_output_stream_set_buffer_size:
+ * @stream: A #GArrowCUDABufferOutputStream.
+ * @size: The size of the CPU buffer in bytes.
+ * @error: (nullable): Return location for a #GError or %NULL.
+ *
+ * Sets the CPU buffer size to limit the number of `cudaMemcpy()`
+ * calls. If the CPU buffer size is `0`, buffering is disabled.
+ *
+ * The default is `0`.
+ *
+ * Returns: %TRUE on success, %FALSE if there was an error.
+ *
+ * Since: 0.8.0
+ */
+gboolean
+garrow_cuda_buffer_output_stream_set_buffer_size(GArrowCUDABufferOutputStream *stream,
+                                                 gint64 size,
+                                                 GError **error)
+{
+  auto arrow_stream = garrow_cuda_buffer_output_stream_get_raw(stream);
+  auto status = arrow_stream->SetBufferSize(size);
+  return garrow_error_check(error,
+                            status,
+                            "[cuda][buffer-output-stream][set-buffer-size]");
+}
+
+/**
+ * garrow_cuda_buffer_output_stream_get_buffer_size:
+ * @stream: A #GArrowCUDABufferOutputStream.
+ *
+ * See garrow_cuda_buffer_output_stream_set_buffer_size() for CPU
+ * buffer size details.
+ *
+ * Returns: The CPU buffer size in bytes.
+ *
+ * Since: 0.8.0
+ */
+gint64
+garrow_cuda_buffer_output_stream_get_buffer_size(GArrowCUDABufferOutputStream *stream)
+{
+  auto arrow_stream = garrow_cuda_buffer_output_stream_get_raw(stream);
+  return arrow_stream->buffer_size();
+}
+
+/**
+ * garrow_cuda_buffer_output_stream_get_buffered_size:
+ * @stream: A #GArrowCUDABufferOutputStream.
+ *
+ * Returns: The size of buffered data in bytes.
+ *
+ * Since: 0.8.0
+ */
+gint64
+garrow_cuda_buffer_output_stream_get_buffered_size(GArrowCUDABufferOutputStream *stream)
+{
+  auto arrow_stream = garrow_cuda_buffer_output_stream_get_raw(stream);
+  return arrow_stream->num_bytes_buffered();
+}
+
+
+G_END_DECLS
+
+GArrowCUDAContext *
+garrow_cuda_context_new_raw(std::shared_ptr<arrow::cuda::CudaContext> *arrow_context)
+{
+  return GARROW_CUDA_CONTEXT(g_object_new(GARROW_CUDA_TYPE_CONTEXT,
+                                          "context", arrow_context,
+                                          NULL));
+}
+
+std::shared_ptr<arrow::cuda::CudaContext>
+garrow_cuda_context_get_raw(GArrowCUDAContext *context)
+{
+  if (!context)
+    return nullptr;
+
+  auto priv = GARROW_CUDA_CONTEXT_GET_PRIVATE(context);
+  return priv->context;
+}
+
+GArrowCUDAIPCMemoryHandle *
+garrow_cuda_ipc_memory_handle_new_raw(std::shared_ptr<arrow::cuda::CudaIpcMemHandle> *arrow_handle)
+{
+  auto handle = g_object_new(GARROW_CUDA_TYPE_IPC_MEMORY_HANDLE,
+                             "ipc-memory-handle", arrow_handle,
+                             NULL);
+  return GARROW_CUDA_IPC_MEMORY_HANDLE(handle);
+}
+
+std::shared_ptr<arrow::cuda::CudaIpcMemHandle>
+garrow_cuda_ipc_memory_handle_get_raw(GArrowCUDAIPCMemoryHandle *handle)
+{
+  if (!handle)
+    return nullptr;
+
+  auto priv = GARROW_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(handle);
+  return priv->ipc_memory_handle;
+}
+
+GArrowCUDABuffer *
+garrow_cuda_buffer_new_raw(std::shared_ptr<arrow::cuda::CudaBuffer> *arrow_buffer)
+{
+  return GARROW_CUDA_BUFFER(g_object_new(GARROW_CUDA_TYPE_BUFFER,
+                                         "buffer", arrow_buffer,
+                                         NULL));
+}
+
+std::shared_ptr<arrow::cuda::CudaBuffer>
+garrow_cuda_buffer_get_raw(GArrowCUDABuffer *buffer)
+{
+  if (!buffer)
+    return nullptr;
+
+  auto arrow_buffer = garrow_buffer_get_raw(GARROW_BUFFER(buffer));
+  return std::static_pointer_cast<arrow::cuda::CudaBuffer>(arrow_buffer);
+}
+
+GArrowCUDAHostBuffer *
+garrow_cuda_host_buffer_new_raw(std::shared_ptr<arrow::cuda::CudaHostBuffer> *arrow_buffer)
+{
+  auto buffer = g_object_new(GARROW_CUDA_TYPE_HOST_BUFFER,
+                             "buffer", arrow_buffer,
+                             NULL);
+  return GARROW_CUDA_HOST_BUFFER(buffer);
+}
+
+std::shared_ptr<arrow::cuda::CudaHostBuffer>
+garrow_cuda_host_buffer_get_raw(GArrowCUDAHostBuffer *buffer)
+{
+  if (!buffer)
+    return nullptr;
+
+  auto arrow_buffer = garrow_buffer_get_raw(GARROW_BUFFER(buffer));
+  return std::static_pointer_cast<arrow::cuda::CudaHostBuffer>(arrow_buffer);
+}
+
+GArrowCUDABufferInputStream *
+garrow_cuda_buffer_input_stream_new_raw(std::shared_ptr<arrow::cuda::CudaBufferReader> *arrow_reader)
+{
+  auto input_stream = g_object_new(GARROW_CUDA_TYPE_BUFFER_INPUT_STREAM,
+                                   "input-stream", arrow_reader,
+                                   NULL);
+  return GARROW_CUDA_BUFFER_INPUT_STREAM(input_stream);
+}
+
+std::shared_ptr<arrow::cuda::CudaBufferReader>
+garrow_cuda_buffer_input_stream_get_raw(GArrowCUDABufferInputStream *input_stream)
+{
+  if (!input_stream)
+    return nullptr;
+
+  auto arrow_reader =
+    garrow_input_stream_get_raw(GARROW_INPUT_STREAM(input_stream));
+  return std::static_pointer_cast<arrow::cuda::CudaBufferReader>(arrow_reader);
+}
+
+GArrowCUDABufferOutputStream *
+garrow_cuda_buffer_output_stream_new_raw(std::shared_ptr<arrow::cuda::CudaBufferWriter> *arrow_writer)
+{
+  auto output_stream = g_object_new(GARROW_CUDA_TYPE_BUFFER_OUTPUT_STREAM,
+                                    "output-stream", arrow_writer,
+                                    NULL);
+  return GARROW_CUDA_BUFFER_OUTPUT_STREAM(output_stream);
+}
+
+std::shared_ptr<arrow::cuda::CudaBufferWriter>
+garrow_cuda_buffer_output_stream_get_raw(GArrowCUDABufferOutputStream *output_stream)
+{
+  if (!output_stream)
+    return nullptr;
+
+  auto arrow_writer =
+    garrow_output_stream_get_raw(GARROW_OUTPUT_STREAM(output_stream));
+  return std::static_pointer_cast<arrow::cuda::CudaBufferWriter>(arrow_writer);
+}
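
The renamed entry points above mirror the old arrow-gpu-glib API one-to-one. As a quick sanity check of the new names, a minimal host/device round trip might look like the following. This is a hedged sketch, not part of the patch: it assumes arrow-cuda-glib is installed, a CUDA-capable device 0 is present, and error handling is abbreviated after the first check.

```c
/* Hedged sketch of the renamed arrow-cuda-glib API: allocate a device
 * buffer on GPU 0, copy bytes in from the host, and copy them back.
 * Not part of this patch; requires CUDA hardware at run time. */
#include <stdlib.h>

#include <arrow-cuda-glib/arrow-cuda-glib.h>

int
main(void)
{
  GError *error = NULL;

  GArrowCUDADeviceManager *manager = garrow_cuda_device_manager_new(&error);
  if (!manager) {
    g_printerr("CUDA initialization failed: %s\n", error->message);
    g_error_free(error);
    return EXIT_FAILURE;
  }

  /* One context per GPU; device number 0 is assumed here. */
  GArrowCUDAContext *context =
    garrow_cuda_device_manager_get_context(manager, 0, &error);
  GArrowCUDABuffer *buffer = garrow_cuda_buffer_new(context, 64, &error);

  const guint8 data[] = "hello";
  garrow_cuda_buffer_copy_from_host(buffer, data, sizeof(data), &error);

  GBytes *host_copy =
    garrow_cuda_buffer_copy_to_host(buffer, 0, sizeof(data), &error);
  g_print("round trip: %s\n",
          (const gchar *)g_bytes_get_data(host_copy, NULL));

  g_bytes_unref(host_copy);
  g_object_unref(buffer);
  g_object_unref(context);
  g_object_unref(manager);
  return EXIT_SUCCESS;
}
```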
diff --git a/c_glib/arrow-cuda-glib/cuda.h b/c_glib/arrow-cuda-glib/cuda.h
new file mode 100644
index 0000000..6cdef99
--- /dev/null
+++ b/c_glib/arrow-cuda-glib/cuda.h
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#pragma once
+
+#include <arrow-glib/arrow-glib.h>
+
+G_BEGIN_DECLS
+
+#define GARROW_CUDA_TYPE_DEVICE_MANAGER (garrow_cuda_device_manager_get_type())
+G_DECLARE_DERIVABLE_TYPE(GArrowCUDADeviceManager,
+                         garrow_cuda_device_manager,
+                         GARROW_CUDA,
+                         DEVICE_MANAGER,
+                         GObject)
+struct _GArrowCUDADeviceManagerClass
+{
+  GObjectClass parent_class;
+};
+
+#define GARROW_CUDA_TYPE_CONTEXT (garrow_cuda_context_get_type())
+G_DECLARE_DERIVABLE_TYPE(GArrowCUDAContext,
+                         garrow_cuda_context,
+                         GARROW_CUDA,
+                         CONTEXT,
+                         GObject)
+struct _GArrowCUDAContextClass
+{
+  GObjectClass parent_class;
+};
+
+#define GARROW_CUDA_TYPE_BUFFER (garrow_cuda_buffer_get_type())
+G_DECLARE_DERIVABLE_TYPE(GArrowCUDABuffer,
+                         garrow_cuda_buffer,
+                         GARROW_CUDA,
+                         BUFFER,
+                         GArrowBuffer)
+struct _GArrowCUDABufferClass
+{
+  GArrowBufferClass parent_class;
+};
+
+#define GARROW_CUDA_TYPE_HOST_BUFFER (garrow_cuda_host_buffer_get_type())
+G_DECLARE_DERIVABLE_TYPE(GArrowCUDAHostBuffer,
+                         garrow_cuda_host_buffer,
+                         GARROW_CUDA,
+                         HOST_BUFFER,
+                         GArrowMutableBuffer)
+struct _GArrowCUDAHostBufferClass
+{
+  GArrowMutableBufferClass parent_class;
+};
+
+#define GARROW_CUDA_TYPE_IPC_MEMORY_HANDLE      \
+  (garrow_cuda_ipc_memory_handle_get_type())
+G_DECLARE_DERIVABLE_TYPE(GArrowCUDAIPCMemoryHandle,
+                         garrow_cuda_ipc_memory_handle,
+                         GARROW_CUDA,
+                         IPC_MEMORY_HANDLE,
+                         GObject)
+struct _GArrowCUDAIPCMemoryHandleClass
+{
+  GObjectClass parent_class;
+};
+
+#define GARROW_CUDA_TYPE_BUFFER_INPUT_STREAM    \
+  (garrow_cuda_buffer_input_stream_get_type())
+G_DECLARE_DERIVABLE_TYPE(GArrowCUDABufferInputStream,
+                         garrow_cuda_buffer_input_stream,
+                         GARROW_CUDA,
+                         BUFFER_INPUT_STREAM,
+                         GArrowBufferInputStream)
+struct _GArrowCUDABufferInputStreamClass
+{
+  GArrowBufferInputStreamClass parent_class;
+};
+
+#define GARROW_CUDA_TYPE_BUFFER_OUTPUT_STREAM   \
+  (garrow_cuda_buffer_output_stream_get_type())
+G_DECLARE_DERIVABLE_TYPE(GArrowCUDABufferOutputStream,
+                         garrow_cuda_buffer_output_stream,
+                         GARROW_CUDA,
+                         BUFFER_OUTPUT_STREAM,
+                         GArrowOutputStream)
+struct _GArrowCUDABufferOutputStreamClass
+{
+  GArrowOutputStreamClass parent_class;
+};
+
+GArrowCUDADeviceManager *
+garrow_cuda_device_manager_new(GError **error);
+
+GArrowCUDAContext *
+garrow_cuda_device_manager_get_context(GArrowCUDADeviceManager *manager,
+                                       gint gpu_number,
+                                       GError **error);
+gsize
+garrow_cuda_device_manager_get_n_devices(GArrowCUDADeviceManager *manager);
+
+gint64
+garrow_cuda_context_get_allocated_size(GArrowCUDAContext *context);
+
+
+GArrowCUDABuffer *
+garrow_cuda_buffer_new(GArrowCUDAContext *context,
+                       gint64 size,
+                       GError **error);
+GArrowCUDABuffer *
+garrow_cuda_buffer_new_ipc(GArrowCUDAContext *context,
+                           GArrowCUDAIPCMemoryHandle *handle,
+                           GError **error);
+GArrowCUDABuffer *
+garrow_cuda_buffer_new_record_batch(GArrowCUDAContext *context,
+                                    GArrowRecordBatch *record_batch,
+                                    GError **error);
+GBytes *
+garrow_cuda_buffer_copy_to_host(GArrowCUDABuffer *buffer,
+                                gint64 position,
+                                gint64 size,
+                                GError **error);
+gboolean
+garrow_cuda_buffer_copy_from_host(GArrowCUDABuffer *buffer,
+                                  const guint8 *data,
+                                  gint64 size,
+                                  GError **error);
+GArrowCUDAIPCMemoryHandle *
+garrow_cuda_buffer_export(GArrowCUDABuffer *buffer,
+                          GError **error);
+GArrowCUDAContext *
+garrow_cuda_buffer_get_context(GArrowCUDABuffer *buffer);
+GArrowRecordBatch *
+garrow_cuda_buffer_read_record_batch(GArrowCUDABuffer *buffer,
+                                     GArrowSchema *schema,
+                                     GError **error);
+
+
+GArrowCUDAHostBuffer *
+garrow_cuda_host_buffer_new(gint gpu_number,
+                            gint64 size,
+                            GError **error);
+
+GArrowCUDAIPCMemoryHandle *
+garrow_cuda_ipc_memory_handle_new(const guint8 *data,
+                                  gsize size,
+                                  GError **error);
+
+GArrowBuffer *
+garrow_cuda_ipc_memory_handle_serialize(GArrowCUDAIPCMemoryHandle *handle,
+                                        GError **error);
+
+GArrowCUDABufferInputStream *
+garrow_cuda_buffer_input_stream_new(GArrowCUDABuffer *buffer);
+
+GArrowCUDABufferOutputStream *
+garrow_cuda_buffer_output_stream_new(GArrowCUDABuffer *buffer);
+
+gboolean
+garrow_cuda_buffer_output_stream_set_buffer_size(GArrowCUDABufferOutputStream *stream,
+                                                 gint64 size,
+                                                 GError **error);
+gint64
+garrow_cuda_buffer_output_stream_get_buffer_size(GArrowCUDABufferOutputStream *stream);
+gint64
+garrow_cuda_buffer_output_stream_get_buffered_size(GArrowCUDABufferOutputStream *stream);
+
+G_END_DECLS
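
The header gathers the IPC trio: garrow_cuda_buffer_export(), garrow_cuda_ipc_memory_handle_serialize(), and garrow_cuda_buffer_new_ipc(). Their intended flow across two processes is sketched below. This is a hedged, abbreviated example, not part of the patch: error handling is elided, the transport between processes is left as a comment, and a CUDA device is assumed.

```c
/* Hedged sketch of the IPC flow declared above; error handling elided,
 * inter-process transport left abstract. Requires CUDA hardware. */
#include <arrow-cuda-glib/arrow-cuda-glib.h>

/* Exporting process: publish a device buffer as serialized bytes. */
static GArrowBuffer *
export_device_buffer(GArrowCUDABuffer *buffer, GError **error)
{
  GArrowCUDAIPCMemoryHandle *handle =
    garrow_cuda_buffer_export(buffer, error);
  /* The serialized handle is plain bytes; send them to the peer
   * process over any socket or pipe. */
  return garrow_cuda_ipc_memory_handle_serialize(handle, error);
}

/* Importing process: map the same device memory from received bytes. */
static GArrowCUDABuffer *
import_device_buffer(GArrowCUDAContext *context,
                     const guint8 *data,
                     gsize size,
                     GError **error)
{
  GArrowCUDAIPCMemoryHandle *handle =
    garrow_cuda_ipc_memory_handle_new(data, size, error);
  return garrow_cuda_buffer_new_ipc(context, handle, error);
}
```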
diff --git a/c_glib/arrow-cuda-glib/cuda.hpp b/c_glib/arrow-cuda-glib/cuda.hpp
new file mode 100644
index 0000000..0f8985a
--- /dev/null
+++ b/c_glib/arrow-cuda-glib/cuda.hpp
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#pragma once
+
+#include <arrow/gpu/cuda_api.h>
+
+#include <arrow-cuda-glib/cuda.h>
+
+GArrowCUDAContext *
+garrow_cuda_context_new_raw(std::shared_ptr<arrow::cuda::CudaContext> *arrow_context);
+std::shared_ptr<arrow::cuda::CudaContext>
+garrow_cuda_context_get_raw(GArrowCUDAContext *context);
+
+GArrowCUDAIPCMemoryHandle *
+garrow_cuda_ipc_memory_handle_new_raw(std::shared_ptr<arrow::cuda::CudaIpcMemHandle> *arrow_handle);
+std::shared_ptr<arrow::cuda::CudaIpcMemHandle>
+garrow_cuda_ipc_memory_handle_get_raw(GArrowCUDAIPCMemoryHandle *handle);
+
+GArrowCUDABuffer *
+garrow_cuda_buffer_new_raw(std::shared_ptr<arrow::cuda::CudaBuffer> *arrow_buffer);
+std::shared_ptr<arrow::cuda::CudaBuffer>
+garrow_cuda_buffer_get_raw(GArrowCUDABuffer *buffer);
+
+GArrowCUDAHostBuffer *
+garrow_cuda_host_buffer_new_raw(std::shared_ptr<arrow::cuda::CudaHostBuffer> *arrow_buffer);
+std::shared_ptr<arrow::cuda::CudaHostBuffer>
+garrow_cuda_host_buffer_get_raw(GArrowCUDAHostBuffer *buffer);
+
+GArrowCUDABufferInputStream *
+garrow_cuda_buffer_input_stream_new_raw(std::shared_ptr<arrow::cuda::CudaBufferReader> *arrow_reader);
+std::shared_ptr<arrow::cuda::CudaBufferReader>
+garrow_cuda_buffer_input_stream_get_raw(GArrowCUDABufferInputStream *input_stream);
+
+GArrowCUDABufferOutputStream *
+garrow_cuda_buffer_output_stream_new_raw(std::shared_ptr<arrow::cuda::CudaBufferWriter> *arrow_writer);
+std::shared_ptr<arrow::cuda::CudaBufferWriter>
+garrow_cuda_buffer_output_stream_get_raw(GArrowCUDABufferOutputStream *output_stream);
diff --git a/c_glib/arrow-cuda-glib/meson.build b/c_glib/arrow-cuda-glib/meson.build
new file mode 100644
index 0000000..e5b9f47
--- /dev/null
+++ b/c_glib/arrow-cuda-glib/meson.build
@@ -0,0 +1,79 @@
+# -*- indent-tabs-mode: nil -*-
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+sources = files(
+  'cuda.cpp',
+)
+
+c_headers = files(
+  'arrow-cuda-glib.h',
+  'cuda.h',
+)
+
+cpp_headers = files(
+  'arrow-cuda-glib.hpp',
+  'cuda.hpp',
+)
+
+headers = c_headers + cpp_headers
+install_headers(headers, subdir: 'arrow-cuda-glib')
+
+
+dependencies = [
+  arrow_cuda,
+  arrow_glib,
+]
+libarrow_cuda_glib = library('arrow-cuda-glib',
+                             sources: sources,
+                             install: true,
+                             dependencies: dependencies,
+                             include_directories: base_include_directories,
+                             soversion: so_version,
+                             version: library_version)
+arrow_cuda_glib = declare_dependency(link_with: libarrow_cuda_glib,
+                                     include_directories: base_include_directories,
+                                     dependencies: dependencies)
+
+pkgconfig.generate(filebase: 'arrow-cuda-glib',
+                   name: 'Apache Arrow CUDA GLib',
+                   description: 'C API for Apache Arrow CUDA based on GLib',
+                   version: version,
+                   requires: ['arrow-glib', 'arrow-cuda'],
+                   libraries: [libarrow_cuda_glib])
+
+gir_dependencies = [
+  declare_dependency(sources: arrow_glib_gir),
+]
+gir_extra_args = [
+  '--warn-all',
+  '--include-uninstalled=./arrow-glib/Arrow-1.0.gir',
+]
+arrow_cuda_glib_gir = gnome.generate_gir(libarrow_cuda_glib,
+                                         dependencies: gir_dependencies,
+                                         sources: sources + c_headers,
+                                         namespace: 'ArrowCUDA',
+                                         nsversion: api_version,
+                                         identifier_prefix: 'GArrowCUDA',
+                                         symbol_prefix: 'garrow_cuda',
+                                         export_packages: 'arrow-cuda-glib',
+                                         includes: [
+                                           'Arrow-1.0',
+                                         ],
+                                         install: true,
+                                         extra_args: gir_extra_args)
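
Because the pkg-config name changes from arrow-gpu-glib to arrow-cuda-glib, downstream builds have to switch to the new name. A minimal consumer might look like this hypothetical meson.build (project and source names are illustrative):

```meson
# Hypothetical downstream project consuming the renamed library via
# the arrow-cuda-glib pkg-config file generated above.
project('arrow-cuda-glib-example', 'c')

arrow_cuda_glib = dependency('arrow-cuda-glib')

executable('example', 'example.c',
           dependencies: [arrow_cuda_glib])
```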
diff --git a/c_glib/arrow-gpu-glib/cuda.cpp b/c_glib/arrow-gpu-glib/cuda.cpp
deleted file mode 100644
index 6d2e48f..0000000
--- a/c_glib/arrow-gpu-glib/cuda.cpp
+++ /dev/null
@@ -1,942 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-#ifdef HAVE_CONFIG_H
-#  include <config.h>
-#endif
-
-#include <arrow-glib/buffer.hpp>
-#include <arrow-glib/error.hpp>
-#include <arrow-glib/input-stream.hpp>
-#include <arrow-glib/output-stream.hpp>
-#include <arrow-glib/readable.hpp>
-#include <arrow-glib/record-batch.hpp>
-#include <arrow-glib/schema.hpp>
-
-#include <arrow-gpu-glib/cuda.hpp>
-
-G_BEGIN_DECLS
-
-/**
- * SECTION: cuda
- * @section_id: cuda-classes
- * @title: CUDA related classes
- * @include: arrow-gpu-glib/arrow-gpu-glib.h
- *
- * The following classes provide CUDA support for Apache Arrow data.
- *
- * #GArrowGPUCUDADeviceManager is the starting point. You need at
- * least one #GArrowGPUCUDAContext to process Apache Arrow data on
- * NVIDIA GPU.
- *
- * #GArrowGPUCUDAContext is a class to keep context for one GPU. You
- * need to create #GArrowGPUCUDAContext for each GPU that you want to
- * use. You can create #GArrowGPUCUDAContext by
- * garrow_gpu_cuda_device_manager_get_context().
- *
- * #GArrowGPUCUDABuffer is a class for data on GPU. You can copy data
- * on GPU to/from CPU by garrow_gpu_cuda_buffer_copy_to_host() and
- * garrow_gpu_cuda_buffer_copy_from_host(). You can share data on GPU
- * with other processes by garrow_gpu_cuda_buffer_export() and
- * garrow_gpu_cuda_buffer_new_ipc().
- *
- * #GArrowGPUCUDAHostBuffer is a class for data on CPU that is
- * directly accessible from GPU.
- *
- * #GArrowGPUCUDAIPCMemoryHandle is a class to share data on GPU with
- * other processes. You can export your data on GPU to other processes
- * by garrow_gpu_cuda_buffer_export() and
- * garrow_gpu_cuda_ipc_memory_handle_new(). You can import other
- * process data on GPU by garrow_gpu_cuda_ipc_memory_handle_new() and
- * garrow_gpu_cuda_buffer_new_ipc().
- *
- * #GArrowGPUCUDABufferInputStream is a class to read data in
- * #GArrowGPUCUDABuffer.
- *
- * #GArrowGPUCUDABufferOutputStream is a class to write data into
- * #GArrowGPUCUDABuffer.
- */
-
-G_DEFINE_TYPE(GArrowGPUCUDADeviceManager,
-              garrow_gpu_cuda_device_manager,
-              G_TYPE_OBJECT)
-
-static void
-garrow_gpu_cuda_device_manager_init(GArrowGPUCUDADeviceManager *object)
-{
-}
-
-static void
-garrow_gpu_cuda_device_manager_class_init(GArrowGPUCUDADeviceManagerClass *klass)
-{
-}
-
-/**
- * garrow_gpu_cuda_device_manager_new:
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: A newly created #GArrowGPUCUDADeviceManager on success,
- *   %NULL on error.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDADeviceManager *
-garrow_gpu_cuda_device_manager_new(GError **error)
-{
-  arrow::gpu::CudaDeviceManager *manager;
-  auto status = arrow::gpu::CudaDeviceManager::GetInstance(&manager);
-  if (garrow_error_check(error, status, "[gpu][cuda][device-manager][new]")) {
-    auto manager = g_object_new(GARROW_GPU_TYPE_CUDA_DEVICE_MANAGER,
-                                NULL);
-    return GARROW_GPU_CUDA_DEVICE_MANAGER(manager);
-  } else {
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_device_manager_get_context:
- * @manager: A #GArrowGPUCUDADeviceManager.
- * @gpu_number: A GPU device number for the target context.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowGPUCUDAContext on
- *   success, %NULL on error. Contexts for the same GPU device number
- *   share the same data internally.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDAContext *
-garrow_gpu_cuda_device_manager_get_context(GArrowGPUCUDADeviceManager *manager,
-                                           gint gpu_number,
-                                           GError **error)
-{
-  arrow::gpu::CudaDeviceManager *arrow_manager;
-  arrow::gpu::CudaDeviceManager::GetInstance(&arrow_manager);
-  std::shared_ptr<arrow::gpu::CudaContext> context;
-  auto status = arrow_manager->GetContext(gpu_number, &context);
-  if (garrow_error_check(error, status,
-                         "[gpu][cuda][device-manager][get-context]]")) {
-    return garrow_gpu_cuda_context_new_raw(&context);
-  } else {
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_device_manager_get_n_devices:
- * @manager: A #GArrowGPUCUDADeviceManager.
- *
- * Returns: The number of GPU devices.
- *
- * Since: 0.8.0
- */
-gsize
-garrow_gpu_cuda_device_manager_get_n_devices(GArrowGPUCUDADeviceManager *manager)
-{
-  arrow::gpu::CudaDeviceManager *arrow_manager;
-  arrow::gpu::CudaDeviceManager::GetInstance(&arrow_manager);
-  return arrow_manager->num_devices();
-}
-
-
-typedef struct GArrowGPUCUDAContextPrivate_ {
-  std::shared_ptr<arrow::gpu::CudaContext> context;
-} GArrowGPUCUDAContextPrivate;
-
-enum {
-  PROP_CONTEXT = 1
-};
-
-G_DEFINE_TYPE_WITH_PRIVATE(GArrowGPUCUDAContext,
-                           garrow_gpu_cuda_context,
-                           G_TYPE_OBJECT)
-
-#define GARROW_GPU_CUDA_CONTEXT_GET_PRIVATE(object)     \
-  static_cast<GArrowGPUCUDAContextPrivate *>(           \
-    garrow_gpu_cuda_context_get_instance_private(       \
-      GARROW_GPU_CUDA_CONTEXT(object)))
-
-static void
-garrow_gpu_cuda_context_finalize(GObject *object)
-{
-  auto priv = GARROW_GPU_CUDA_CONTEXT_GET_PRIVATE(object);
-
-  priv->context = nullptr;
-
-  G_OBJECT_CLASS(garrow_gpu_cuda_context_parent_class)->finalize(object);
-}
-
-static void
-garrow_gpu_cuda_context_set_property(GObject *object,
-                                     guint prop_id,
-                                     const GValue *value,
-                                     GParamSpec *pspec)
-{
-  auto priv = GARROW_GPU_CUDA_CONTEXT_GET_PRIVATE(object);
-
-  switch (prop_id) {
-  case PROP_CONTEXT:
-    priv->context =
-      *static_cast<std::shared_ptr<arrow::gpu::CudaContext> *>(g_value_get_pointer(value));
-    break;
-  default:
-    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
-    break;
-  }
-}
-
-static void
-garrow_gpu_cuda_context_get_property(GObject *object,
-                                     guint prop_id,
-                                     GValue *value,
-                                     GParamSpec *pspec)
-{
-  switch (prop_id) {
-  default:
-    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
-    break;
-  }
-}
-
-static void
-garrow_gpu_cuda_context_init(GArrowGPUCUDAContext *object)
-{
-}
-
-static void
-garrow_gpu_cuda_context_class_init(GArrowGPUCUDAContextClass *klass)
-{
-  GParamSpec *spec;
-
-  auto gobject_class = G_OBJECT_CLASS(klass);
-
-  gobject_class->finalize     = garrow_gpu_cuda_context_finalize;
-  gobject_class->set_property = garrow_gpu_cuda_context_set_property;
-  gobject_class->get_property = garrow_gpu_cuda_context_get_property;
-
-  /**
-   * GArrowGPUCUDAContext:context:
-   *
-   * Since: 0.8.0
-   */
-  spec = g_param_spec_pointer("context",
-                              "Context",
-                              "The raw std::shared_ptr<arrow::gpu::CudaContext>",
-                              static_cast<GParamFlags>(G_PARAM_WRITABLE |
-                                                       G_PARAM_CONSTRUCT_ONLY));
-  g_object_class_install_property(gobject_class, PROP_CONTEXT, spec);
-}
-
-/**
- * garrow_gpu_cuda_context_get_allocated_size:
- * @context: A #GArrowGPUCUDAContext.
- *
- * Returns: The number of bytes allocated by this context.
- *
- * Since: 0.8.0
- */
-gint64
-garrow_gpu_cuda_context_get_allocated_size(GArrowGPUCUDAContext *context)
-{
-  auto arrow_context = garrow_gpu_cuda_context_get_raw(context);
-  return arrow_context->bytes_allocated();
-}
-
-
-G_DEFINE_TYPE(GArrowGPUCUDABuffer,
-              garrow_gpu_cuda_buffer,
-              GARROW_TYPE_BUFFER)
-
-static void
-garrow_gpu_cuda_buffer_init(GArrowGPUCUDABuffer *object)
-{
-}
-
-static void
-garrow_gpu_cuda_buffer_class_init(GArrowGPUCUDABufferClass *klass)
-{
-}
-
-/**
- * garrow_gpu_cuda_buffer_new:
- * @context: A #GArrowGPUCUDAContext.
- * @size: The number of bytes to be allocated on GPU device for this context.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowGPUCUDABuffer on
- *   success, %NULL on error.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new(GArrowGPUCUDAContext *context,
-                           gint64 size,
-                           GError **error)
-{
-  auto arrow_context = garrow_gpu_cuda_context_get_raw(context);
-  std::shared_ptr<arrow::gpu::CudaBuffer> arrow_buffer;
-  auto status = arrow_context->Allocate(size, &arrow_buffer);
-  if (garrow_error_check(error, status, "[gpu][cuda][buffer][new]")) {
-    return garrow_gpu_cuda_buffer_new_raw(&arrow_buffer);
-  } else {
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_buffer_new_ipc:
- * @context: A #GArrowGPUCUDAContext.
- * @handle: A #GArrowGPUCUDAIPCMemoryHandle to be communicated.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowGPUCUDABuffer on
- *   success, %NULL on error. The buffer has data from the IPC target.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new_ipc(GArrowGPUCUDAContext *context,
-                               GArrowGPUCUDAIPCMemoryHandle *handle,
-                               GError **error)
-{
-  auto arrow_context = garrow_gpu_cuda_context_get_raw(context);
-  auto arrow_handle = garrow_gpu_cuda_ipc_memory_handle_get_raw(handle);
-  std::shared_ptr<arrow::gpu::CudaBuffer> arrow_buffer;
-  auto status = arrow_context->OpenIpcBuffer(*arrow_handle, &arrow_buffer);
-  if (garrow_error_check(error, status,
-                         "[gpu][cuda][buffer][new-ipc]")) {
-    return garrow_gpu_cuda_buffer_new_raw(&arrow_buffer);
-  } else {
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_buffer_new_record_batch:
- * @context: A #GArrowGPUCUDAContext.
- * @record_batch: A #GArrowRecordBatch to be serialized.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowGPUCUDABuffer on
- *   success, %NULL on error. The buffer has serialized record batch
- *   data.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new_record_batch(GArrowGPUCUDAContext *context,
-                                        GArrowRecordBatch *record_batch,
-                                        GError **error)
-{
-  auto arrow_context = garrow_gpu_cuda_context_get_raw(context);
-  auto arrow_record_batch = garrow_record_batch_get_raw(record_batch);
-  std::shared_ptr<arrow::gpu::CudaBuffer> arrow_buffer;
-  auto status = arrow::gpu::SerializeRecordBatch(*arrow_record_batch,
-                                                 arrow_context.get(),
-                                                 &arrow_buffer);
-  if (garrow_error_check(error, status,
-                         "[gpu][cuda][buffer][new-record-batch]")) {
-    return garrow_gpu_cuda_buffer_new_raw(&arrow_buffer);
-  } else {
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_buffer_copy_to_host:
- * @buffer: A #GArrowGPUCUDABuffer.
- * @position: The offset of memory on GPU device to be copied.
- * @size: The size of memory on GPU device to be copied in bytes.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A #GBytes that contains the copied memory
- *   on the CPU host on success, %NULL on error.
- *
- * Since: 0.8.0
- */
-GBytes *
-garrow_gpu_cuda_buffer_copy_to_host(GArrowGPUCUDABuffer *buffer,
-                                    gint64 position,
-                                    gint64 size,
-                                    GError **error)
-{
-  auto arrow_buffer = garrow_gpu_cuda_buffer_get_raw(buffer);
-  auto data = static_cast<uint8_t *>(g_malloc(size));
-  auto status = arrow_buffer->CopyToHost(position, size, data);
-  if (garrow_error_check(error, status, "[gpu][cuda][buffer][copy-to-host]")) {
-    return g_bytes_new_take(data, size);
-  } else {
-    g_free(data);
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_buffer_copy_from_host:
- * @buffer: A #GArrowGPUCUDABuffer.
- * @data: (array length=size): Data on CPU host to be copied.
- * @size: The size of data on CPU host to be copied in bytes.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: %TRUE on success, %FALSE if there was an error.
- *
- * Since: 0.8.0
- */
-gboolean
-garrow_gpu_cuda_buffer_copy_from_host(GArrowGPUCUDABuffer *buffer,
-                                      const guint8 *data,
-                                      gint64 size,
-                                      GError **error)
-{
-  auto arrow_buffer = garrow_gpu_cuda_buffer_get_raw(buffer);
-  auto status = arrow_buffer->CopyFromHost(0, data, size);
-  return garrow_error_check(error,
-                            status,
-                            "[gpu][cuda][buffer][copy-from-host]");
-}
-
-/**
- * garrow_gpu_cuda_buffer_export:
- * @buffer: A #GArrowGPUCUDABuffer.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created
- *   #GArrowGPUCUDAIPCMemoryHandle to handle the exported buffer on
- *   success, %NULL on error.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDAIPCMemoryHandle *
-garrow_gpu_cuda_buffer_export(GArrowGPUCUDABuffer *buffer, GError **error)
-{
-  auto arrow_buffer = garrow_gpu_cuda_buffer_get_raw(buffer);
-  std::shared_ptr<arrow::gpu::CudaIpcMemHandle> arrow_handle;
-  auto status = arrow_buffer->ExportForIpc(&arrow_handle);
-  if (garrow_error_check(error, status, "[gpu][cuda][buffer][export-for-ipc]")) {
-    return garrow_gpu_cuda_ipc_memory_handle_new_raw(&arrow_handle);
-  } else {
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_buffer_get_context:
- * @buffer: A #GArrowGPUCUDABuffer.
- *
- * Returns: (transfer full): A newly created #GArrowGPUCUDAContext for the
- *   buffer. Contexts for the same buffer share the same data internally.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDAContext *
-garrow_gpu_cuda_buffer_get_context(GArrowGPUCUDABuffer *buffer)
-{
-  auto arrow_buffer = garrow_gpu_cuda_buffer_get_raw(buffer);
-  auto arrow_context = arrow_buffer->context();
-  return garrow_gpu_cuda_context_new_raw(&arrow_context);
-}
-
-/**
- * garrow_gpu_cuda_buffer_read_record_batch:
- * @buffer: A #GArrowGPUCUDABuffer.
- * @schema: A #GArrowSchema for record batch.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowRecordBatch on
- *   success, %NULL on error. The record batch data is located on GPU.
- *
- * Since: 0.8.0
- */
-GArrowRecordBatch *
-garrow_gpu_cuda_buffer_read_record_batch(GArrowGPUCUDABuffer *buffer,
-                                         GArrowSchema *schema,
-                                         GError **error)
-{
-  auto arrow_buffer = garrow_gpu_cuda_buffer_get_raw(buffer);
-  auto arrow_schema = garrow_schema_get_raw(schema);
-  auto pool = arrow::default_memory_pool();
-  std::shared_ptr<arrow::RecordBatch> arrow_record_batch;
-  auto status = arrow::gpu::ReadRecordBatch(arrow_schema,
-                                            arrow_buffer,
-                                            pool,
-                                            &arrow_record_batch);
-  if (garrow_error_check(error, status,
-                         "[gpu][cuda][buffer][read-record-batch]")) {
-    return garrow_record_batch_new_raw(&arrow_record_batch);
-  } else {
-    return NULL;
-  }
-}
-
-
-G_DEFINE_TYPE(GArrowGPUCUDAHostBuffer,
-              garrow_gpu_cuda_host_buffer,
-              GARROW_TYPE_MUTABLE_BUFFER)
-
-static void
-garrow_gpu_cuda_host_buffer_init(GArrowGPUCUDAHostBuffer *object)
-{
-}
-
-static void
-garrow_gpu_cuda_host_buffer_class_init(GArrowGPUCUDAHostBufferClass *klass)
-{
-}
-
-/**
- * garrow_gpu_cuda_host_buffer_new:
- * @gpu_number: A GPU device number for the target context.
- * @size: The number of bytes to be allocated on CPU host.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowGPUCUDAHostBuffer
- *   on success, %NULL on error. The allocated memory is accessible
- *   from the GPU device specified by @gpu_number.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDAHostBuffer *
-garrow_gpu_cuda_host_buffer_new(gint gpu_number, gint64 size, GError **error)
-{
-  arrow::gpu::CudaDeviceManager *manager;
-  auto status = arrow::gpu::CudaDeviceManager::GetInstance(&manager);
-  if (!garrow_error_check(error, status, "[gpu][cuda][host-buffer][new]")) {
-    return NULL;
-  }
-  std::shared_ptr<arrow::gpu::CudaHostBuffer> arrow_buffer;
-  status = manager->AllocateHost(gpu_number, size, &arrow_buffer);
-  if (garrow_error_check(error, status, "[gpu][cuda][host-buffer][new]")) {
-    return garrow_gpu_cuda_host_buffer_new_raw(&arrow_buffer);
-  } else {
-    return NULL;
-  }
-}
-
-
-typedef struct GArrowGPUCUDAIPCMemoryHandlePrivate_ {
-  std::shared_ptr<arrow::gpu::CudaIpcMemHandle> ipc_memory_handle;
-} GArrowGPUCUDAIPCMemoryHandlePrivate;
-
-enum {
-  PROP_IPC_MEMORY_HANDLE = 1
-};
-
-G_DEFINE_TYPE_WITH_PRIVATE(GArrowGPUCUDAIPCMemoryHandle,
-                           garrow_gpu_cuda_ipc_memory_handle,
-                           G_TYPE_OBJECT)
-
-#define GARROW_GPU_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(object)   \
-  static_cast<GArrowGPUCUDAIPCMemoryHandlePrivate *>(           \
-    garrow_gpu_cuda_ipc_memory_handle_get_instance_private(     \
-      GARROW_GPU_CUDA_IPC_MEMORY_HANDLE(object)))
-
-static void
-garrow_gpu_cuda_ipc_memory_handle_finalize(GObject *object)
-{
-  auto priv = GARROW_GPU_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(object);
-
-  priv->ipc_memory_handle = nullptr;
-
-  G_OBJECT_CLASS(garrow_gpu_cuda_ipc_memory_handle_parent_class)->finalize(object);
-}
-
-static void
-garrow_gpu_cuda_ipc_memory_handle_set_property(GObject *object,
-                                               guint prop_id,
-                                               const GValue *value,
-                                               GParamSpec *pspec)
-{
-  auto priv = GARROW_GPU_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(object);
-
-  switch (prop_id) {
-  case PROP_IPC_MEMORY_HANDLE:
-    priv->ipc_memory_handle =
-      *static_cast<std::shared_ptr<arrow::gpu::CudaIpcMemHandle> *>(g_value_get_pointer(value));
-    break;
-  default:
-    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
-    break;
-  }
-}
-
-static void
-garrow_gpu_cuda_ipc_memory_handle_get_property(GObject *object,
-                                               guint prop_id,
-                                               GValue *value,
-                                               GParamSpec *pspec)
-{
-  switch (prop_id) {
-  default:
-    G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
-    break;
-  }
-}
-
-static void
-garrow_gpu_cuda_ipc_memory_handle_init(GArrowGPUCUDAIPCMemoryHandle *object)
-{
-}
-
-static void
-garrow_gpu_cuda_ipc_memory_handle_class_init(GArrowGPUCUDAIPCMemoryHandleClass *klass)
-{
-  GParamSpec *spec;
-
-  auto gobject_class = G_OBJECT_CLASS(klass);
-
-  gobject_class->finalize     = garrow_gpu_cuda_ipc_memory_handle_finalize;
-  gobject_class->set_property = garrow_gpu_cuda_ipc_memory_handle_set_property;
-  gobject_class->get_property = garrow_gpu_cuda_ipc_memory_handle_get_property;
-
-  /**
-   * GArrowGPUCUDAIPCMemoryHandle:ipc-memory-handle:
-   *
-   * Since: 0.8.0
-   */
-  spec = g_param_spec_pointer("ipc-memory-handle",
-                              "IPC Memory Handle",
-                              "The raw std::shared_ptr<arrow::gpu::CudaIpcMemHandle>",
-                              static_cast<GParamFlags>(G_PARAM_WRITABLE |
-                                                       G_PARAM_CONSTRUCT_ONLY));
-  g_object_class_install_property(gobject_class, PROP_IPC_MEMORY_HANDLE, spec);
-}
-
-/**
- * garrow_gpu_cuda_ipc_memory_handle_new:
- * @data: (array length=size): A serialized #GArrowGPUCUDAIPCMemoryHandle.
- * @size: The size of data.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowGPUCUDAIPCMemoryHandle
- *   on success, %NULL on error.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDAIPCMemoryHandle *
-garrow_gpu_cuda_ipc_memory_handle_new(const guint8 *data,
-                                      gsize size,
-                                      GError **error)
-{
-  std::shared_ptr<arrow::gpu::CudaIpcMemHandle> arrow_handle;
-  auto status = arrow::gpu::CudaIpcMemHandle::FromBuffer(data, &arrow_handle);
-  if (garrow_error_check(error, status,
-                         "[gpu][cuda][ipc-memory-handle][new]")) {
-    return garrow_gpu_cuda_ipc_memory_handle_new_raw(&arrow_handle);
-  } else {
-    return NULL;
-  }
-}
-
-/**
- * garrow_gpu_cuda_ipc_memory_handle_serialize:
- * @handle: A #GArrowGPUCUDAIPCMemoryHandle.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Returns: (transfer full): A newly created #GArrowBuffer on success,
- *   %NULL on error. The buffer contains the serialized @handle, which
- *   can be deserialized by garrow_gpu_cuda_ipc_memory_handle_new() in
- *   another process.
- *
- * Since: 0.8.0
- */
-GArrowBuffer *
-garrow_gpu_cuda_ipc_memory_handle_serialize(GArrowGPUCUDAIPCMemoryHandle *handle,
-                                            GError **error)
-{
-  auto arrow_handle = garrow_gpu_cuda_ipc_memory_handle_get_raw(handle);
-  std::shared_ptr<arrow::Buffer> arrow_buffer;
-  auto status = arrow_handle->Serialize(arrow::default_memory_pool(),
-                                        &arrow_buffer);
-  if (garrow_error_check(error, status,
-                         "[gpu][cuda][ipc-memory-handle][serialize]")) {
-    return garrow_buffer_new_raw(&arrow_buffer);
-  } else {
-    return NULL;
-  }
-}
-
-GArrowBuffer *
-garrow_gpu_cuda_buffer_input_stream_new_raw_readable_interface(std::shared_ptr<arrow::Buffer> *arrow_buffer)
-{
-  auto buffer = GARROW_BUFFER(g_object_new(GARROW_GPU_TYPE_CUDA_BUFFER,
-                                           "buffer", arrow_buffer,
-                                           NULL));
-  return buffer;
-}
-
-static std::shared_ptr<arrow::io::Readable>
-garrow_gpu_cuda_buffer_input_stream_get_raw_readable_interface(GArrowReadable *readable)
-{
-  auto input_stream = GARROW_INPUT_STREAM(readable);
-  auto arrow_input_stream = garrow_input_stream_get_raw(input_stream);
-  return arrow_input_stream;
-}
-
-static void
-garrow_gpu_cuda_buffer_input_stream_readable_interface_init(GArrowReadableInterface *iface)
-{
-  iface->new_raw =
-    garrow_gpu_cuda_buffer_input_stream_new_raw_readable_interface;
-  iface->get_raw =
-    garrow_gpu_cuda_buffer_input_stream_get_raw_readable_interface;
-}
-
-G_DEFINE_TYPE_WITH_CODE(
-  GArrowGPUCUDABufferInputStream,
-  garrow_gpu_cuda_buffer_input_stream,
-  GARROW_TYPE_BUFFER_INPUT_STREAM,
-  G_IMPLEMENT_INTERFACE(
-    GARROW_TYPE_READABLE,
-    garrow_gpu_cuda_buffer_input_stream_readable_interface_init))
-
-static void
-garrow_gpu_cuda_buffer_input_stream_init(GArrowGPUCUDABufferInputStream *object)
-{
-}
-
-static void
-garrow_gpu_cuda_buffer_input_stream_class_init(GArrowGPUCUDABufferInputStreamClass *klass)
-{
-}
-
-/**
- * garrow_gpu_cuda_buffer_input_stream_new:
- * @buffer: A #GArrowGPUCUDABuffer.
- *
- * Returns: (transfer full): A newly created
- *   #GArrowGPUCUDABufferInputStream.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDABufferInputStream *
-garrow_gpu_cuda_buffer_input_stream_new(GArrowGPUCUDABuffer *buffer)
-{
-  auto arrow_buffer = garrow_gpu_cuda_buffer_get_raw(buffer);
-  auto arrow_reader =
-    std::make_shared<arrow::gpu::CudaBufferReader>(arrow_buffer);
-  return garrow_gpu_cuda_buffer_input_stream_new_raw(&arrow_reader);
-}
-
-
-G_DEFINE_TYPE(GArrowGPUCUDABufferOutputStream,
-              garrow_gpu_cuda_buffer_output_stream,
-              GARROW_TYPE_OUTPUT_STREAM)
-
-static void
-garrow_gpu_cuda_buffer_output_stream_init(GArrowGPUCUDABufferOutputStream *object)
-{
-}
-
-static void
-garrow_gpu_cuda_buffer_output_stream_class_init(GArrowGPUCUDABufferOutputStreamClass *klass)
-{
-}
-
-/**
- * garrow_gpu_cuda_buffer_output_stream_new:
- * @buffer: A #GArrowGPUCUDABuffer.
- *
- * Returns: (transfer full): A newly created
- *   #GArrowGPUCUDABufferOutputStream.
- *
- * Since: 0.8.0
- */
-GArrowGPUCUDABufferOutputStream *
-garrow_gpu_cuda_buffer_output_stream_new(GArrowGPUCUDABuffer *buffer)
-{
-  auto arrow_buffer = garrow_gpu_cuda_buffer_get_raw(buffer);
-  auto arrow_writer =
-    std::make_shared<arrow::gpu::CudaBufferWriter>(arrow_buffer);
-  return garrow_gpu_cuda_buffer_output_stream_new_raw(&arrow_writer);
-}
-
-/**
- * garrow_gpu_cuda_buffer_output_stream_set_buffer_size:
- * @stream: A #GArrowGPUCUDABufferOutputStream.
- * @size: The size of the CPU buffer in bytes.
- * @error: (nullable): Return location for a #GError or %NULL.
- *
- * Sets the CPU buffer size to limit `cudaMemcpy()` calls. If the CPU
- * buffer size is `0`, buffering is disabled.
- *
- * The default is `0`.
- *
- * Returns: %TRUE on success, %FALSE if there was an error.
- *
- * Since: 0.8.0
- */
-gboolean
-garrow_gpu_cuda_buffer_output_stream_set_buffer_size(GArrowGPUCUDABufferOutputStream *stream,
-                                                     gint64 size,
-                                                     GError **error)
-{
-  auto arrow_stream = garrow_gpu_cuda_buffer_output_stream_get_raw(stream);
-  auto status = arrow_stream->SetBufferSize(size);
-  return garrow_error_check(error,
-                            status,
-                            "[gpu][cuda][buffer-output-stream][set-buffer-size]");
-}
-
-/**
- * garrow_gpu_cuda_buffer_output_stream_get_buffer_size:
- * @stream: A #GArrowGPUCUDABufferOutputStream.
- *
- * Returns: The CPU buffer size in bytes.
- *
- * See garrow_gpu_cuda_buffer_output_stream_set_buffer_size() for CPU
- * buffer size details.
- *
- * Since: 0.8.0
- */
-gint64
-garrow_gpu_cuda_buffer_output_stream_get_buffer_size(GArrowGPUCUDABufferOutputStream *stream)
-{
-  auto arrow_stream = garrow_gpu_cuda_buffer_output_stream_get_raw(stream);
-  return arrow_stream->buffer_size();
-}
-
-/**
- * garrow_gpu_cuda_buffer_output_stream_get_buffered_size:
- * @stream: A #GArrowGPUCUDABufferOutputStream.
- *
- * Returns: The size of buffered data in bytes.
- *
- * Since: 0.8.0
- */
-gint64
-garrow_gpu_cuda_buffer_output_stream_get_buffered_size(GArrowGPUCUDABufferOutputStream *stream)
-{
-  auto arrow_stream = garrow_gpu_cuda_buffer_output_stream_get_raw(stream);
-  return arrow_stream->num_bytes_buffered();
-}
-
-
-G_END_DECLS
-
-GArrowGPUCUDAContext *
-garrow_gpu_cuda_context_new_raw(std::shared_ptr<arrow::gpu::CudaContext> *arrow_context)
-{
-  return GARROW_GPU_CUDA_CONTEXT(g_object_new(GARROW_GPU_TYPE_CUDA_CONTEXT,
-                                              "context", arrow_context,
-                                              NULL));
-}
-
-std::shared_ptr<arrow::gpu::CudaContext>
-garrow_gpu_cuda_context_get_raw(GArrowGPUCUDAContext *context)
-{
-  if (!context)
-    return nullptr;
-
-  auto priv = GARROW_GPU_CUDA_CONTEXT_GET_PRIVATE(context);
-  return priv->context;
-}
-
-GArrowGPUCUDAIPCMemoryHandle *
-garrow_gpu_cuda_ipc_memory_handle_new_raw(std::shared_ptr<arrow::gpu::CudaIpcMemHandle> *arrow_handle)
-{
-  auto handle = g_object_new(GARROW_GPU_TYPE_CUDA_IPC_MEMORY_HANDLE,
-                             "ipc-memory-handle", arrow_handle,
-                             NULL);
-  return GARROW_GPU_CUDA_IPC_MEMORY_HANDLE(handle);
-}
-
-std::shared_ptr<arrow::gpu::CudaIpcMemHandle>
-garrow_gpu_cuda_ipc_memory_handle_get_raw(GArrowGPUCUDAIPCMemoryHandle *handle)
-{
-  if (!handle)
-    return nullptr;
-
-  auto priv = GARROW_GPU_CUDA_IPC_MEMORY_HANDLE_GET_PRIVATE(handle);
-  return priv->ipc_memory_handle;
-}
-
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new_raw(std::shared_ptr<arrow::gpu::CudaBuffer> *arrow_buffer)
-{
-  return GARROW_GPU_CUDA_BUFFER(g_object_new(GARROW_GPU_TYPE_CUDA_BUFFER,
-                                             "buffer", arrow_buffer,
-                                             NULL));
-}
-
-std::shared_ptr<arrow::gpu::CudaBuffer>
-garrow_gpu_cuda_buffer_get_raw(GArrowGPUCUDABuffer *buffer)
-{
-  if (!buffer)
-    return nullptr;
-
-  auto arrow_buffer = garrow_buffer_get_raw(GARROW_BUFFER(buffer));
-  return std::static_pointer_cast<arrow::gpu::CudaBuffer>(arrow_buffer);
-}
-
-GArrowGPUCUDAHostBuffer *
-garrow_gpu_cuda_host_buffer_new_raw(std::shared_ptr<arrow::gpu::CudaHostBuffer> *arrow_buffer)
-{
-  auto buffer = g_object_new(GARROW_GPU_TYPE_CUDA_HOST_BUFFER,
-                             "buffer", arrow_buffer,
-                             NULL);
-  return GARROW_GPU_CUDA_HOST_BUFFER(buffer);
-}
-
-std::shared_ptr<arrow::gpu::CudaHostBuffer>
-garrow_gpu_cuda_host_buffer_get_raw(GArrowGPUCUDAHostBuffer *buffer)
-{
-  if (!buffer)
-    return nullptr;
-
-  auto arrow_buffer = garrow_buffer_get_raw(GARROW_BUFFER(buffer));
-  return std::static_pointer_cast<arrow::gpu::CudaHostBuffer>(arrow_buffer);
-}
-
-GArrowGPUCUDABufferInputStream *
-garrow_gpu_cuda_buffer_input_stream_new_raw(std::shared_ptr<arrow::gpu::CudaBufferReader> *arrow_reader)
-{
-  auto input_stream = g_object_new(GARROW_GPU_TYPE_CUDA_BUFFER_INPUT_STREAM,
-                                   "input-stream", arrow_reader,
-                                   NULL);
-  return GARROW_GPU_CUDA_BUFFER_INPUT_STREAM(input_stream);
-}
-
-std::shared_ptr<arrow::gpu::CudaBufferReader>
-garrow_gpu_cuda_buffer_input_stream_get_raw(GArrowGPUCUDABufferInputStream *input_stream)
-{
-  if (!input_stream)
-    return nullptr;
-
-  auto arrow_reader =
-    garrow_input_stream_get_raw(GARROW_INPUT_STREAM(input_stream));
-  return std::static_pointer_cast<arrow::gpu::CudaBufferReader>(arrow_reader);
-}
-
-GArrowGPUCUDABufferOutputStream *
-garrow_gpu_cuda_buffer_output_stream_new_raw(std::shared_ptr<arrow::gpu::CudaBufferWriter> *arrow_writer)
-{
-  auto output_stream = g_object_new(GARROW_GPU_TYPE_CUDA_BUFFER_OUTPUT_STREAM,
-                                    "output-stream", arrow_writer,
-                                    NULL);
-  return GARROW_GPU_CUDA_BUFFER_OUTPUT_STREAM(output_stream);
-}
-
-std::shared_ptr<arrow::gpu::CudaBufferWriter>
-garrow_gpu_cuda_buffer_output_stream_get_raw(GArrowGPUCUDABufferOutputStream *output_stream)
-{
-  if (!output_stream)
-    return nullptr;
-
-  auto arrow_writer =
-    garrow_output_stream_get_raw(GARROW_OUTPUT_STREAM(output_stream));
-  return std::static_pointer_cast<arrow::gpu::CudaBufferWriter>(arrow_writer);
-}
diff --git a/c_glib/arrow-gpu-glib/cuda.h b/c_glib/arrow-gpu-glib/cuda.h
deleted file mode 100644
index f45a46a..0000000
--- a/c_glib/arrow-gpu-glib/cuda.h
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-#pragma once
-
-#include <arrow-glib/arrow-glib.h>
-
-G_BEGIN_DECLS
-
-#define GARROW_GPU_TYPE_CUDA_DEVICE_MANAGER     \
-  (garrow_gpu_cuda_device_manager_get_type())
-G_DECLARE_DERIVABLE_TYPE(GArrowGPUCUDADeviceManager,
-                         garrow_gpu_cuda_device_manager,
-                         GARROW_GPU,
-                         CUDA_DEVICE_MANAGER,
-                         GObject)
-struct _GArrowGPUCUDADeviceManagerClass
-{
-  GObjectClass parent_class;
-};
-
-#define GARROW_GPU_TYPE_CUDA_CONTEXT (garrow_gpu_cuda_context_get_type())
-G_DECLARE_DERIVABLE_TYPE(GArrowGPUCUDAContext,
-                         garrow_gpu_cuda_context,
-                         GARROW_GPU,
-                         CUDA_CONTEXT,
-                         GObject)
-struct _GArrowGPUCUDAContextClass
-{
-  GObjectClass parent_class;
-};
-
-#define GARROW_GPU_TYPE_CUDA_BUFFER (garrow_gpu_cuda_buffer_get_type())
-G_DECLARE_DERIVABLE_TYPE(GArrowGPUCUDABuffer,
-                         garrow_gpu_cuda_buffer,
-                         GARROW_GPU,
-                         CUDA_BUFFER,
-                         GArrowBuffer)
-struct _GArrowGPUCUDABufferClass
-{
-  GArrowBufferClass parent_class;
-};
-
-#define GARROW_GPU_TYPE_CUDA_HOST_BUFFER (garrow_gpu_cuda_host_buffer_get_type())
-G_DECLARE_DERIVABLE_TYPE(GArrowGPUCUDAHostBuffer,
-                         garrow_gpu_cuda_host_buffer,
-                         GARROW_GPU,
-                         CUDA_HOST_BUFFER,
-                         GArrowMutableBuffer)
-struct _GArrowGPUCUDAHostBufferClass
-{
-  GArrowMutableBufferClass parent_class;
-};
-
-#define GARROW_GPU_TYPE_CUDA_IPC_MEMORY_HANDLE          \
-  (garrow_gpu_cuda_ipc_memory_handle_get_type())
-G_DECLARE_DERIVABLE_TYPE(GArrowGPUCUDAIPCMemoryHandle,
-                         garrow_gpu_cuda_ipc_memory_handle,
-                         GARROW_GPU,
-                         CUDA_IPC_MEMORY_HANDLE,
-                         GObject)
-struct _GArrowGPUCUDAIPCMemoryHandleClass
-{
-  GObjectClass parent_class;
-};
-
-#define GARROW_GPU_TYPE_CUDA_BUFFER_INPUT_STREAM        \
-  (garrow_gpu_cuda_buffer_input_stream_get_type())
-G_DECLARE_DERIVABLE_TYPE(GArrowGPUCUDABufferInputStream,
-                         garrow_gpu_cuda_buffer_input_stream,
-                         GARROW_GPU,
-                         CUDA_BUFFER_INPUT_STREAM,
-                         GArrowBufferInputStream)
-struct _GArrowGPUCUDABufferInputStreamClass
-{
-  GArrowBufferInputStreamClass parent_class;
-};
-
-#define GARROW_GPU_TYPE_CUDA_BUFFER_OUTPUT_STREAM               \
-  (garrow_gpu_cuda_buffer_output_stream_get_type())
-G_DECLARE_DERIVABLE_TYPE(GArrowGPUCUDABufferOutputStream,
-                         garrow_gpu_cuda_buffer_output_stream,
-                         GARROW_GPU,
-                         CUDA_BUFFER_OUTPUT_STREAM,
-                         GArrowOutputStream)
-struct _GArrowGPUCUDABufferOutputStreamClass
-{
-  GArrowOutputStreamClass parent_class;
-};
-
-GArrowGPUCUDADeviceManager *
-garrow_gpu_cuda_device_manager_new(GError **error);
-
-GArrowGPUCUDAContext *
-garrow_gpu_cuda_device_manager_get_context(GArrowGPUCUDADeviceManager *manager,
-                                           gint gpu_number,
-                                           GError **error);
-gsize
-garrow_gpu_cuda_device_manager_get_n_devices(GArrowGPUCUDADeviceManager *manager);
-
-gint64
-garrow_gpu_cuda_context_get_allocated_size(GArrowGPUCUDAContext *context);
-
-
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new(GArrowGPUCUDAContext *context,
-                           gint64 size,
-                           GError **error);
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new_ipc(GArrowGPUCUDAContext *context,
-                               GArrowGPUCUDAIPCMemoryHandle *handle,
-                               GError **error);
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new_record_batch(GArrowGPUCUDAContext *context,
-                                        GArrowRecordBatch *record_batch,
-                                        GError **error);
-GBytes *
-garrow_gpu_cuda_buffer_copy_to_host(GArrowGPUCUDABuffer *buffer,
-                                    gint64 position,
-                                    gint64 size,
-                                    GError **error);
-gboolean
-garrow_gpu_cuda_buffer_copy_from_host(GArrowGPUCUDABuffer *buffer,
-                                      const guint8 *data,
-                                      gint64 size,
-                                      GError **error);
-GArrowGPUCUDAIPCMemoryHandle *
-garrow_gpu_cuda_buffer_export(GArrowGPUCUDABuffer *buffer,
-                              GError **error);
-GArrowGPUCUDAContext *
-garrow_gpu_cuda_buffer_get_context(GArrowGPUCUDABuffer *buffer);
-GArrowRecordBatch *
-garrow_gpu_cuda_buffer_read_record_batch(GArrowGPUCUDABuffer *buffer,
-                                         GArrowSchema *schema,
-                                         GError **error);
-
-
-GArrowGPUCUDAHostBuffer *
-garrow_gpu_cuda_host_buffer_new(gint gpu_number,
-                                gint64 size,
-                                GError **error);
-
-GArrowGPUCUDAIPCMemoryHandle *
-garrow_gpu_cuda_ipc_memory_handle_new(const guint8 *data,
-                                      gsize size,
-                                      GError **error);
-
-GArrowBuffer *
-garrow_gpu_cuda_ipc_memory_handle_serialize(GArrowGPUCUDAIPCMemoryHandle *handle,
-                                            GError **error);
-
-GArrowGPUCUDABufferInputStream *
-garrow_gpu_cuda_buffer_input_stream_new(GArrowGPUCUDABuffer *buffer);
-
-GArrowGPUCUDABufferOutputStream *
-garrow_gpu_cuda_buffer_output_stream_new(GArrowGPUCUDABuffer *buffer);
-
-gboolean
-garrow_gpu_cuda_buffer_output_stream_set_buffer_size(GArrowGPUCUDABufferOutputStream *stream,
-                                                     gint64 size,
-                                                     GError **error);
-gint64
-garrow_gpu_cuda_buffer_output_stream_get_buffer_size(GArrowGPUCUDABufferOutputStream *stream);
-gint64
-garrow_gpu_cuda_buffer_output_stream_get_buffered_size(GArrowGPUCUDABufferOutputStream *stream);
-
-G_END_DECLS
diff --git a/c_glib/arrow-gpu-glib/cuda.hpp b/c_glib/arrow-gpu-glib/cuda.hpp
deleted file mode 100644
index 4b5b03c..0000000
--- a/c_glib/arrow-gpu-glib/cuda.hpp
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-#pragma once
-
-#include <arrow/gpu/cuda_api.h>
-
-#include <arrow-gpu-glib/cuda.h>
-
-GArrowGPUCUDAContext *
-garrow_gpu_cuda_context_new_raw(std::shared_ptr<arrow::gpu::CudaContext> *arrow_context);
-std::shared_ptr<arrow::gpu::CudaContext>
-garrow_gpu_cuda_context_get_raw(GArrowGPUCUDAContext *context);
-
-GArrowGPUCUDAIPCMemoryHandle *
-garrow_gpu_cuda_ipc_memory_handle_new_raw(std::shared_ptr<arrow::gpu::CudaIpcMemHandle> *arrow_handle);
-std::shared_ptr<arrow::gpu::CudaIpcMemHandle>
-garrow_gpu_cuda_ipc_memory_handle_get_raw(GArrowGPUCUDAIPCMemoryHandle *handle);
-
-GArrowGPUCUDABuffer *
-garrow_gpu_cuda_buffer_new_raw(std::shared_ptr<arrow::gpu::CudaBuffer> *arrow_buffer);
-std::shared_ptr<arrow::gpu::CudaBuffer>
-garrow_gpu_cuda_buffer_get_raw(GArrowGPUCUDABuffer *buffer);
-
-GArrowGPUCUDAHostBuffer *
-garrow_gpu_cuda_host_buffer_new_raw(std::shared_ptr<arrow::gpu::CudaHostBuffer> *arrow_buffer);
-std::shared_ptr<arrow::gpu::CudaHostBuffer>
-garrow_gpu_cuda_host_buffer_get_raw(GArrowGPUCUDAHostBuffer *buffer);
-
-GArrowGPUCUDABufferInputStream *
-garrow_gpu_cuda_buffer_input_stream_new_raw(std::shared_ptr<arrow::gpu::CudaBufferReader> *arrow_reader);
-std::shared_ptr<arrow::gpu::CudaBufferReader>
-garrow_gpu_cuda_buffer_input_stream_get_raw(GArrowGPUCUDABufferInputStream *input_stream);
-
-GArrowGPUCUDABufferOutputStream *
-garrow_gpu_cuda_buffer_output_stream_new_raw(std::shared_ptr<arrow::gpu::CudaBufferWriter> *arrow_writer);
-std::shared_ptr<arrow::gpu::CudaBufferWriter>
-garrow_gpu_cuda_buffer_output_stream_get_raw(GArrowGPUCUDABufferOutputStream *output_stream);
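The deleted headers above illustrate the full rename pattern: `arrow::gpu` becomes `arrow::cuda`, `garrow_gpu_cuda_*` becomes `garrow_cuda_*`, and `GArrowGPUCUDA*` becomes `GArrowCUDA*`. Downstream code can be migrated mechanically along the same lines; the following is only a sketch (the demo file is illustrative, and the rule ordering matters — the `garrow_gpu_cuda_` rule must run before the generic `arrow_gpu` rule so it is not mangled into `garrow_cuda_cuda_`):

```shell
# Hypothetical migration of a downstream source file for ARROW-3209.
# Review the results by hand: "gpu" may appear in unrelated identifiers.
demo="$(mktemp)"
printf 'arrow::gpu::CudaBuffer buf;\ngarrow_gpu_cuda_buffer_new_raw(&b);\n' > "$demo"
sed -i.bak \
  -e 's/garrow_gpu_cuda_/garrow_cuda_/g' \
  -e 's/GArrowGPUCUDA/GArrowCUDA/g' \
  -e 's/arrow::gpu::/arrow::cuda::/g' \
  -e 's/arrow_gpu/arrow_cuda/g' \
  -e 's/arrow-gpu-glib/arrow-cuda-glib/g' \
  "$demo"
migrated="$(cat "$demo")"
echo "$migrated"
rm -f "$demo" "$demo.bak"
```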
diff --git a/c_glib/arrow-gpu-glib/meson.build b/c_glib/arrow-gpu-glib/meson.build
deleted file mode 100644
index 680982e..0000000
--- a/c_glib/arrow-gpu-glib/meson.build
+++ /dev/null
@@ -1,79 +0,0 @@
-# -*- indent-tabs-mode: nil -*-
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-sources = files(
-  'cuda.cpp',
-)
-
-c_headers = files(
-  'arrow-gpu-glib.h',
-  'cuda.h',
-)
-
-cpp_headers = files(
-  'arrow-gpu-glib.hpp',
-  'cuda.hpp',
-)
-
-headers = c_headers + cpp_headers
-install_headers(headers, subdir: 'arrow-gpu-glib')
-
-
-dependencies = [
-  arrow_gpu,
-  arrow_glib,
-]
-libarrow_gpu_glib = library('arrow-gpu-glib',
-                            sources: sources,
-                            install: true,
-                            dependencies: dependencies,
-                            include_directories: base_include_directories,
-                            soversion: so_version,
-                            version: library_version)
-arrow_gpu_glib = declare_dependency(link_with: libarrow_gpu_glib,
-                                    include_directories: base_include_directories,
-                                    dependencies: dependencies)
-
-pkgconfig.generate(filebase: 'arrow-gpu-glib',
-                   name: 'Apache Arrow GPU GLib',
-                   description: 'C API for Apache Arrow GPU based on GLib',
-                   version: version,
-                   requires: ['arrow-glib', 'arrow-gpu'],
-                   libraries: [libarrow_gpu_glib])
-
-gir_dependencies = [
-  declare_dependency(sources: arrow_glib_gir),
-]
-gir_extra_args = [
-  '--warn-all',
-  '--include-uninstalled=./arrow-glib/Arrow-1.0.gir',
-]
-arrow_gpu_glib_gir = gnome.generate_gir(libarrow_gpu_glib,
-                                        dependencies: gir_dependencies,
-                                        sources: sources + c_headers,
-                                        namespace: 'ArrowGPU',
-                                        nsversion: api_version,
-                                        identifier_prefix: 'GArrowGPU',
-                                        symbol_prefix: 'garrow_gpu',
-                                        export_packages: 'arrow-gpu-glib',
-                                        includes: [
-                                          'Arrow-1.0',
-                                        ],
-                                        install: true,
-                                        extra_args: gir_extra_args)
diff --git a/c_glib/configure.ac b/c_glib/configure.ac
index b84e3d3..a6d8ed8 100644
--- a/c_glib/configure.ac
+++ b/c_glib/configure.ac
@@ -115,6 +115,7 @@ AC_ARG_WITH(arrow-cpp-build-type,
   [GARROW_ARROW_CPP_BUILD_TYPE="$withval"],
   [GARROW_ARROW_CPP_BUILD_TYPE="release"])
 
+ARROW_CUDA_PKG_CONFIG_PATH=""
 AC_ARG_WITH(arrow-cpp-build-dir,
   [AS_HELP_STRING([--with-arrow-cpp-build-dir=PATH],
                   [Use this option to build with not installed Arrow C++])],
@@ -130,10 +131,10 @@ if test "x$GARROW_ARROW_CPP_BUILD_DIR" = "x"; then
                     [arrow-orc],
                     [HAVE_ARROW_ORC=yes],
                     [HAVE_ARROW_ORC=no])
-  PKG_CHECK_MODULES([ARROW_GPU],
-                    [arrow-gpu],
-                    [HAVE_ARROW_GPU=yes],
-                    [HAVE_ARROW_GPU=no])
+  PKG_CHECK_MODULES([ARROW_CUDA],
+                    [arrow-cuda],
+                    [HAVE_ARROW_CUDA=yes],
+                    [HAVE_ARROW_CUDA=no])
   PKG_CHECK_MODULES([GANDIVA],
                     [gandiva],
                     [HAVE_GANDIVA=yes],
@@ -168,16 +169,19 @@ else
     HAVE_ARROW_ORC=no
   fi
 
-  ARROW_GPU_CFLAGS=""
-  if test -f "${GARROW_ARROW_CPP_BUILD_DIR}/src/arrow/gpu/arrow-gpu.pc"; then
-    HAVE_ARROW_GPU=yes
-    ARROW_GPU_LIBS="-larrow_gpu"
+  ARROW_CUDA_CFLAGS=""
+  if test -f "${GARROW_ARROW_CPP_BUILD_DIR}/src/arrow/gpu/arrow-cuda.pc"; then
+    HAVE_ARROW_CUDA=yes
+    ARROW_CUDA_LIBS="-larrow_cuda"
+    ARROW_CUDA_PKG_CONFIG_PATH="\$(ARROW_BUILD_DIR)/src/arrow/gpu"
   else
-    HAVE_ARROW_GPU=no
-    ARROW_GPU_LIBS=""
+    HAVE_ARROW_CUDA=no
+    ARROW_CUDA_LIBS=""
+    ARROW_CUDA_PKG_CONFIG_PATH=""
   fi
-  AC_SUBST(ARROW_GPU_CFLAGS)
-  AC_SUBST(ARROW_GPU_LIBS)
+  AC_SUBST(ARROW_CUDA_CFLAGS)
+  AC_SUBST(ARROW_CUDA_LIBS)
+  AC_SUBST(ARROW_CUDA_PKG_CONFIG_PATH)
 
   GANDIVA_CFLAGS=""
   if test -f "${GARROW_ARROW_CPP_BUILD_DIR}/src/gandiva/gandiva.pc"; then
@@ -221,14 +225,20 @@ if test "$HAVE_ARROW_ORC" = "yes"; then
   AC_DEFINE(HAVE_ARROW_ORC, [1], [Define to 1 if Apache Arrow supports ORC.])
 fi
 
-AM_CONDITIONAL([HAVE_ARROW_GPU], [test "$HAVE_ARROW_GPU" = "yes"])
-if test "$HAVE_ARROW_GPU" = "yes"; then
-  ARROW_GPU_GLIB_PACKAGE="arrow-gpu-glib"
-  AC_DEFINE(HAVE_ARROW_GPU, [1], [Define to 1 if Apache Arrow supports GPU.])
+AM_CONDITIONAL([HAVE_ARROW_CUDA], [test "$HAVE_ARROW_CUDA" = "yes"])
+if test "$HAVE_ARROW_CUDA" = "yes"; then
+  ARROW_CUDA_GLIB_PACKAGE="arrow-cuda-glib"
+  PLASMA_ARROW_CUDA_PKG_CONFIG_PATH=":\$(abs_top_builddir)/arrow-cuda-glib"
+  if test -n "${ARROW_CUDA_PKG_CONFIG_PATH}"; then
+    PLASMA_ARROW_CUDA_PKG_CONFIG_PATH=":${ARROW_CUDA_PKG_CONFIG_PATH}${PLASMA_ARROW_CUDA_PKG_CONFIG_PATH}"
+  fi
+  AC_DEFINE(HAVE_ARROW_CUDA, [1], [Define to 1 if Apache Arrow supports CUDA.])
 else
-  ARROW_GPU_GLIB_PACKAGE=""
+  ARROW_CUDA_GLIB_PACKAGE=""
+  PLASMA_ARROW_CUDA_PKG_CONFIG_PATH=""
 fi
-AC_SUBST(ARROW_GPU_GLIB_PACKAGE)
+AC_SUBST(ARROW_CUDA_GLIB_PACKAGE)
+AC_SUBST(PLASMA_ARROW_CUDA_PKG_CONFIG_PATH)
 
 AM_CONDITIONAL([HAVE_GANDIVA], [test "$HAVE_GANDIVA" = "yes"])
 if test "$HAVE_GANDIVA" = "yes"; then
@@ -250,12 +260,12 @@ AC_SUBST(exampledir)
 
 AC_CONFIG_FILES([
   Makefile
+  arrow-cuda-glib/Makefile
+  arrow-cuda-glib/arrow-cuda-glib.pc
   arrow-glib/Makefile
   arrow-glib/arrow-glib.pc
   arrow-glib/arrow-orc-glib.pc
   arrow-glib/version.h
-  arrow-gpu-glib/Makefile
-  arrow-gpu-glib/arrow-gpu-glib.pc
   gandiva-glib/Makefile
   gandiva-glib/gandiva-glib.pc
   parquet-glib/Makefile
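When Arrow C++ is installed (rather than used from a build directory), the configure logic above reduces to a plain pkg-config probe for the renamed package. A minimal shell sketch of the equivalent check, assuming `pkg-config` is available:

```shell
# Equivalent of the PKG_CHECK_MODULES([ARROW_CUDA], [arrow-cuda], ...) probe.
# arrow-cuda.pc is only installed when Arrow C++ was built with -DARROW_CUDA=on.
if pkg-config --exists arrow-cuda 2>/dev/null; then
  echo "HAVE_ARROW_CUDA=yes"
else
  echo "HAVE_ARROW_CUDA=no"
fi
```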
diff --git a/c_glib/doc/arrow-glib/Makefile.am b/c_glib/doc/arrow-glib/Makefile.am
index ad0c938..db9f00f 100644
--- a/c_glib/doc/arrow-glib/Makefile.am
+++ b/c_glib/doc/arrow-glib/Makefile.am
@@ -55,15 +55,15 @@ AM_CFLAGS =					\
 GTKDOC_LIBS =						\
 	$(top_builddir)/arrow-glib/libarrow-glib.la
 
-if HAVE_ARROW_GPU
+if HAVE_ARROW_CUDA
 DOC_SOURCE_DIR +=				\
-	$(top_srcdir)/arrow-gpu-glib
+	$(top_srcdir)/arrow-cuda-glib
 HFILE_GLOB +=					\
-	$(top_srcdir)/arrow-gpu-glib/*.h
+	$(top_srcdir)/arrow-cuda-glib/*.h
 CFILE_GLOB +=					\
-	$(top_srcdir)/arrow-gpu-glib/*.cpp
+	$(top_srcdir)/arrow-cuda-glib/*.cpp
 GTKDOC_LIBS +=							\
-	$(top_builddir)/arrow-gpu-glib/libarrow-gpu-glib.la
+	$(top_builddir)/arrow-cuda-glib/libarrow-cuda-glib.la
 endif
 
 include $(top_srcdir)/gtk-doc.make
diff --git a/c_glib/doc/arrow-glib/meson.build b/c_glib/doc/arrow-glib/meson.build
index 68050aa..d61a974 100644
--- a/c_glib/doc/arrow-glib/meson.build
+++ b/c_glib/doc/arrow-glib/meson.build
@@ -50,13 +50,13 @@ source_directories = [
 dependencies = [
   arrow_glib,
 ]
-if arrow_gpu.found()
+if arrow_cuda.found()
   source_directories += [
-    join_paths(meson.source_root(), 'arrow-gpu-glib'),
-    join_paths(meson.build_root(), 'arrow-gpu-glib'),
+    join_paths(meson.source_root(), 'arrow-cuda-glib'),
+    join_paths(meson.build_root(), 'arrow-cuda-glib'),
   ]
   dependencies += [
-    arrow_gpu_glib,
+    arrow_cuda_glib,
   ]
 endif
 ignore_headers = []
diff --git a/c_glib/doc/plasma-glib/Makefile.am b/c_glib/doc/plasma-glib/Makefile.am
index f4ef9e5..df872d6 100644
--- a/c_glib/doc/plasma-glib/Makefile.am
+++ b/c_glib/doc/plasma-glib/Makefile.am
@@ -15,10 +15,10 @@
 # specific language governing permissions and limitations
 # under the License.
 
-PLASMA_ARROW_GPU_GTKDOC_LIBS =
-if HAVE_ARROW_GPU
-PLASMA_ARROW_GPU_GTKDOC_LIBS +=					\
-	$(top_builddir)/arrow-gpu-glib/libarrow-gpu-glib.la
+PLASMA_ARROW_CUDA_GTKDOC_LIBS =
+if HAVE_ARROW_CUDA
+PLASMA_ARROW_CUDA_GTKDOC_LIBS +=				\
+	$(top_builddir)/arrow-cuda-glib/libarrow-cuda-glib.la
 endif
 
 if HAVE_PLASMA
@@ -56,7 +56,7 @@ AM_CFLAGS =					\
 
 GTKDOC_LIBS =						\
 	$(top_builddir)/arrow-glib/libarrow-glib.la	\
-	$(PLASMA_ARROW_GPU_GTKDOC_LIBS)			\
+	$(PLASMA_ARROW_CUDA_GTKDOC_LIBS)		\
 	$(top_builddir)/plasma-glib/libplasma-glib.la
 
 include $(top_srcdir)/gtk-doc.make
diff --git a/c_glib/doc/plasma-glib/meson.build b/c_glib/doc/plasma-glib/meson.build
index 95d7db8..9efc53b 100644
--- a/c_glib/doc/plasma-glib/meson.build
+++ b/c_glib/doc/plasma-glib/meson.build
@@ -56,8 +56,8 @@ dependencies = [
   arrow_glib,
   plasma_glib,
 ]
-if arrow_gpu.found()
-  dependencies += [arrow_gpu_glib]
+if arrow_cuda.found()
+  dependencies += [arrow_cuda_glib]
 endif
 ignore_headers = []
 gnome.gtkdoc(project_name,
diff --git a/c_glib/meson.build b/c_glib/meson.build
index 1413605..194421c 100644
--- a/c_glib/meson.build
+++ b/c_glib/meson.build
@@ -64,7 +64,7 @@ endif
 if arrow_cpp_build_lib_dir == ''
   arrow = dependency('arrow')
   have_arrow_orc = dependency('arrow-orc', required: false).found()
-  arrow_gpu = dependency('arrow-gpu', required: false)
+  arrow_cuda = dependency('arrow-cuda', required: false)
   gandiva = dependency('gandiva', required: false)
   parquet = dependency('parquet', required: false)
   plasma = dependency('plasma', required: false)
@@ -89,9 +89,9 @@ main(void)
   have_arrow_orc = cpp_compiler.links(arrow_orc_code,
                                       include_directories: base_include_directories,
                                       dependencies: [arrow])
-  arrow_gpu = cpp_compiler.find_library('arrow_gpu',
-                                        dirs: [arrow_cpp_build_lib_dir],
-                                        required: false)
+  arrow_cuda = cpp_compiler.find_library('arrow_cuda',
+                                         dirs: [arrow_cpp_build_lib_dir],
+                                         required: false)
   gandiva = cpp_compiler.find_library('gandiva',
                                       dirs: [arrow_cpp_build_lib_dir],
                                       required: false)
@@ -104,8 +104,8 @@ main(void)
 endif
 
 subdir('arrow-glib')
-if arrow_gpu.found()
-  subdir('arrow-gpu-glib')
+if arrow_cuda.found()
+  subdir('arrow-cuda-glib')
 endif
 if gandiva.found()
   subdir('gandiva-glib')
@@ -136,7 +136,7 @@ test('unit test',
      run_test,
      env: [
        'ARROW_GLIB_TYPELIB_DIR=@0@/arrow-glib'.format(meson.build_root()),
-       'ARROW_GPU_GLIB_TYPELIB_DIR=@0@/arrow-gpu-glib'.format(meson.build_root()),
+       'ARROW_CUDA_GLIB_TYPELIB_DIR=@0@/arrow-cuda-glib'.format(meson.build_root()),
        'GANDIVA_GLIB_TYPELIB_DIR=@0@/gandiva-glib'.format(meson.build_root()),
        'PARQUET_GLIB_TYPELIB_DIR=@0@/parquet-glib'.format(meson.build_root()),
        'PARQUET_GLIB_TYPELIB_DIR=@0@/plasma-glib'.format(meson.build_root()),
diff --git a/c_glib/plasma-glib/Makefile.am b/c_glib/plasma-glib/Makefile.am
index 2060472..d14638b 100644
--- a/c_glib/plasma-glib/Makefile.am
+++ b/c_glib/plasma-glib/Makefile.am
@@ -31,32 +31,29 @@ AM_CFLAGS =					\
 	$(GARROW_CFLAGS)			\
 	$(GPLASMA_CFLAGS)
 
-PLASMA_ARROW_GPU_LIBS =
-PLASMA_ARROW_GPU_GLIB_PKG_CONFIG_PATH =
-PLASMA_INTROSPECTION_COMPILER_ARROW_GPU_ARGS =
-PLASMA_GIR_ARROW_GPU_PACKAGE =
-PLASMA_GIR_ARROW_GPU_SCANNER_ADD_INCLUDE_PATH =
-PLASMA_GIR_ARROW_GPU_LIBS_MACOS =
-PLASMA_GIR_ARROW_GPU_SCANNER_LIBRARY_PATH_MACOS =
-PLASMA_GIR_ARROW_GPU_LIBS =
-if HAVE_ARROW_GPU
-PLASMA_ARROW_GPU_LIBS +=			\
-	$(ARROW_GPU_LIBS)			\
-	../arrow-gpu-glib/libarrow-gpu-glib.la
-PLASMA_ARROW_GPU_GLIB_PKG_CONFIG_PATH +=	\
-	:${abs_top_builddir}/arrow-gpu-glib
-PLASMA_INTROSPECTION_COMPILER_ARROW_GPU_ARGS +=	\
-	--includedir=$(abs_top_builddir)/arrow-gpu-glib
-PLASMA_GIR_ARROW_GPU_PACKAGE +=			\
-	arrow-gpu-glib
-PLASMA_GIR_ARROW_GPU_SCANNER_ADD_INCLUDE_PATH +=		\
-	--add-include-path=$(abs_top_builddir)/arrow-gpu-glib
-PLASMA_GIR_ARROW_GPU_LIBS_MACOS +=			\
-	arrow-gpu-glib
-PLASMA_GIR_ARROW_GPU_SCANNER_LIBRARY_PATH_MACOS +=		\
-	--library-path=$(abs_top_builddir)/arrow-gpu-glib/.libs
-PLASMA_GIR_ARROW_GPU_LIBS +=					\
-	$(abs_top_builddir)/arrow-gpu-glib/libarrow-gpu-glib.la
+PLASMA_ARROW_CUDA_LIBS =
+PLASMA_INTROSPECTION_COMPILER_ARROW_CUDA_ARGS =
+PLASMA_GIR_ARROW_CUDA_PACKAGE =
+PLASMA_GIR_ARROW_CUDA_SCANNER_ADD_INCLUDE_PATH =
+PLASMA_GIR_ARROW_CUDA_LIBS_MACOS =
+PLASMA_GIR_ARROW_CUDA_SCANNER_LIBRARY_PATH_MACOS =
+PLASMA_GIR_ARROW_CUDA_LIBS =
+if HAVE_ARROW_CUDA
+PLASMA_ARROW_CUDA_LIBS +=				\
+	$(ARROW_CUDA_LIBS)				\
+	../arrow-cuda-glib/libarrow-cuda-glib.la
+PLASMA_INTROSPECTION_COMPILER_ARROW_CUDA_ARGS +=		\
+	--includedir=$(abs_top_builddir)/arrow-cuda-glib
+PLASMA_GIR_ARROW_CUDA_PACKAGE +=		\
+	arrow-cuda-glib
+PLASMA_GIR_ARROW_CUDA_SCANNER_ADD_INCLUDE_PATH +=		\
+	--add-include-path=$(abs_top_builddir)/arrow-cuda-glib
+PLASMA_GIR_ARROW_CUDA_LIBS_MACOS +=		\
+	arrow-cuda-glib
+PLASMA_GIR_ARROW_CUDA_SCANNER_LIBRARY_PATH_MACOS +=			\
+	--library-path=$(abs_top_builddir)/arrow-cuda-glib/.libs
+PLASMA_GIR_ARROW_CUDA_LIBS +=						\
+	$(abs_top_builddir)/arrow-cuda-glib/libarrow-cuda-glib.la
 endif
 
 if HAVE_PLASMA
@@ -79,7 +76,7 @@ libplasma_glib_la_LIBADD =			\
 	$(ARROW_LIBS)				\
 	$(PLASMA_LIBS)				\
 	../arrow-glib/libarrow-glib.la		\
-	$(PLASMA_ARROW_GPU_LIBS)
+	$(PLASMA_ARROW_CUDA_LIBS)
 
 libplasma_glib_la_headers =			\
 	client.h				\
@@ -117,19 +114,19 @@ INTROSPECTION_SCANNER_ARGS =
 INTROSPECTION_SCANNER_ENV =
 if USE_ARROW_BUILD_DIR
 INTROSPECTION_SCANNER_ENV +=			\
-	PKG_CONFIG_PATH=${abs_top_builddir}/arrow-glib$(PLASMA_ARROW_GPU_GLIB_PKG_CONFIG_PATH):$(ARROW_BUILD_DIR)/src/arrow:$${PKG_CONFIG_PATH}
+	PKG_CONFIG_PATH=$(abs_top_builddir)/arrow-glib$(PLASMA_ARROW_CUDA_PKG_CONFIG_PATH):$(ARROW_BUILD_DIR)/src/arrow:$${PKG_CONFIG_PATH}
 else
 INTROSPECTION_SCANNER_ENV +=			\
-	PKG_CONFIG_PATH=${abs_top_builddir}/arrow-glib$(PLASMA_ARROW_GPU_GLIB_PKG_CONFIG_PATH):$${PKG_CONFIG_PATH}
+	PKG_CONFIG_PATH=$(abs_top_builddir)/arrow-glib$(PLASMA_ARROW_CUDA_PKG_CONFIG_PATH):$${PKG_CONFIG_PATH}
 endif
 INTROSPECTION_COMPILER_ARGS =					\
 	--includedir=$(abs_top_builddir)/arrow-glib		\
-	$(PLASMA_INTROSPECTION_COMPILER_ARROW_GPU_INCLUDEDIR)
+	$(PLASMA_INTROSPECTION_COMPILER_ARROW_CUDA_INCLUDEDIR)
 
 Plasma-1.0.gir: libplasma-glib.la
 Plasma_1_0_gir_PACKAGES =			\
 	arrow-glib				\
-	$(PLASMA_GIR_ARROW_GPU_PACKAGE)
+	$(PLASMA_GIR_ARROW_CUDA_PACKAGE)
 Plasma_1_0_gir_EXPORT_PACKAGES =		\
 	plasma-glib
 Plasma_1_0_gir_INCLUDES =			\
@@ -140,7 +137,7 @@ Plasma_1_0_gir_LIBS =
 Plasma_1_0_gir_FILES = $(libplasma_glib_la_sources)
 Plasma_1_0_gir_SCANNERFLAGS =					\
 	--add-include-path=$(abs_top_builddir)/arrow-glib	\
-	$(PLASMA_GIR_ARROW_GPU_SCANNER_ADD_INCLUDE_PATH)	\
+	$(PLASMA_GIR_ARROW_CUDA_SCANNER_ADD_INCLUDE_PATH)	\
 	--library-path=$(ARROW_LIB_DIR)				\
 	--warn-all						\
 	--identifier-prefix=GPlasma				\
@@ -148,17 +145,17 @@ Plasma_1_0_gir_SCANNERFLAGS =					\
 if OS_MACOS
 Plasma_1_0_gir_LIBS +=				\
 	arrow-glib				\
-	$(PLASMA_GIR_ARROW_GPU_LIBS_MACOS)	\
+	$(PLASMA_GIR_ARROW_CUDA_LIBS_MACOS)	\
 	plasma-glib
 Plasma_1_0_gir_SCANNERFLAGS +=					\
 	--no-libtool						\
 	--library-path=$(abs_top_builddir)/arrow-glib/.libs	\
-	$(PLASMA_GIR_ARROW_GPU_SCANNER_LIBRARY_PATH_MACOS)	\
+	$(PLASMA_GIR_ARROW_CUDA_SCANNER_LIBRARY_PATH_MACOS)	\
 	--library-path=$(abs_builddir)/.libs
 else
 Plasma_1_0_gir_LIBS +=					\
 	$(abs_top_builddir)/arrow-glib/libarrow-glib.la	\
-	$(PLASMA_GIR_ARROW_GPU_LIBS)			\
+	$(PLASMA_GIR_ARROW_CUDA_LIBS)			\
 	libplasma-glib.la
 endif
 INTROSPECTION_GIRS += Plasma-1.0.gir
diff --git a/c_glib/plasma-glib/client.cpp b/c_glib/plasma-glib/client.cpp
index 6a2629b..e88cb13 100644
--- a/c_glib/plasma-glib/client.cpp
+++ b/c_glib/plasma-glib/client.cpp
@@ -24,8 +24,8 @@
 #include <arrow-glib/buffer.hpp>
 #include <arrow-glib/error.hpp>
 
-#ifdef HAVE_ARROW_GPU
-#  include <arrow-gpu-glib/cuda.hpp>
+#ifdef HAVE_ARROW_CUDA
+#  include <arrow-cuda-glib/cuda.hpp>
 #endif
 
 #include <plasma-glib/client.hpp>
@@ -311,11 +311,11 @@ gplasma_client_create(GPlasmaClient *client,
     raw_metadata = options_priv->metadata;
     raw_metadata_size = options_priv->metadata_size;
     if (options_priv->gpu_device >= 0) {
-#ifndef HAVE_ARROW_GPU
+#ifndef HAVE_ARROW_CUDA
       g_set_error(error,
                   GARROW_ERROR,
                   GARROW_ERROR_INVALID,
-                  "%s Arrow GPU GLib is needed to use GPU",
+                  "%s Arrow CUDA GLib is needed to use GPU",
                   context);
       return NULL;
 #endif
@@ -335,11 +335,11 @@ gplasma_client_create(GPlasmaClient *client,
       auto plasma_mutable_data =
         std::static_pointer_cast<arrow::MutableBuffer>(plasma_data);
       data = GARROW_BUFFER(garrow_mutable_buffer_new_raw(&plasma_mutable_data));
-#ifdef HAVE_ARROW_GPU
+#ifdef HAVE_ARROW_CUDA
     } else {
       auto plasma_cuda_data =
-        std::static_pointer_cast<arrow::gpu::CudaBuffer>(plasma_data);
-      data = GARROW_BUFFER(garrow_gpu_cuda_buffer_new_raw(&plasma_cuda_data));
+        std::static_pointer_cast<arrow::cuda::CudaBuffer>(plasma_data);
+      data = GARROW_BUFFER(garrow_cuda_buffer_new_raw(&plasma_cuda_data));
 #endif
     }
     GArrowBuffer *metadata = nullptr;
@@ -392,28 +392,28 @@ gplasma_client_refer_object(GPlasmaClient *client,
     GArrowBuffer *data = nullptr;
     GArrowBuffer *metadata = nullptr;
     if (plasma_object_buffer.device_num > 0) {
-#ifdef HAVE_ARROW_GPU
-      std::shared_ptr<arrow::gpu::CudaBuffer> plasma_cuda_data;
-      status = arrow::gpu::CudaBuffer::FromBuffer(plasma_data,
-                                                  &plasma_cuda_data);
+#ifdef HAVE_ARROW_CUDA
+      std::shared_ptr<arrow::cuda::CudaBuffer> plasma_cuda_data;
+      status = arrow::cuda::CudaBuffer::FromBuffer(plasma_data,
+                                                   &plasma_cuda_data);
       if (!garrow_error_check(error, status, context)) {
         return NULL;
       }
-      std::shared_ptr<arrow::gpu::CudaBuffer> plasma_cuda_metadata;
-      status = arrow::gpu::CudaBuffer::FromBuffer(plasma_metadata,
+      std::shared_ptr<arrow::cuda::CudaBuffer> plasma_cuda_metadata;
+      status = arrow::cuda::CudaBuffer::FromBuffer(plasma_metadata,
                                                   &plasma_cuda_metadata);
       if (!garrow_error_check(error, status, context)) {
         return NULL;
       }
 
-      data = GARROW_BUFFER(garrow_gpu_cuda_buffer_new_raw(&plasma_cuda_data));
+      data = GARROW_BUFFER(garrow_cuda_buffer_new_raw(&plasma_cuda_data));
       metadata =
-        GARROW_BUFFER(garrow_gpu_cuda_buffer_new_raw(&plasma_cuda_metadata));
+        GARROW_BUFFER(garrow_cuda_buffer_new_raw(&plasma_cuda_metadata));
 #else
       g_set_error(error,
                   GARROW_ERROR,
                   GARROW_ERROR_INVALID,
-                  "%s Arrow GPU GLib is needed to use GPU",
+                  "%s Arrow CUDA GLib is needed to use GPU",
                   context);
       return NULL;
 #endif
diff --git a/c_glib/plasma-glib/meson.build b/c_glib/plasma-glib/meson.build
index 60a6978..75ebce8 100644
--- a/c_glib/plasma-glib/meson.build
+++ b/c_glib/plasma-glib/meson.build
@@ -61,13 +61,13 @@ gir_extra_args = [
   '--warn-all',
   '--include-uninstalled=./arrow-glib/Arrow-1.0.gir',
 ]
-if arrow_gpu.found()
-  dependencies += [arrow_gpu_glib]
-  cpp_args += ['-DHAVE_ARROW_GPU']
-  pkg_config_requires += ['arrow-gpu-glib']
-  gir_dependencies += [declare_dependency(sources: arrow_gpu_glib_gir)]
-  gir_includes += ['ArrowGPU-1.0']
-  gir_extra_args += ['--include-uninstalled=./arrow-gpu-glib/ArrowGPU-1.0.gir']
+if arrow_cuda.found()
+  dependencies += [arrow_cuda_glib]
+  cpp_args += ['-DHAVE_ARROW_CUDA']
+  pkg_config_requires += ['arrow-cuda-glib']
+  gir_dependencies += [declare_dependency(sources: arrow_cuda_glib_gir)]
+  gir_includes += ['ArrowCUDA-1.0']
+  gir_extra_args += ['--include-uninstalled=./arrow-cuda-glib/ArrowCUDA-1.0.gir']
 endif
 libplasma_glib = library('plasma-glib',
                          sources: sources,
diff --git a/c_glib/test/plasma/test-plasma-client.rb b/c_glib/test/plasma/test-plasma-client.rb
index 4bf9fa9..cbdce86 100644
--- a/c_glib/test/plasma/test-plasma-client.rb
+++ b/c_glib/test/plasma/test-plasma-client.rb
@@ -61,7 +61,7 @@ class TestPlasmaClient < Test::Unit::TestCase
     end
 
     test("options: GPU device") do
-      omit("Arrow GPU is required") unless defined?(::ArrowGPU)
+      omit("Arrow CUDA is required") unless defined?(::ArrowCUDA)
 
       gpu_device = 0
 
diff --git a/c_glib/test/run-test.rb b/c_glib/test/run-test.rb
index 238bb2d..99d72f4 100755
--- a/c_glib/test/run-test.rb
+++ b/c_glib/test/run-test.rb
@@ -38,7 +38,7 @@ module Arrow
 end
 
 begin
-  ArrowGPU = GI.load("ArrowGPU")
+  ArrowCUDA = GI.load("ArrowCUDA")
 rescue GObjectIntrospection::RepositoryError::TypelibNotFound
 end
 
diff --git a/c_glib/test/run-test.sh b/c_glib/test/run-test.sh
index 96585ce..d33555d 100755
--- a/c_glib/test/run-test.sh
+++ b/c_glib/test/run-test.sh
@@ -20,7 +20,7 @@
 test_dir="$(cd $(dirname $0); pwd)"
 build_dir="$(cd .; pwd)"
 
-modules="arrow-glib arrow-gpu-glib gandiva-glib parquet-glib plasma-glib"
+modules="arrow-glib arrow-cuda-glib gandiva-glib parquet-glib plasma-glib"
 
 for module in ${modules}; do
   module_build_dir="${build_dir}/${module}"
diff --git a/c_glib/test/test-gpu-cuda.rb b/c_glib/test/test-cuda.rb
similarity index 80%
rename from c_glib/test/test-gpu-cuda.rb
rename to c_glib/test/test-cuda.rb
index 66ec19d..32d486e 100644
--- a/c_glib/test/test-gpu-cuda.rb
+++ b/c_glib/test/test-cuda.rb
@@ -15,12 +15,12 @@
 # specific language governing permissions and limitations
 # under the License.
 
-class TestGPUCUDA < Test::Unit::TestCase
+class TestCUDA < Test::Unit::TestCase
   include Helper::Buildable
 
   def setup
-    omit("Arrow GPU is required") unless defined?(::ArrowGPU)
-    @manager = ArrowGPU::CUDADeviceManager.new
+    omit("Arrow CUDA is required") unless defined?(::ArrowCUDA)
+    @manager = ArrowCUDA::DeviceManager.new
     omit("At least one GPU is required") if @manager.n_devices.zero?
     @context = @manager.get_context(0)
   end
@@ -29,7 +29,7 @@ class TestGPUCUDA < Test::Unit::TestCase
     def test_allocated_size
       allocated_size_before = @context.allocated_size
       size = 128
-      buffer = ArrowGPU::CUDABuffer.new(@context, size)
+      buffer = ArrowCUDA::Buffer.new(@context, size)
       assert_equal(size,
                    @context.allocated_size - allocated_size_before)
     end
@@ -38,7 +38,7 @@ class TestGPUCUDA < Test::Unit::TestCase
   sub_test_case("Buffer") do
     def setup
       super
-      @buffer = ArrowGPU::CUDABuffer.new(@context, 128)
+      @buffer = ArrowCUDA::Buffer.new(@context, 128)
     end
 
     def test_copy
@@ -50,19 +50,19 @@ class TestGPUCUDA < Test::Unit::TestCase
       @buffer.copy_from_host("Hello World")
       handle = @buffer.export
       serialized_handle = handle.serialize.data
-      Tempfile.open("arrow-gpu-cuda-export") do |output|
+      Tempfile.open("arrow-cuda-export") do |output|
         pid = spawn(RbConfig.ruby, "-e", <<-SCRIPT)
 require "gi"
 
 Gio = GI.load("Gio")
 Arrow = GI.load("Arrow")
-ArrowGPU = GI.load("ArrowGPU")
+ArrowCUDA = GI.load("ArrowCUDA")
 
-manager = ArrowGPU::CUDADeviceManager.new
+manager = ArrowCUDA::DeviceManager.new
 context = manager.get_context(0)
 serialized_handle = #{serialized_handle.to_s.dump}
-handle = ArrowGPU::CUDAIPCMemoryHandle.new(serialized_handle)
-buffer = ArrowGPU::CUDABuffer.new(context, handle)
+handle = ArrowCUDA::IPCMemoryHandle.new(serialized_handle)
+buffer = ArrowCUDA::Buffer.new(context, handle)
 File.open(#{output.path.dump}, "w") do |output|
   output.print(buffer.copy_to_host(0, 6).to_s)
 end
@@ -85,7 +85,7 @@ end
       ]
       cpu_record_batch = Arrow::RecordBatch.new(schema, 1, columns)
 
-      buffer = ArrowGPU::CUDABuffer.new(@context, cpu_record_batch)
+      buffer = ArrowCUDA::Buffer.new(@context, cpu_record_batch)
       gpu_record_batch = buffer.read_record_batch(schema)
       assert_equal(cpu_record_batch.n_rows,
                    gpu_record_batch.n_rows)
@@ -94,16 +94,16 @@ end
 
   sub_test_case("HostBuffer") do
     def test_new
-      buffer = ArrowGPU::CUDAHostBuffer.new(0, 128)
+      buffer = ArrowCUDA::HostBuffer.new(0, 128)
       assert_equal(128, buffer.size)
     end
   end
 
   sub_test_case("BufferInputStream") do
     def test_new
-      buffer = ArrowGPU::CUDABuffer.new(@context, 128)
+      buffer = ArrowCUDA::Buffer.new(@context, 128)
       buffer.copy_from_host("Hello World")
-      stream = ArrowGPU::CUDABufferInputStream.new(buffer)
+      stream = ArrowCUDA::BufferInputStream.new(buffer)
       begin
         assert_equal("Hello Worl", stream.read(5).copy_to_host(0, 10).to_s)
       ensure
@@ -115,9 +115,9 @@ end
   sub_test_case("BufferOutputStream") do
     def setup
       super
-      @buffer = ArrowGPU::CUDABuffer.new(@context, 128)
+      @buffer = ArrowCUDA::Buffer.new(@context, 128)
       @buffer.copy_from_host("\x00" * @buffer.size)
-      @stream = ArrowGPU::CUDABufferOutputStream.new(@buffer)
+      @stream = ArrowCUDA::BufferOutputStream.new(@buffer)
     end
 
     def cleanup
diff --git a/cpp/CMakeLists.txt b/cpp/CMakeLists.txt
index 14621e4..6deb339 100644
--- a/cpp/CMakeLists.txt
+++ b/cpp/CMakeLists.txt
@@ -150,8 +150,8 @@ Pass multiple labels by dividing with semicolons")
     "Build the Arrow IPC extensions"
     ON)
 
-  option(ARROW_GPU
-    "Build the Arrow GPU extensions (requires CUDA installation)"
+  option(ARROW_CUDA
+    "Build the Arrow CUDA extensions (requires CUDA toolkit)"
     OFF)
 
   option(ARROW_ORC
diff --git a/cpp/README.md b/cpp/README.md
index fcf9137..394b23d 100644
--- a/cpp/README.md
+++ b/cpp/README.md
@@ -204,13 +204,11 @@ The Python library must be built against the same Python version for which you
 are building pyarrow, e.g. Python 2.7 or Python 3.6. NumPy must also be
 installed.
 
-### Building GPU extension library (optional)
+### Building CUDA extension library (optional)
 
-The optional `arrow_gpu` shared library can be built by passing
-`-DARROW_GPU=on`. This requires a CUDA installation to build, and to use many
-of the functions you must have a functioning GPU. Currently only CUDA
-functionality is supported, though if there is demand we can also add OpenCL
-interfaces in this library as needed.
+The optional `arrow_cuda` shared library can be built by passing
+`-DARROW_CUDA=on`. This requires a CUDA installation to build, and to use many
+of the functions you must have a functioning CUDA-compatible GPU.
 
 The CUDA toolchain used to build the library can be customized by using the
 `$CUDA_HOME` environment variable.
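Concretely, a build enabling the renamed extension might look as follows. This is a configuration sketch only: the paths and parallelism level are illustrative, and `CUDA_HOME` must point at your actual CUDA toolkit.

```shell
# Hypothetical out-of-source build with the renamed CUDA option.
export CUDA_HOME=/usr/local/cuda    # adjust for your installation
mkdir -p cpp/build && cd cpp/build
cmake -DARROW_CUDA=on ..
make -j8
```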
diff --git a/cpp/cmake_modules/FindArrowCuda.cmake b/cpp/cmake_modules/FindArrowCuda.cmake
index 8733b61..bac148f 100644
--- a/cpp/cmake_modules/FindArrowCuda.cmake
+++ b/cpp/cmake_modules/FindArrowCuda.cmake
@@ -15,7 +15,7 @@
 # specific language governing permissions and limitations
 # under the License.
 
-# - Find ARROW CUDA (arrow/gpu/cuda_api.h, libarrow_gpu.a, libarrow_gpu.so)
+# - Find ARROW CUDA (arrow/gpu/cuda_api.h, libarrow_cuda.a, libarrow_cuda.so)
 #
 # This module requires Arrow from which it uses
 #   ARROW_FOUND
@@ -31,10 +31,6 @@
 #  ARROW_CUDA_SHARED_IMP_LIB, path to libarrow's import library (MSVC only)
 #  ARROW_CUDA_FOUND, whether arrow has been found
 
-#
-# TODO(ARROW-3209): rename arrow/gpu to arrow/cuda, arrow_gpu to arrow_cuda
-#
-
 include(FindPkgConfig)
 include(GNUInstallDirs)
 
@@ -63,14 +59,14 @@ if (NOT (ARROW_CUDA_INCLUDE_DIR STREQUAL ARROW_INCLUDE_DIR))
   message(WARNING ${ARROW_CUDA_WARN_MSG})
 endif()
 
-find_library(ARROW_CUDA_LIB_PATH NAMES arrow_gpu
+find_library(ARROW_CUDA_LIB_PATH NAMES arrow_cuda
   PATHS
   ${ARROW_SEARCH_LIB_PATH}
   NO_DEFAULT_PATH)
 get_filename_component(ARROW_CUDA_LIBS ${ARROW_CUDA_LIB_PATH} DIRECTORY)
 
 if (MSVC)
-  find_library(ARROW_CUDA_SHARED_LIBRARIES NAMES arrow_gpu
+  find_library(ARROW_CUDA_SHARED_LIBRARIES NAMES arrow_cuda
     PATHS ${ARROW_HOME} NO_DEFAULT_PATH
     PATH_SUFFIXES "bin" )
   get_filename_component(ARROW_CUDA_SHARED_LIBS ${ARROW_CUDA_SHARED_LIBRARIES} PATH )
@@ -79,7 +75,7 @@ endif()
 
 if (ARROW_CUDA_INCLUDE_DIR AND ARROW_CUDA_LIBS)
   set(ARROW_CUDA_FOUND TRUE)
-  set(ARROW_CUDA_LIB_NAME arrow_gpu)
+  set(ARROW_CUDA_LIB_NAME arrow_cuda)
   if (MSVC)
     set(ARROW_CUDA_STATIC_LIB ${ARROW_CUDA_LIBS}/${ARROW_CUDA_LIB_NAME}${ARROW_MSVC_STATIC_LIB_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX})
     set(ARROW_CUDA_SHARED_LIB ${ARROW_CUDA_SHARED_LIBS}/${ARROW_CUDA_LIB_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX})
diff --git a/cpp/src/arrow/CMakeLists.txt b/cpp/src/arrow/CMakeLists.txt
index 336007d..6858f3c 100644
--- a/cpp/src/arrow/CMakeLists.txt
+++ b/cpp/src/arrow/CMakeLists.txt
@@ -78,8 +78,8 @@ if (ARROW_COMPUTE)
   )
 endif()
 
-if (ARROW_GPU)
-  # IPC extensions required to build the GPU library
+if (ARROW_CUDA)
+  # IPC extensions required to build the CUDA library
   set(ARROW_IPC ON)
   add_subdirectory(gpu)
 endif()
diff --git a/cpp/src/arrow/gpu/CMakeLists.txt b/cpp/src/arrow/gpu/CMakeLists.txt
index ed4c125..60407ac 100644
--- a/cpp/src/arrow/gpu/CMakeLists.txt
+++ b/cpp/src/arrow/gpu/CMakeLists.txt
@@ -16,7 +16,7 @@
 # under the License.
 
 #######################################
-# arrow_gpu
+# arrow_cuda
 #######################################
 
 if (DEFINED ENV{CUDA_HOME})
@@ -28,28 +28,28 @@ include_directories(SYSTEM ${CUDA_INCLUDE_DIRS})
 
 message(STATUS "CUDA Libraries: ${CUDA_LIBRARIES}")
 
-set(ARROW_GPU_SRCS
+set(ARROW_CUDA_SRCS
   cuda_arrow_ipc.cc
   cuda_context.cc
   cuda_memory.cc
 )
 
-set(ARROW_GPU_SHARED_LINK_LIBS
+set(ARROW_CUDA_SHARED_LINK_LIBS
   ${CUDA_LIBRARIES}
   ${CUDA_CUDA_LIBRARY}
 )
 
-ADD_ARROW_LIB(arrow_gpu
-  SOURCES ${ARROW_GPU_SRCS}
-  OUTPUTS ARROW_GPU_LIBRARIES
+ADD_ARROW_LIB(arrow_cuda
+  SOURCES ${ARROW_CUDA_SRCS}
+  OUTPUTS ARROW_CUDA_LIBRARIES
   DEPENDENCIES metadata_fbs
   SHARED_LINK_FLAGS ""
-  SHARED_LINK_LIBS arrow_shared ${ARROW_GPU_SHARED_LINK_LIBS}
-  # Static arrow_gpu must also link against CUDA shared libs
-  STATIC_LINK_LIBS ${ARROW_GPU_SHARED_LINK_LIBS}
+  SHARED_LINK_LIBS arrow_shared ${ARROW_CUDA_SHARED_LINK_LIBS}
+  # Static arrow_cuda must also link against CUDA shared libs
+  STATIC_LINK_LIBS ${ARROW_CUDA_SHARED_LINK_LIBS}
 )
 
-foreach(LIB_TARGET ${ARROW_GPU_LIBRARIES})
+foreach(LIB_TARGET ${ARROW_CUDA_LIBRARIES})
   target_compile_definitions(${LIB_TARGET}
     PRIVATE ARROW_EXPORTING)
 endforeach()
@@ -71,28 +71,28 @@ install(FILES
   DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/arrow/gpu")
 
 # pkg-config support
-configure_file(arrow-gpu.pc.in
-  "${CMAKE_CURRENT_BINARY_DIR}/arrow-gpu.pc"
+configure_file(arrow-cuda.pc.in
+  "${CMAKE_CURRENT_BINARY_DIR}/arrow-cuda.pc"
   @ONLY)
 
 install(
-  FILES "${CMAKE_CURRENT_BINARY_DIR}/arrow-gpu.pc"
+  FILES "${CMAKE_CURRENT_BINARY_DIR}/arrow-cuda.pc"
   DESTINATION "${CMAKE_INSTALL_LIBDIR}/pkgconfig/")
 
-set(ARROW_GPU_TEST_LINK_LIBS
-  arrow_gpu_shared
+set(ARROW_CUDA_TEST_LINK_LIBS
+  arrow_cuda_shared
   ${ARROW_TEST_LINK_LIBS})
 
 if (ARROW_BUILD_TESTS)
   ADD_ARROW_TEST(cuda-test
-    STATIC_LINK_LIBS ${ARROW_GPU_TEST_LINK_LIBS}
+    STATIC_LINK_LIBS ${ARROW_CUDA_TEST_LINK_LIBS}
     NO_VALGRIND)
 endif()
 
 if (ARROW_BUILD_BENCHMARKS)
   cuda_add_executable(cuda-benchmark cuda-benchmark.cc)
   target_link_libraries(cuda-benchmark
-    arrow_gpu_shared
+    arrow_cuda_shared
     gtest_static
     ${ARROW_BENCHMARK_LINK_LIBS})
 endif()
diff --git a/cpp/src/arrow/gpu/arrow-gpu.pc.in b/cpp/src/arrow/gpu/arrow-cuda.pc.in
similarity index 89%
rename from cpp/src/arrow/gpu/arrow-gpu.pc.in
rename to cpp/src/arrow/gpu/arrow-cuda.pc.in
index 3889d03..858096f 100644
--- a/cpp/src/arrow/gpu/arrow-gpu.pc.in
+++ b/cpp/src/arrow/gpu/arrow-cuda.pc.in
@@ -18,9 +18,9 @@
 libdir=@CMAKE_INSTALL_FULL_LIBDIR@
 includedir=@CMAKE_INSTALL_FULL_INCLUDEDIR@
 
-Name: Apache Arrow GPU
-Description: GPU integration library for Apache Arrow
+Name: Apache Arrow CUDA
+Description: CUDA integration library for Apache Arrow
 Version: @ARROW_VERSION@
 Requires: arrow
-Libs: -L${libdir} -larrow_gpu
+Libs: -L${libdir} -larrow_cuda
 Cflags: -I${includedir}
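Since the .pc file is renamed from arrow-gpu to arrow-cuda, pkg-config based builds must switch module names as well. A minimal sketch of consuming it from CMake via FindPkgConfig (the `cuda_example` target is illustrative; `arrow-cuda` is the module name installed above):

```cmake
# Sketch: locating the renamed pkg-config module.
find_package(PkgConfig REQUIRED)
pkg_check_modules(ARROW_CUDA REQUIRED arrow-cuda)   # was: arrow-gpu
add_executable(cuda_example cuda_example.cc)        # hypothetical target
target_include_directories(cuda_example PRIVATE ${ARROW_CUDA_INCLUDE_DIRS})
target_link_libraries(cuda_example ${ARROW_CUDA_LIBRARIES})
```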
diff --git a/cpp/src/arrow/gpu/cuda-benchmark.cc b/cpp/src/arrow/gpu/cuda-benchmark.cc
index 8b3723d..9889373 100644
--- a/cpp/src/arrow/gpu/cuda-benchmark.cc
+++ b/cpp/src/arrow/gpu/cuda-benchmark.cc
@@ -28,7 +28,7 @@
 #include "arrow/gpu/cuda_api.h"
 
 namespace arrow {
-namespace gpu {
+namespace cuda {
 
 constexpr int64_t kGpuNumber = 0;
 
@@ -94,5 +94,5 @@ BENCHMARK(BM_Writer_Unbuffered)
     ->MinTime(1.0)
     ->UseRealTime();
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
diff --git a/cpp/src/arrow/gpu/cuda-test.cc b/cpp/src/arrow/gpu/cuda-test.cc
index cb37545..5d85a81 100644
--- a/cpp/src/arrow/gpu/cuda-test.cc
+++ b/cpp/src/arrow/gpu/cuda-test.cc
@@ -29,7 +29,7 @@
 #include "arrow/gpu/cuda_api.h"
 
 namespace arrow {
-namespace gpu {
+namespace cuda {
 
 constexpr int kGpuNumber = 0;
 
@@ -323,7 +323,7 @@ TEST_F(TestCudaArrowIpc, BasicWriteRead) {
   ASSERT_OK(ipc::MakeIntRecordBatch(&batch));
 
   std::shared_ptr<CudaBuffer> device_serialized;
-  ASSERT_OK(arrow::gpu::SerializeRecordBatch(*batch, context_.get(), &device_serialized));
+  ASSERT_OK(SerializeRecordBatch(*batch, context_.get(), &device_serialized));
 
   // Test that ReadRecordBatch works properly
   std::shared_ptr<RecordBatch> device_batch;
@@ -343,5 +343,5 @@ TEST_F(TestCudaArrowIpc, BasicWriteRead) {
   CompareBatch(*batch, *cpu_batch);
 }
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
diff --git a/cpp/src/arrow/gpu/cuda_arrow_ipc.cc b/cpp/src/arrow/gpu/cuda_arrow_ipc.cc
index a7262c8..03256a1 100644
--- a/cpp/src/arrow/gpu/cuda_arrow_ipc.cc
+++ b/cpp/src/arrow/gpu/cuda_arrow_ipc.cc
@@ -38,7 +38,7 @@ namespace arrow {
 
 namespace flatbuf = org::apache::arrow::flatbuf;
 
-namespace gpu {
+namespace cuda {
 
 Status SerializeRecordBatch(const RecordBatch& batch, CudaContext* ctx,
                             std::shared_ptr<CudaBuffer>* out) {
@@ -106,5 +106,5 @@ Status ReadRecordBatch(const std::shared_ptr<Schema>& schema,
   return ipc::ReadRecordBatch(*message, schema, out);
 }
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
diff --git a/cpp/src/arrow/gpu/cuda_arrow_ipc.h b/cpp/src/arrow/gpu/cuda_arrow_ipc.h
index 52dd924..4eb85e7 100644
--- a/cpp/src/arrow/gpu/cuda_arrow_ipc.h
+++ b/cpp/src/arrow/gpu/cuda_arrow_ipc.h
@@ -39,7 +39,7 @@ class Message;
 
 }  // namespace ipc
 
-namespace gpu {
+namespace cuda {
 
 /// \brief Write record batch message to GPU device memory
 /// \param[in] batch record batch to write
@@ -71,7 +71,7 @@ Status ReadRecordBatch(const std::shared_ptr<Schema>& schema,
                        const std::shared_ptr<CudaBuffer>& buffer, MemoryPool* pool,
                        std::shared_ptr<RecordBatch>* out);
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
 
 #endif  // ARROW_GPU_CUDA_ARROW_IPC_H
diff --git a/cpp/src/arrow/gpu/cuda_common.h b/cpp/src/arrow/gpu/cuda_common.h
index c06c1a2..a53dd22 100644
--- a/cpp/src/arrow/gpu/cuda_common.h
+++ b/cpp/src/arrow/gpu/cuda_common.h
@@ -25,7 +25,7 @@
 #include <cuda.h>
 
 namespace arrow {
-namespace gpu {
+namespace cuda {
 
 #define CUDA_DCHECK(STMT) \
   do {                    \
@@ -45,7 +45,7 @@ namespace gpu {
     }                                                                         \
   } while (0)
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
 
 #endif  // ARROW_GPU_CUDA_COMMON_H
diff --git a/cpp/src/arrow/gpu/cuda_context.cc b/cpp/src/arrow/gpu/cuda_context.cc
index 566ae6f..9e95040 100644
--- a/cpp/src/arrow/gpu/cuda_context.cc
+++ b/cpp/src/arrow/gpu/cuda_context.cc
@@ -28,8 +28,9 @@
 
 #include "arrow/gpu/cuda_common.h"
 #include "arrow/gpu/cuda_memory.h"
+
 namespace arrow {
-namespace gpu {
+namespace cuda {
 
 struct CudaDevice {
   int device_num;
@@ -342,5 +343,5 @@ void* CudaContext::handle() const { return impl_->context_handle(); }
 
 int CudaContext::device_number() const { return impl_->device().device_num; }
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
diff --git a/cpp/src/arrow/gpu/cuda_context.h b/cpp/src/arrow/gpu/cuda_context.h
index e59273e..9a67cea 100644
--- a/cpp/src/arrow/gpu/cuda_context.h
+++ b/cpp/src/arrow/gpu/cuda_context.h
@@ -27,7 +27,7 @@
 #include "arrow/gpu/cuda_memory.h"
 
 namespace arrow {
-namespace gpu {
+namespace cuda {
 
 // Forward declaration
 class CudaContext;
@@ -138,7 +138,7 @@ class ARROW_EXPORT CudaContext : public std::enable_shared_from_this<CudaContext
   friend CudaDeviceManager::CudaDeviceManagerImpl;
 };
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
 
 #endif  // ARROW_GPU_CUDA_CONTEXT_H
diff --git a/cpp/src/arrow/gpu/cuda_memory.cc b/cpp/src/arrow/gpu/cuda_memory.cc
index e8cc4b5..cf0c51c 100644
--- a/cpp/src/arrow/gpu/cuda_memory.cc
+++ b/cpp/src/arrow/gpu/cuda_memory.cc
@@ -34,7 +34,7 @@
 #include "arrow/gpu/cuda_context.h"
 
 namespace arrow {
-namespace gpu {
+namespace cuda {
 
 // ----------------------------------------------------------------------
 // CUDA IPC memory handle
@@ -365,5 +365,5 @@ Status AllocateCudaHostBuffer(int device_number, const int64_t size,
   return manager->AllocateHost(device_number, size, out);
 }
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
diff --git a/cpp/src/arrow/gpu/cuda_memory.h b/cpp/src/arrow/gpu/cuda_memory.h
index 0da58c1..c8f8083 100644
--- a/cpp/src/arrow/gpu/cuda_memory.h
+++ b/cpp/src/arrow/gpu/cuda_memory.h
@@ -27,7 +27,7 @@
 #include "arrow/status.h"
 
 namespace arrow {
-namespace gpu {
+namespace cuda {
 
 class CudaContext;
 class CudaIpcMemHandle;
@@ -215,7 +215,7 @@ ARROW_EXPORT
 Status AllocateCudaHostBuffer(int device_number, const int64_t size,
                               std::shared_ptr<CudaHostBuffer>* out);
 
-}  // namespace gpu
+}  // namespace cuda
 }  // namespace arrow
 
 #endif  // ARROW_GPU_CUDA_MEMORY_H
diff --git a/cpp/src/plasma/CMakeLists.txt b/cpp/src/plasma/CMakeLists.txt
index f9ed4e3..0f8916e 100644
--- a/cpp/src/plasma/CMakeLists.txt
+++ b/cpp/src/plasma/CMakeLists.txt
@@ -83,10 +83,10 @@ set(PLASMA_SRCS
 set(PLASMA_LINK_LIBS arrow_shared)
 set(PLASMA_STATIC_LINK_LIBS arrow_static)
 
-if (ARROW_GPU)
-  set(PLASMA_LINK_LIBS ${PLASMA_LINK_LIBS} arrow_gpu_shared)
-  set(PLASMA_STATIC_LINK_LIBS arrow_gpu_static ${PLASMA_STATIC_LINK_LIBS})
-  add_definitions(-DPLASMA_GPU)
+if (ARROW_CUDA)
+  set(PLASMA_LINK_LIBS ${PLASMA_LINK_LIBS} arrow_cuda_shared)
+  set(PLASMA_STATIC_LINK_LIBS arrow_cuda_static ${PLASMA_STATIC_LINK_LIBS})
+  add_definitions(-DPLASMA_CUDA)
 endif()
 
 ADD_ARROW_LIB(plasma
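The `PLASMA_GPU` compile definition becomes `PLASMA_CUDA` in the same pass, so any out-of-tree code that keys off the old define needs the equivalent change. A minimal sketch, assuming a hypothetical external target named `my_plasma_ext`:

```cmake
# Sketch: out-of-tree builds must switch to the renamed option and define.
if (ARROW_CUDA)                                                  # was: ARROW_GPU
  target_compile_definitions(my_plasma_ext PRIVATE PLASMA_CUDA)  # was: PLASMA_GPU
endif()
```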
diff --git a/cpp/src/plasma/client.cc b/cpp/src/plasma/client.cc
index 20dc421..99cf00c 100644
--- a/cpp/src/plasma/client.cc
+++ b/cpp/src/plasma/client.cc
@@ -53,13 +53,13 @@
 #include "plasma/plasma.h"
 #include "plasma/protocol.h"
 
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
 #include "arrow/gpu/cuda_api.h"
 
-using arrow::gpu::CudaBuffer;
-using arrow::gpu::CudaBufferWriter;
-using arrow::gpu::CudaContext;
-using arrow::gpu::CudaDeviceManager;
+using arrow::cuda::CudaBuffer;
+using arrow::cuda::CudaBufferWriter;
+using arrow::cuda::CudaContext;
+using arrow::cuda::CudaDeviceManager;
 #endif
 
 #define XXH_INLINE_ALL 1
@@ -89,7 +89,7 @@ constexpr int64_t kL3CacheSizeBytes = 100000000;
 // ----------------------------------------------------------------------
 // GPU support
 
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
 struct GpuProcessHandle {
   /// Pointer to CUDA buffer that is backing this GPU object.
   std::shared_ptr<CudaBuffer> ptr;
@@ -286,16 +286,16 @@ class PlasmaClient::Impl : public std::enable_shared_from_this<PlasmaClient::Imp
   /// A hash set to record the ids that users want to delete but still in use.
   std::unordered_set<ObjectID> deletion_cache_;
 
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   /// Cuda Device Manager.
-  arrow::gpu::CudaDeviceManager* manager_;
+  arrow::cuda::CudaDeviceManager* manager_;
 #endif
 };
 
 PlasmaBuffer::~PlasmaBuffer() { ARROW_UNUSED(client_->Release(object_id_)); }
 
 PlasmaClient::Impl::Impl() {
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   DCHECK_OK(CudaDeviceManager::GetInstance(&manager_));
 #endif
 }
@@ -413,7 +413,7 @@ Status PlasmaClient::Impl::Create(const ObjectID& object_id, int64_t data_size,
       memcpy((*data)->mutable_data() + object.data_size, metadata, metadata_size);
     }
   } else {
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
     std::lock_guard<std::mutex> lock(gpu_mutex);
     std::shared_ptr<CudaContext> context;
     RETURN_NOT_OK(manager_->GetContext(device_num - 1, &context));
@@ -497,7 +497,7 @@ Status PlasmaClient::Impl::GetBuffers(
         physical_buf = std::make_shared<Buffer>(
             data + object->data_offset, object->data_size + object->metadata_size);
       } else {
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
         physical_buf = gpu_object_map.find(object_ids[i])->second->ptr;
 #else
         ARROW_LOG(FATAL) << "Arrow GPU library is not enabled.";
@@ -560,7 +560,7 @@ Status PlasmaClient::Impl::GetBuffers(
         physical_buf = std::make_shared<Buffer>(
             data + object->data_offset, object->data_size + object->metadata_size);
       } else {
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
         std::lock_guard<std::mutex> lock(gpu_mutex);
         auto handle = gpu_object_map.find(object_ids[i]);
         if (handle == gpu_object_map.end()) {
diff --git a/cpp/src/plasma/common.h b/cpp/src/plasma/common.h
index f7cdaf5..7090428 100644
--- a/cpp/src/plasma/common.h
+++ b/cpp/src/plasma/common.h
@@ -34,7 +34,7 @@
 
 #include "arrow/status.h"
 #include "arrow/util/logging.h"
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
 #include "arrow/gpu/cuda_api.h"
 #endif
 
@@ -118,9 +118,9 @@ struct ObjectTableEntry {
   int64_t data_size;
   /// Size of the object metadata in bytes.
   int64_t metadata_size;
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   /// IPC GPU handle to share with clients.
-  std::shared_ptr<::arrow::gpu::CudaIpcMemHandle> ipc_handle;
+  std::shared_ptr<::arrow::cuda::CudaIpcMemHandle> ipc_handle;
 #endif
   /// Number of clients currently using this object.
   int ref_count;
diff --git a/cpp/src/plasma/plasma.h b/cpp/src/plasma/plasma.h
index e63d967..83caec7 100644
--- a/cpp/src/plasma/plasma.h
+++ b/cpp/src/plasma/plasma.h
@@ -40,8 +40,8 @@
 #include "plasma/common.h"
 #include "plasma/common_generated.h"
 
-#ifdef PLASMA_GPU
-using arrow::gpu::CudaIpcMemHandle;
+#ifdef PLASMA_CUDA
+using arrow::cuda::CudaIpcMemHandle;
 #endif
 
 namespace plasma {
@@ -73,7 +73,7 @@ typedef std::unordered_map<ObjectID, ObjectRequest> ObjectRequestMap;
 
 // TODO(pcm): Replace this by the flatbuffers message PlasmaObjectSpec.
 struct PlasmaObject {
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   // IPC handle for Cuda.
   std::shared_ptr<CudaIpcMemHandle> ipc_handle;
 #endif
diff --git a/cpp/src/plasma/protocol.cc b/cpp/src/plasma/protocol.cc
index a74db66..c437840 100644
--- a/cpp/src/plasma/protocol.cc
+++ b/cpp/src/plasma/protocol.cc
@@ -25,7 +25,7 @@
 #include "plasma/common.h"
 #include "plasma/io.h"
 
-#ifdef ARROW_GPU
+#ifdef PLASMA_CUDA
 #include "arrow/gpu/cuda_api.h"
 #endif
 
@@ -129,7 +129,7 @@ Status SendCreateReply(int sock, ObjectID object_id, PlasmaObject* object,
                                  object->metadata_offset, object->metadata_size,
                                  object->device_num);
   auto object_string = fbb.CreateString(object_id.binary());
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   flatbuffers::Offset<fb::CudaHandle> ipc_handle;
   if (object->device_num != 0) {
     std::shared_ptr<arrow::Buffer> handle;
@@ -145,7 +145,7 @@ Status SendCreateReply(int sock, ObjectID object_id, PlasmaObject* object,
   crb.add_store_fd(object->store_fd);
   crb.add_mmap_size(mmap_size);
   if (object->device_num != 0) {
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
     crb.add_ipc_handle(ipc_handle);
 #else
     ARROW_LOG(FATAL) << "This should be unreachable.";
@@ -171,7 +171,7 @@ Status ReadCreateReply(uint8_t* data, size_t size, ObjectID* object_id,
   *mmap_size = message->mmap_size();
 
   object->device_num = message->plasma_object()->device_num();
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   if (object->device_num != 0) {
     RETURN_NOT_OK(CudaIpcMemHandle::FromBuffer(message->ipc_handle()->handle()->data(),
                                                &object->ipc_handle));
@@ -588,7 +588,7 @@ Status SendGetReply(int sock, ObjectID object_ids[],
     objects.push_back(PlasmaObjectSpec(object.store_fd, object.data_offset,
                                        object.data_size, object.metadata_offset,
                                        object.metadata_size, object.device_num));
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
     if (object.device_num != 0) {
       std::shared_ptr<arrow::Buffer> handle;
       RETURN_NOT_OK(object.ipc_handle->Serialize(arrow::default_memory_pool(), &handle));
@@ -609,7 +609,7 @@ Status ReadGetReply(uint8_t* data, size_t size, ObjectID object_ids[],
                     std::vector<int>& store_fds, std::vector<int64_t>& mmap_sizes) {
   DCHECK(data);
   auto message = flatbuffers::GetRoot<fb::PlasmaGetReply>(data);
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   int handle_pos = 0;
 #endif
   DCHECK(VerifyFlatbuffer(message, data, size));
@@ -624,7 +624,7 @@ Status ReadGetReply(uint8_t* data, size_t size, ObjectID object_ids[],
     plasma_objects[i].metadata_offset = object->metadata_offset();
     plasma_objects[i].metadata_size = object->metadata_size();
     plasma_objects[i].device_num = object->device_num();
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
     if (object->device_num() != 0) {
       const void* ipc_handle = message->handles()->Get(handle_pos)->handle()->data();
       RETURN_NOT_OK(
diff --git a/cpp/src/plasma/store.cc b/cpp/src/plasma/store.cc
index bb99f59..ae658d7 100644
--- a/cpp/src/plasma/store.cc
+++ b/cpp/src/plasma/store.cc
@@ -58,12 +58,12 @@
 #include "plasma/io.h"
 #include "plasma/malloc.h"
 
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
 #include "arrow/gpu/cuda_api.h"
 
-using arrow::gpu::CudaBuffer;
-using arrow::gpu::CudaContext;
-using arrow::gpu::CudaDeviceManager;
+using arrow::cuda::CudaBuffer;
+using arrow::cuda::CudaContext;
+using arrow::cuda::CudaDeviceManager;
 #endif
 
 using arrow::util::ArrowLog;
@@ -117,7 +117,7 @@ PlasmaStore::PlasmaStore(EventLoop* loop, int64_t system_memory, std::string dir
   store_info_.memory_capacity = system_memory;
   store_info_.directory = directory;
   store_info_.hugepages_enabled = hugepages_enabled;
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   DCHECK_OK(CudaDeviceManager::GetInstance(&manager_));
 #endif
 }
@@ -162,7 +162,7 @@ PlasmaError PlasmaStore::CreateObject(const ObjectID& object_id, int64_t data_si
   }
   // Try to evict objects until there is enough space.
   uint8_t* pointer = nullptr;
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   std::shared_ptr<CudaBuffer> gpu_handle;
   std::shared_ptr<CudaContext> context_;
   if (device_num != 0) {
@@ -195,7 +195,7 @@ PlasmaError PlasmaStore::CreateObject(const ObjectID& object_id, int64_t data_si
         break;
       }
     } else {
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
       DCHECK_OK(context_->Allocate(data_size + metadata_size, &gpu_handle));
       break;
 #endif
@@ -220,7 +220,7 @@ PlasmaError PlasmaStore::CreateObject(const ObjectID& object_id, int64_t data_si
   entry->device_num = device_num;
   entry->create_time = std::time(nullptr);
   entry->construct_duration = -1;
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   if (device_num != 0) {
     DCHECK_OK(gpu_handle->ExportForIpc(&entry->ipc_handle));
     result->ipc_handle = entry->ipc_handle;
@@ -246,7 +246,7 @@ void PlasmaObject_init(PlasmaObject* object, ObjectTableEntry* entry) {
   DCHECK(object != nullptr);
   DCHECK(entry != nullptr);
   DCHECK(entry->state == ObjectState::PLASMA_SEALED);
-#ifdef PLASMA_GPU
+#ifdef PLASMA_CUDA
   if (entry->device_num != 0) {
     object->ipc_handle = entry->ipc_handle;
   }
diff --git a/cpp/src/plasma/store.h b/cpp/src/plasma/store.h
index 44fdf60..8d3facd 100644
--- a/cpp/src/plasma/store.h
+++ b/cpp/src/plasma/store.h
@@ -223,8 +223,8 @@ class PlasmaStore {
   std::unordered_map<int, std::unique_ptr<Client>> connected_clients_;
 
   std::unordered_set<ObjectID> deletion_cache_;
-#ifdef PLASMA_GPU
-  arrow::gpu::CudaDeviceManager* manager_;
+#ifdef PLASMA_CUDA
+  arrow::cuda::CudaDeviceManager* manager_;
 #endif
 };
 
diff --git a/cpp/src/plasma/test/client_tests.cc b/cpp/src/plasma/test/client_tests.cc
index 1ad6039..f820303 100644
--- a/cpp/src/plasma/test/client_tests.cc
+++ b/cpp/src/plasma/test/client_tests.cc
@@ -487,10 +487,10 @@ TEST_F(TestPlasmaStore, ManyObjectTest) {
   }
 }
 
-#ifdef PLASMA_GPU
-using arrow::gpu::CudaBuffer;
-using arrow::gpu::CudaBufferReader;
-using arrow::gpu::CudaBufferWriter;
+#ifdef PLASMA_CUDA
+using arrow::cuda::CudaBuffer;
+using arrow::cuda::CudaBufferReader;
+using arrow::cuda::CudaBufferWriter;
 
 namespace {
 
@@ -590,7 +590,7 @@ TEST_F(TestPlasmaStore, MultipleClientGPUTest) {
   AssertCudaRead(object_buffers[0].metadata, {5});
 }
 
-#endif  // PLASMA_GPU
+#endif  // PLASMA_CUDA
 
 }  // namespace plasma
 
diff --git a/dev/release/rat_exclude_files.txt b/dev/release/rat_exclude_files.txt
index e5e0411..0baf29e 100644
--- a/dev/release/rat_exclude_files.txt
+++ b/dev/release/rat_exclude_files.txt
@@ -79,7 +79,7 @@ dev/tasks/linux-packages/debian.ubuntu-trusty/watch
 dev/tasks/linux-packages/debian/compat
 dev/tasks/linux-packages/debian/control
 dev/tasks/linux-packages/debian/gir1.2-arrow-1.0.install
-dev/tasks/linux-packages/debian/gir1.2-arrow-gpu-1.0.install
+dev/tasks/linux-packages/debian/gir1.2-arrow-cuda-1.0.install
 dev/tasks/linux-packages/debian/gir1.2-parquet-1.0.install
 dev/tasks/linux-packages/debian/gir1.2-plasma-1.0.install
 dev/tasks/linux-packages/debian/libarrow-dev.install
@@ -88,10 +88,10 @@ dev/tasks/linux-packages/debian/libarrow-glib-doc.doc-base
 dev/tasks/linux-packages/debian/libarrow-glib-doc.install
 dev/tasks/linux-packages/debian/libarrow-glib-doc.links
 dev/tasks/linux-packages/debian/libarrow-glib12.install
-dev/tasks/linux-packages/debian/libarrow-gpu-dev.install
-dev/tasks/linux-packages/debian/libarrow-gpu-glib-dev.install
-dev/tasks/linux-packages/debian/libarrow-gpu-glib12.install
-dev/tasks/linux-packages/debian/libarrow-gpu12.install
+dev/tasks/linux-packages/debian/libarrow-cuda-dev.install
+dev/tasks/linux-packages/debian/libarrow-cuda-glib-dev.install
+dev/tasks/linux-packages/debian/libarrow-cuda-glib12.install
+dev/tasks/linux-packages/debian/libarrow-cuda12.install
 dev/tasks/linux-packages/debian/libarrow-python-dev.install
 dev/tasks/linux-packages/debian/libarrow-python12.install
 dev/tasks/linux-packages/debian/libarrow12.install
diff --git a/dev/tasks/linux-packages/debian/control b/dev/tasks/linux-packages/debian/control
index 3c66714..b5c6963 100644
--- a/dev/tasks/linux-packages/debian/control
+++ b/dev/tasks/linux-packages/debian/control
@@ -54,7 +54,7 @@ Description: Apache Arrow is a data processing library for analysis
  .
  This package provides C++ library files for Python support.
 
-Package: libarrow-gpu12
+Package: libarrow-cuda12
 Section: libs
 Architecture: any
 Multi-Arch: same
@@ -65,7 +65,7 @@ Depends:
   libarrow12 (= ${binary:Version})
 Description: Apache Arrow is a data processing library for analysis
  .
- This package provides C++ library files for GPU support.
+ This package provides C++ library files for CUDA support.
 
 Package: libarrow-dev
 Section: libdevel
@@ -90,17 +90,17 @@ Description: Apache Arrow is a data processing library for analysis
  .
  This package provides C++ header files for Python support.
 
-Package: libarrow-gpu-dev
+Package: libarrow-cuda-dev
 Section: libdevel
 Architecture: any
 Multi-Arch: same
 Depends:
   ${misc:Depends},
   libarrow-dev (= ${binary:Version}),
-  libarrow-gpu12 (= ${binary:Version})
+  libarrow-cuda12 (= ${binary:Version})
 Description: Apache Arrow is a data processing library for analysis
  .
- This package provides C++ header files for GPU support.
+ This package provides C++ header files for CUDA support.
 
 Package: libplasma12
 Section: libs
@@ -110,7 +110,7 @@ Pre-Depends: ${misc:Pre-Depends}
 Depends:
   ${misc:Depends},
   ${shlibs:Depends},
-  libarrow-gpu12 (= ${binary:Version})
+  libarrow-cuda12 (= ${binary:Version})
 Description: Plasma is an in-memory object store and cache for big data.
  .
  This package provides C++ library files to connect plasma_store_server.
@@ -133,7 +133,7 @@ Architecture: any
 Multi-Arch: same
 Depends:
   ${misc:Depends},
-  libarrow-gpu-dev (= ${binary:Version}),
+  libarrow-cuda-dev (= ${binary:Version}),
   libplasma12 (= ${binary:Version})
 Description: Plasma is an in-memory object store and cache for big data.
  .
@@ -213,7 +213,7 @@ Description: Apache Arrow is a data processing library for analysis
  .
  This package provides documentations.
 
-Package: libarrow-gpu-glib12
+Package: libarrow-cuda-glib12
 Section: libs
 Architecture: any
 Multi-Arch: same
@@ -222,12 +222,12 @@ Depends:
   ${misc:Depends},
   ${shlibs:Depends},
   libarrow-glib12 (= ${binary:Version}),
-  libarrow-gpu12 (= ${binary:Version})
+  libarrow-cuda12 (= ${binary:Version})
 Description: Apache Arrow is a data processing library for analysis
  .
- This package provides GLib based library files for GPU support.
+ This package provides GLib based library files for CUDA support.
 
-Package: gir1.2-arrow-gpu-1.0
+Package: gir1.2-arrow-cuda-1.0
 Section: introspection
 Architecture: any
 Multi-Arch: same
@@ -236,21 +236,21 @@ Depends:
   ${misc:Depends}
 Description: Apache Arrow is a data processing library for analysis
  .
- This package provides GObject Introspection typelib files for GPU support.
+ This package provides GObject Introspection typelib files for CUDA support.
 
-Package: libarrow-gpu-glib-dev
+Package: libarrow-cuda-glib-dev
 Section: libdevel
 Architecture: any
 Multi-Arch: same
 Depends:
   ${misc:Depends},
-  libarrow-gpu-dev (= ${binary:Version}),
+  libarrow-cuda-dev (= ${binary:Version}),
   libarrow-glib-dev (= ${binary:Version}),
-  libarrow-gpu-glib12 (= ${binary:Version}),
-  gir1.2-arrow-gpu-1.0 (= ${binary:Version})
+  libarrow-cuda-glib12 (= ${binary:Version}),
+  gir1.2-arrow-cuda-1.0 (= ${binary:Version})
 Description: Apache Arrow is a data processing library for analysis
  .
- This package provides GLib based header files for GPU support.
+ This package provides GLib based header files for CUDA support.
 
 Package: libplasma-glib12
 Section: libs
@@ -260,7 +260,7 @@ Pre-Depends: ${misc:Pre-Depends}
 Depends:
   ${misc:Depends},
   ${shlibs:Depends},
-  libarrow-gpu-glib12 (= ${binary:Version}),
+  libarrow-cuda-glib12 (= ${binary:Version}),
   libplasma12 (= ${binary:Version})
 Description: Plasma is an in-memory object store and cache for big data.
  .
@@ -284,7 +284,7 @@ Multi-Arch: same
 Depends:
   ${misc:Depends},
   libplasma-dev (= ${binary:Version}),
-  libarrow-gpu-glib-dev (= ${binary:Version}),
+  libarrow-cuda-glib-dev (= ${binary:Version}),
   libplasma-glib12 (= ${binary:Version}),
   gir1.2-plasma-1.0 (= ${binary:Version})
 Description: Plasma is an in-memory object store and cache for big data.
diff --git a/dev/tasks/linux-packages/debian/gir1.2-arrow-cuda-1.0.install b/dev/tasks/linux-packages/debian/gir1.2-arrow-cuda-1.0.install
new file mode 100644
index 0000000..ef0d9f5
--- /dev/null
+++ b/dev/tasks/linux-packages/debian/gir1.2-arrow-cuda-1.0.install
@@ -0,0 +1 @@
+usr/lib/*/girepository-1.0/ArrowCUDA-1.0.typelib
diff --git a/dev/tasks/linux-packages/debian/gir1.2-arrow-gpu-1.0.install b/dev/tasks/linux-packages/debian/gir1.2-arrow-gpu-1.0.install
deleted file mode 100644
index 10e0ca9..0000000
--- a/dev/tasks/linux-packages/debian/gir1.2-arrow-gpu-1.0.install
+++ /dev/null
@@ -1 +0,0 @@
-usr/lib/*/girepository-1.0/ArrowGPU-1.0.typelib
diff --git a/dev/tasks/linux-packages/debian/libarrow-cuda-dev.install b/dev/tasks/linux-packages/debian/libarrow-cuda-dev.install
new file mode 100644
index 0000000..2270d92
--- /dev/null
+++ b/dev/tasks/linux-packages/debian/libarrow-cuda-dev.install
@@ -0,0 +1,3 @@
+usr/lib/*/libarrow_cuda.a
+usr/lib/*/libarrow_cuda.so
+usr/lib/*/pkgconfig/arrow-cuda.pc
diff --git a/dev/tasks/linux-packages/debian/libarrow-cuda-glib-dev.install b/dev/tasks/linux-packages/debian/libarrow-cuda-glib-dev.install
new file mode 100644
index 0000000..7025fd2
--- /dev/null
+++ b/dev/tasks/linux-packages/debian/libarrow-cuda-glib-dev.install
@@ -0,0 +1,5 @@
+usr/include/arrow-cuda-glib/
+usr/lib/*/libarrow-cuda-glib.a
+usr/lib/*/libarrow-cuda-glib.so
+usr/lib/*/pkgconfig/arrow-cuda-glib.pc
+usr/share/gir-1.0/ArrowCUDA-1.0.gir
diff --git a/dev/tasks/linux-packages/debian/libarrow-cuda-glib12.install b/dev/tasks/linux-packages/debian/libarrow-cuda-glib12.install
new file mode 100644
index 0000000..a6d6375
--- /dev/null
+++ b/dev/tasks/linux-packages/debian/libarrow-cuda-glib12.install
@@ -0,0 +1 @@
+usr/lib/*/libarrow-cuda-glib.so.*
diff --git a/dev/tasks/linux-packages/debian/libarrow-cuda12.install b/dev/tasks/linux-packages/debian/libarrow-cuda12.install
new file mode 100644
index 0000000..5ae4646
--- /dev/null
+++ b/dev/tasks/linux-packages/debian/libarrow-cuda12.install
@@ -0,0 +1 @@
+usr/lib/*/libarrow_cuda.so.*
diff --git a/dev/tasks/linux-packages/debian/libarrow-gpu-dev.install b/dev/tasks/linux-packages/debian/libarrow-gpu-dev.install
deleted file mode 100644
index 1892fb8..0000000
--- a/dev/tasks/linux-packages/debian/libarrow-gpu-dev.install
+++ /dev/null
@@ -1,3 +0,0 @@
-usr/lib/*/libarrow_gpu.a
-usr/lib/*/libarrow_gpu.so
-usr/lib/*/pkgconfig/arrow-gpu.pc
diff --git a/dev/tasks/linux-packages/debian/libarrow-gpu-glib-dev.install b/dev/tasks/linux-packages/debian/libarrow-gpu-glib-dev.install
deleted file mode 100644
index 9b3ef8f..0000000
--- a/dev/tasks/linux-packages/debian/libarrow-gpu-glib-dev.install
+++ /dev/null
@@ -1,5 +0,0 @@
-usr/include/arrow-gpu-glib/
-usr/lib/*/libarrow-gpu-glib.a
-usr/lib/*/libarrow-gpu-glib.so
-usr/lib/*/pkgconfig/arrow-gpu-glib.pc
-usr/share/gir-1.0/ArrowGPU-1.0.gir
diff --git a/dev/tasks/linux-packages/debian/libarrow-gpu-glib12.install b/dev/tasks/linux-packages/debian/libarrow-gpu-glib12.install
deleted file mode 100644
index 4d97e5a..0000000
--- a/dev/tasks/linux-packages/debian/libarrow-gpu-glib12.install
+++ /dev/null
@@ -1 +0,0 @@
-usr/lib/*/libarrow-gpu-glib.so.*
diff --git a/dev/tasks/linux-packages/debian/libarrow-gpu12.install b/dev/tasks/linux-packages/debian/libarrow-gpu12.install
deleted file mode 100644
index cabd7e4..0000000
--- a/dev/tasks/linux-packages/debian/libarrow-gpu12.install
+++ /dev/null
@@ -1 +0,0 @@
-usr/lib/*/libarrow_gpu.so.*
diff --git a/dev/tasks/linux-packages/debian/rules b/dev/tasks/linux-packages/debian/rules
index 8cc9fe2..f3cc2a0 100755
--- a/dev/tasks/linux-packages/debian/rules
+++ b/dev/tasks/linux-packages/debian/rules
@@ -34,7 +34,7 @@ override_dh_auto_configure:
 	  -DARROW_PROTOBUF_USE_SHARED=ON \
 	  -DPythonInterp_FIND_VERSION=ON \
 	  -DPythonInterp_FIND_VERSION_MAJOR=3 \
-	  -DARROW_GPU=ON
+	  -DARROW_CUDA=ON
 	dh_auto_configure \
 	  --sourcedirectory=c_glib \
 	  --builddirectory=c_glib_build \
diff --git a/dev/tasks/tasks.yml b/dev/tasks/tasks.yml
index d5d362a..bd49616 100644
--- a/dev/tasks/tasks.yml
+++ b/dev/tasks/tasks.yml
@@ -293,7 +293,7 @@ tasks:
       - apache-arrow_{no_rc_version}-1.dsc
       - apache-arrow_{no_rc_version}.orig.tar.gz
       - gir1.2-arrow-1.0_{no_rc_version}-1_amd64.deb
-      - gir1.2-arrow-gpu-1.0_{no_rc_version}-1_amd64.deb
+      - gir1.2-arrow-cuda-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-parquet-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-plasma-1.0_{no_rc_version}-1_amd64.deb
       - libarrow-dev_{no_rc_version}-1_amd64.deb
@@ -301,12 +301,12 @@ tasks:
       - libarrow-glib-doc_{no_rc_version}-1_all.deb
       - libarrow-glib12-dbgsym_{no_rc_version}-1_amd64.deb
       - libarrow-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib12-dbgsym_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu12-dbgsym_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib12-dbgsym_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda12-dbgsym_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda12_{no_rc_version}-1_amd64.deb
       - libarrow-python-dev_{no_rc_version}-1_amd64.deb
       - libarrow-python12-dbgsym_{no_rc_version}-1_amd64.deb
       - libarrow-python12_{no_rc_version}-1_amd64.deb
@@ -375,17 +375,17 @@ tasks:
       - apache-arrow_{no_rc_version}-1.dsc
       - apache-arrow_{no_rc_version}.orig.tar.gz
       - gir1.2-arrow-1.0_{no_rc_version}-1_amd64.deb
-      - gir1.2-arrow-gpu-1.0_{no_rc_version}-1_amd64.deb
+      - gir1.2-arrow-cuda-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-parquet-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-plasma-1.0_{no_rc_version}-1_amd64.deb
       - libarrow-dev_{no_rc_version}-1_amd64.deb
       - libarrow-glib-dev_{no_rc_version}-1_amd64.deb
       - libarrow-glib-doc_{no_rc_version}-1_all.deb
       - libarrow-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda12_{no_rc_version}-1_amd64.deb
       - libarrow-python-dev_{no_rc_version}-1_amd64.deb
       - libarrow-python12_{no_rc_version}-1_amd64.deb
       - libarrow12_{no_rc_version}-1_amd64.deb
@@ -415,17 +415,17 @@ tasks:
       - apache-arrow_{no_rc_version}-1.dsc
       - apache-arrow_{no_rc_version}.orig.tar.gz
       - gir1.2-arrow-1.0_{no_rc_version}-1_amd64.deb
-      - gir1.2-arrow-gpu-1.0_{no_rc_version}-1_amd64.deb
+      - gir1.2-arrow-cuda-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-parquet-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-plasma-1.0_{no_rc_version}-1_amd64.deb
       - libarrow-dev_{no_rc_version}-1_amd64.deb
       - libarrow-glib-dev_{no_rc_version}-1_amd64.deb
       - libarrow-glib-doc_{no_rc_version}-1_all.deb
       - libarrow-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda12_{no_rc_version}-1_amd64.deb
       - libarrow-python-dev_{no_rc_version}-1_amd64.deb
       - libarrow-python12_{no_rc_version}-1_amd64.deb
       - libarrow12_{no_rc_version}-1_amd64.deb
@@ -455,17 +455,17 @@ tasks:
       - apache-arrow_{no_rc_version}-1.dsc
       - apache-arrow_{no_rc_version}.orig.tar.gz
       - gir1.2-arrow-1.0_{no_rc_version}-1_amd64.deb
-      - gir1.2-arrow-gpu-1.0_{no_rc_version}-1_amd64.deb
+      - gir1.2-arrow-cuda-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-parquet-1.0_{no_rc_version}-1_amd64.deb
       - gir1.2-plasma-1.0_{no_rc_version}-1_amd64.deb
       - libarrow-dev_{no_rc_version}-1_amd64.deb
       - libarrow-glib-dev_{no_rc_version}-1_amd64.deb
       - libarrow-glib-doc_{no_rc_version}-1_all.deb
       - libarrow-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib-dev_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu-glib12_{no_rc_version}-1_amd64.deb
-      - libarrow-gpu12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib-dev_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda-glib12_{no_rc_version}-1_amd64.deb
+      - libarrow-cuda12_{no_rc_version}-1_amd64.deb
       - libarrow-python-dev_{no_rc_version}-1_amd64.deb
       - libarrow-python12_{no_rc_version}-1_amd64.deb
       - libarrow12_{no_rc_version}-1_amd64.deb
diff --git a/python/CMakeLists.txt b/python/CMakeLists.txt
index 15a3479..1a87454 100644
--- a/python/CMakeLists.txt
+++ b/python/CMakeLists.txt
@@ -17,9 +17,6 @@
 #
 # Includes code assembled from BSD/MIT/Apache-licensed code from some 3rd-party
 # projects, including Kudu, Impala, and libdynd. See python/LICENSE.txt
-#
-# TODO(ARROW-3209): rename arrow_gpu to arrow_cuda
-#
 
 cmake_minimum_required(VERSION 2.7)
 project(pyarrow)
@@ -393,13 +390,13 @@ if (PYARROW_BUILD_CUDA)
       endif()
     endif()
     if (MSVC)
-      ADD_THIRDPARTY_LIB(arrow_gpu
+      ADD_THIRDPARTY_LIB(arrow_cuda
         SHARED_LIB ${ARROW_CUDA_SHARED_IMP_LIB})
     else()
-      ADD_THIRDPARTY_LIB(arrow_gpu
+      ADD_THIRDPARTY_LIB(arrow_cuda
         SHARED_LIB ${ARROW_CUDA_SHARED_LIB})
     endif()
-    set(LINK_LIBS ${LINK_LIBS} arrow_gpu_shared)
+    set(LINK_LIBS ${LINK_LIBS} arrow_cuda_shared)
     set(CYTHON_EXTENSIONS ${CYTHON_EXTENSIONS} _cuda)
   endif()
 endif()
diff --git a/python/pyarrow/includes/libarrow_cuda.pxd b/python/pyarrow/includes/libarrow_cuda.pxd
index 0e0d5e1..cedc432 100644
--- a/python/pyarrow/includes/libarrow_cuda.pxd
+++ b/python/pyarrow/includes/libarrow_cuda.pxd
@@ -19,9 +19,9 @@
 
 from pyarrow.includes.libarrow cimport *
 
-cdef extern from "arrow/gpu/cuda_api.h" namespace "arrow::gpu" nogil:
+cdef extern from "arrow/gpu/cuda_api.h" namespace "arrow::cuda" nogil:
 
-    cdef cppclass CCudaDeviceManager" arrow::gpu::CudaDeviceManager":
+    cdef cppclass CCudaDeviceManager" arrow::cuda::CudaDeviceManager":
         @staticmethod
         CStatus GetInstance(CCudaDeviceManager** manager)
         CStatus GetContext(int gpu_number, shared_ptr[CCudaContext]* ctx)
@@ -33,7 +33,7 @@ cdef extern from "arrow/gpu/cuda_api.h" namespace "arrow::gpu" nogil:
         # CStatus FreeHost(void* data, int64_t nbytes)
         int num_devices() const
 
-    cdef cppclass CCudaContext" arrow::gpu::CudaContext":
+    cdef cppclass CCudaContext" arrow::cuda::CudaContext":
         shared_ptr[CCudaContext]  shared_from_this()
         # CStatus Close()
         CStatus Allocate(int64_t nbytes, shared_ptr[CCudaBuffer]* out)
@@ -47,13 +47,13 @@ cdef extern from "arrow/gpu/cuda_api.h" namespace "arrow::gpu" nogil:
         const void* handle() const
         int device_number() const
 
-    cdef cppclass CCudaIpcMemHandle" arrow::gpu::CudaIpcMemHandle":
+    cdef cppclass CCudaIpcMemHandle" arrow::cuda::CudaIpcMemHandle":
         @staticmethod
         CStatus FromBuffer(const void* opaque_handle,
                            shared_ptr[CCudaIpcMemHandle]* handle)
         CStatus Serialize(CMemoryPool* pool, shared_ptr[CBuffer]* out) const
 
-    cdef cppclass CCudaBuffer" arrow::gpu::CudaBuffer"(CBuffer):
+    cdef cppclass CCudaBuffer" arrow::cuda::CudaBuffer"(CBuffer):
         CCudaBuffer(uint8_t* data, int64_t size,
                     const shared_ptr[CCudaContext]& context,
                     c_bool own_data=false, c_bool is_ipc=false)
@@ -73,17 +73,18 @@ cdef extern from "arrow/gpu/cuda_api.h" namespace "arrow::gpu" nogil:
         CStatus ExportForIpc(shared_ptr[CCudaIpcMemHandle]* handle)
         shared_ptr[CCudaContext] context() const
 
-    cdef cppclass CCudaHostBuffer" arrow::gpu::CudaHostBuffer"(CMutableBuffer):
+    cdef cppclass \
+            CCudaHostBuffer" arrow::cuda::CudaHostBuffer"(CMutableBuffer):
         pass
 
     cdef cppclass \
-            CCudaBufferReader" arrow::gpu::CudaBufferReader"(CBufferReader):
+            CCudaBufferReader" arrow::cuda::CudaBufferReader"(CBufferReader):
         CCudaBufferReader(const shared_ptr[CBuffer]& buffer)
         CStatus Read(int64_t nbytes, int64_t* bytes_read, void* buffer)
         CStatus Read(int64_t nbytes, shared_ptr[CBuffer]* out)
 
     cdef cppclass \
-            CCudaBufferWriter" arrow::gpu::CudaBufferWriter"(WritableFile):
+            CCudaBufferWriter" arrow::cuda::CudaBufferWriter"(WritableFile):
         CCudaBufferWriter(const shared_ptr[CCudaBuffer]& buffer)
         CStatus Close()
         CStatus Flush()
@@ -98,17 +99,17 @@ cdef extern from "arrow/gpu/cuda_api.h" namespace "arrow::gpu" nogil:
     CStatus AllocateCudaHostBuffer(int device_number, const int64_t size,
                                    shared_ptr[CCudaHostBuffer]* out)
 
-    # Cuda prefix is added to avoid picking up arrow::gpu functions
+    # Cuda prefix is added to avoid picking up arrow::cuda functions
     # from arrow namespace.
-    CStatus CudaSerializeRecordBatch" arrow::gpu::SerializeRecordBatch"\
+    CStatus CudaSerializeRecordBatch" arrow::cuda::SerializeRecordBatch"\
         (const CRecordBatch& batch,
          CCudaContext* ctx,
          shared_ptr[CCudaBuffer]* out)
-    CStatus CudaReadMessage" arrow::gpu::ReadMessage"\
+    CStatus CudaReadMessage" arrow::cuda::ReadMessage"\
         (CCudaBufferReader* reader,
          CMemoryPool* pool,
          unique_ptr[CMessage]* message)
-    CStatus CudaReadRecordBatch" arrow::gpu::ReadRecordBatch"\
+    CStatus CudaReadRecordBatch" arrow::cuda::ReadRecordBatch"\
         (const shared_ptr[CSchema]& schema,
          const shared_ptr[CCudaBuffer]& buffer,
          CMemoryPool* pool, shared_ptr[CRecordBatch]* out)
diff --git a/ruby/README.md b/ruby/README.md
index aac714e..4248658 100644
--- a/ruby/README.md
+++ b/ruby/README.md
@@ -23,4 +23,12 @@ There are the official Ruby bindings for Apache Arrow.
 
 [Red Arrow](https://github.com/apache/arrow/tree/master/ruby/red-arrow) is the base Apache Arrow bindings.
 
-[Red Arrow GPU](https://github.com/apache/arrow/tree/master/ruby/red-arrow-gpu) is the Apache Arrow bindings of GPU part.
+[Red Arrow CUDA](https://github.com/apache/arrow/tree/master/ruby/red-arrow-cuda) provides the Apache Arrow bindings for the CUDA part.
+
+[Red Gandiva](https://github.com/apache/arrow/tree/master/ruby/red-gandiva) provides the Gandiva bindings.
+
+[Red Plasma](https://github.com/apache/arrow/tree/master/ruby/red-plasma) provides the Plasma bindings.
+
+[Red Parquet](https://github.com/apache/arrow/tree/master/ruby/red-parquet) provides the Parquet bindings.
+
+
diff --git a/ruby/red-arrow-gpu/.gitignore b/ruby/red-arrow-cuda/.gitignore
similarity index 96%
rename from ruby/red-arrow-gpu/.gitignore
rename to ruby/red-arrow-cuda/.gitignore
index 161ac05..3ec5511 100644
--- a/ruby/red-arrow-gpu/.gitignore
+++ b/ruby/red-arrow-cuda/.gitignore
@@ -15,6 +15,6 @@
 # specific language governing permissions and limitations
 # under the License.
 
-/lib/arrow-gpu/version.rb
+/lib/arrow-cuda/version.rb
 
 /pkg/
diff --git a/ruby/red-arrow-gpu/Gemfile b/ruby/red-arrow-cuda/Gemfile
similarity index 100%
rename from ruby/red-arrow-gpu/Gemfile
rename to ruby/red-arrow-cuda/Gemfile
diff --git a/ruby/red-arrow-gpu/LICENSE.txt b/ruby/red-arrow-cuda/LICENSE.txt
similarity index 100%
rename from ruby/red-arrow-gpu/LICENSE.txt
rename to ruby/red-arrow-cuda/LICENSE.txt
diff --git a/ruby/red-arrow-gpu/NOTICE.txt b/ruby/red-arrow-cuda/NOTICE.txt
similarity index 100%
rename from ruby/red-arrow-gpu/NOTICE.txt
rename to ruby/red-arrow-cuda/NOTICE.txt
diff --git a/ruby/red-arrow-cuda/README.md b/ruby/red-arrow-cuda/README.md
new file mode 100644
index 0000000..76fa51c
--- /dev/null
+++ b/ruby/red-arrow-cuda/README.md
@@ -0,0 +1,62 @@
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Red Arrow CUDA - Apache Arrow CUDA Ruby
+
+Red Arrow CUDA provides the Ruby bindings for Apache Arrow CUDA. It is based on GObject Introspection.
+
+[Apache Arrow CUDA](https://arrow.apache.org/) is an in-memory columnar data store on GPUs.
+
+[GObject Introspection](https://wiki.gnome.org/action/show/Projects/GObjectIntrospection) is middleware for building language bindings for C libraries. It can generate language bindings automatically at runtime.
+
+Red Arrow CUDA uses [Apache Arrow CUDA GLib](https://github.com/apache/arrow/tree/master/c_glib) and the [gobject-introspection gem](https://rubygems.org/gems/gobject-introspection) to generate Ruby bindings for Apache Arrow CUDA.
+
+Apache Arrow CUDA GLib is a C wrapper for [Apache Arrow CUDA C++](https://github.com/apache/arrow/tree/master/cpp). GObject Introspection can't use Apache Arrow CUDA C++ directly, so Apache Arrow CUDA GLib acts as a bridge between Apache Arrow CUDA C++ and GObject Introspection.
+
+The gobject-introspection gem provides the Ruby bindings for GObject Introspection; Red Arrow CUDA uses GObject Introspection through this gem.
+
+## Install
+
+Install Apache Arrow CUDA GLib before installing Red Arrow CUDA. Use [packages.red-data-tools.org](https://github.com/red-data-tools/packages.red-data-tools.org) to install Apache Arrow CUDA GLib.
+
+Note that the Apache Arrow CUDA GLib packages are "unofficial"; "official" packages will be released in the future.
+
+Install Red Arrow CUDA after installing Apache Arrow CUDA GLib:
+
+```text
+% gem install red-arrow-cuda
+```
+
+## Usage
+
+```ruby
+require "arrow-cuda"
+
+manager = ArrowCUDA::DeviceManager.new
+if manager.n_devices.zero?
+  raise "No GPU is found"
+end
+
+context = manager[0]
+buffer = ArrowCUDA::Buffer.new(context, 128)
+ArrowCUDA::BufferOutputStream.open(buffer) do |stream|
+  stream.write("Hello World")
+end
+puts buffer.copy_to_host(0, 11) # => "Hello World"
+```
diff --git a/ruby/red-arrow-gpu/Rakefile b/ruby/red-arrow-cuda/Rakefile
similarity index 100%
rename from ruby/red-arrow-gpu/Rakefile
rename to ruby/red-arrow-cuda/Rakefile
diff --git a/ruby/red-arrow-gpu/dependency-check/Rakefile b/ruby/red-arrow-cuda/dependency-check/Rakefile
similarity index 88%
rename from ruby/red-arrow-gpu/dependency-check/Rakefile
rename to ruby/red-arrow-cuda/dependency-check/Rakefile
index 0c22848..c057a1d 100644
--- a/ruby/red-arrow-gpu/dependency-check/Rakefile
+++ b/ruby/red-arrow-cuda/dependency-check/Rakefile
@@ -33,9 +33,9 @@ end
 namespace :dependency do
   desc "Check dependency"
   task :check do
-    unless PKGConfig.check_version?("arrow-gpu-glib")
-      unless NativePackageInstaller.install(:debian => "libarrow-gpu-glib-dev",
-                                            :redhat => "arrow-gpu-glib-devel")
+    unless PKGConfig.check_version?("arrow-cuda-glib")
+      unless NativePackageInstaller.install(:debian => "libarrow-cuda-glib-dev",
+                                            :redhat => "arrow-cuda-glib-devel")
         exit(false)
       end
     end
diff --git a/ruby/red-arrow-gpu/lib/arrow-gpu.rb b/ruby/red-arrow-cuda/lib/arrow-cuda.rb
similarity index 92%
rename from ruby/red-arrow-gpu/lib/arrow-gpu.rb
rename to ruby/red-arrow-cuda/lib/arrow-cuda.rb
index 10fdcc3..1fc13d0 100644
--- a/ruby/red-arrow-gpu/lib/arrow-gpu.rb
+++ b/ruby/red-arrow-cuda/lib/arrow-cuda.rb
@@ -17,11 +17,11 @@
 
 require "arrow"
 
-require "arrow-gpu/version"
+require "arrow-cuda/version"
 
-require "arrow-gpu/loader"
+require "arrow-cuda/loader"
 
-module ArrowGPU
+module ArrowCUDA
   class Error < StandardError
   end
 
diff --git a/ruby/red-arrow-gpu/lib/arrow-gpu/cuda-device-manager.rb b/ruby/red-arrow-cuda/lib/arrow-cuda/device-manager.rb
similarity index 95%
rename from ruby/red-arrow-gpu/lib/arrow-gpu/cuda-device-manager.rb
rename to ruby/red-arrow-cuda/lib/arrow-cuda/device-manager.rb
index 163128b..bbef749 100644
--- a/ruby/red-arrow-gpu/lib/arrow-gpu/cuda-device-manager.rb
+++ b/ruby/red-arrow-cuda/lib/arrow-cuda/device-manager.rb
@@ -15,8 +15,8 @@
 # specific language governing permissions and limitations
 # under the License.
 
-module ArrowGPU
-  class CUDADeviceManager
+module ArrowCUDA
+  class DeviceManager
     # Experimental.
     #
     # Can we think device manager is a container of contexts?
diff --git a/ruby/red-arrow-gpu/lib/arrow-gpu/loader.rb b/ruby/red-arrow-cuda/lib/arrow-cuda/loader.rb
similarity index 91%
rename from ruby/red-arrow-gpu/lib/arrow-gpu/loader.rb
rename to ruby/red-arrow-cuda/lib/arrow-cuda/loader.rb
index b9dc57c..6b2afc4 100644
--- a/ruby/red-arrow-gpu/lib/arrow-gpu/loader.rb
+++ b/ruby/red-arrow-cuda/lib/arrow-cuda/loader.rb
@@ -15,11 +15,11 @@
 # specific language governing permissions and limitations
 # under the License.
 
-module ArrowGPU
+module ArrowCUDA
   class Loader < GObjectIntrospection::Loader
     class << self
       def load
-        super("ArrowGPU", ArrowGPU)
+        super("ArrowCUDA", ArrowCUDA)
       end
     end
 
@@ -29,7 +29,7 @@ module ArrowGPU
     end
 
     def require_libraries
-      require "arrow-gpu/cuda-device-manager"
+      require "arrow-cuda/device-manager"
     end
   end
 end
diff --git a/ruby/red-arrow-gpu/red-arrow-gpu.gemspec b/ruby/red-arrow-cuda/red-arrow-cuda.gemspec
similarity index 84%
rename from ruby/red-arrow-gpu/red-arrow-gpu.gemspec
rename to ruby/red-arrow-cuda/red-arrow-cuda.gemspec
index 340d41e..b2ee982 100644
--- a/ruby/red-arrow-gpu/red-arrow-gpu.gemspec
+++ b/ruby/red-arrow-cuda/red-arrow-cuda.gemspec
@@ -20,11 +20,11 @@
 require_relative "version"
 
 Gem::Specification.new do |spec|
-  spec.name = "red-arrow-gpu"
+  spec.name = "red-arrow-cuda"
   version_components = [
-    ArrowGPU::Version::MAJOR.to_s,
-    ArrowGPU::Version::MINOR.to_s,
-    ArrowGPU::Version::MICRO.to_s,
+    ArrowCUDA::Version::MAJOR.to_s,
+    ArrowCUDA::Version::MINOR.to_s,
+    ArrowCUDA::Version::MICRO.to_s,
     # "beta1",
   ]
   spec.version = version_components.join(".")
@@ -32,9 +32,9 @@ Gem::Specification.new do |spec|
   spec.authors = ["Apache Arrow Developers"]
   spec.email = ["dev@arrow.apache.org"]
 
-  spec.summary = "Red Arrow GPU is the Ruby bindings of Apache Arrow GPU"
+  spec.summary = "Red Arrow CUDA provides the Ruby bindings for Apache Arrow CUDA"
   spec.description =
-    "Apache Arrow GPU is a common in-memory columnar data store on GPU. " +
+    "Apache Arrow CUDA is a common in-memory columnar data store on GPUs. " +
     "It's useful to share and process large data."
   spec.license = "Apache-2.0"
   spec.files = ["README.md", "Rakefile", "Gemfile", "#{spec.name}.gemspec"]
diff --git a/ruby/red-arrow-gpu/test/helper.rb b/ruby/red-arrow-cuda/test/helper.rb
similarity index 97%
rename from ruby/red-arrow-gpu/test/helper.rb
rename to ruby/red-arrow-cuda/test/helper.rb
index 772636a..4d01833 100644
--- a/ruby/red-arrow-gpu/test/helper.rb
+++ b/ruby/red-arrow-cuda/test/helper.rb
@@ -18,6 +18,6 @@
 require_relative "../../red-arrow/version"
 require_relative "../version"
 
-require "arrow-gpu"
+require "arrow-cuda"
 
 require "test-unit"
diff --git a/ruby/red-arrow-gpu/test/run-test.rb b/ruby/red-arrow-cuda/test/run-test.rb
similarity index 100%
rename from ruby/red-arrow-gpu/test/run-test.rb
rename to ruby/red-arrow-cuda/test/run-test.rb
diff --git a/ruby/red-arrow-gpu/test/test-cuda.rb b/ruby/red-arrow-cuda/test/test-cuda.rb
similarity index 87%
rename from ruby/red-arrow-gpu/test/test-cuda.rb
rename to ruby/red-arrow-cuda/test/test-cuda.rb
index 05fd6cc..a48b687 100644
--- a/ruby/red-arrow-gpu/test/test-cuda.rb
+++ b/ruby/red-arrow-cuda/test/test-cuda.rb
@@ -17,7 +17,7 @@
 
 class TestCUDA < Test::Unit::TestCase
   def setup
-    @manager = ArrowGPU::CUDADeviceManager.new
+    @manager = ArrowCUDA::DeviceManager.new
     omit("At least one GPU is required") if @manager.n_devices.zero?
     @context = @manager[0]
   end
@@ -25,11 +25,11 @@ class TestCUDA < Test::Unit::TestCase
   sub_test_case("BufferOutputStream") do
     def setup
       super
-      @buffer = ArrowGPU::CUDABuffer.new(@context, 128)
+      @buffer = ArrowCUDA::Buffer.new(@context, 128)
     end
 
     def test_new
-      ArrowGPU::CUDABufferOutputStream.open(@buffer) do |stream|
+      ArrowCUDA::BufferOutputStream.open(@buffer) do |stream|
         stream.write("Hello World")
       end
       assert_equal("Hello World", @buffer.copy_to_host(0, 11).to_s)
diff --git a/ruby/red-arrow-gpu/version.rb b/ruby/red-arrow-cuda/version.rb
similarity index 94%
rename from ruby/red-arrow-gpu/version.rb
rename to ruby/red-arrow-cuda/version.rb
index fc0d37e..c8bbbc7 100644
--- a/ruby/red-arrow-gpu/version.rb
+++ b/ruby/red-arrow-cuda/version.rb
@@ -20,7 +20,7 @@ require "pathname"
 version_rb_path = Pathname.new(__FILE__)
 base_dir = version_rb_path.dirname
 pom_xml_path = base_dir.join("..", "..", "java", "pom.xml")
-lib_version_rb_path = base_dir.join("lib", "arrow-gpu", "version.rb")
+lib_version_rb_path = base_dir.join("lib", "arrow-cuda", "version.rb")
 
 need_update = false
 if not lib_version_rb_path.exist?
@@ -53,7 +53,7 @@ if need_update
 # specific language governing permissions and limitations
 # under the License.
 
-module ArrowGPU
+module ArrowCUDA
   module Version
     MAJOR = #{major}
     MINOR = #{minor}
@@ -68,4 +68,4 @@ end
   end
 end
 
-require_relative "lib/arrow-gpu/version"
+require_relative "lib/arrow-cuda/version"
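The version.rb above derives the gem's MAJOR/MINOR/MICRO components from the `<version>` element of `java/pom.xml`. A minimal sketch of that extraction, using an inline hypothetical XML sample rather than the real pom.xml:

```ruby
# Hypothetical pom.xml fragment; the real file lives at java/pom.xml.
pom_xml = <<~XML
  <project>
    <version>0.12.0-SNAPSHOT</version>
  </project>
XML

# Pull out the version string and strip the -SNAPSHOT suffix, then
# split it into the three components the gemspec consumes.
version = pom_xml[/<version>(.+?)<\/version>/, 1].sub(/-SNAPSHOT\z/, "")
major, minor, micro = version.split(".")
puts "#{major}.#{minor}.#{micro}"  # prints "0.12.0"
```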
diff --git a/ruby/red-arrow-gpu/README.md b/ruby/red-arrow-gpu/README.md
deleted file mode 100644
index ad76c13..0000000
--- a/ruby/red-arrow-gpu/README.md
+++ /dev/null
@@ -1,62 +0,0 @@
-<!---
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-
-# Red Arrow GPU - Apache Arrow GPU Ruby
-
-Red Arrow GPU is the Ruby bindings of Apache Arrow GPU. Red Arrow GPU is based on GObject Introspection.
-
-[Apache Arrow GPU](https://arrow.apache.org/) is an in-memory columnar data store on GPU.
-
-[GObject Introspection](https://wiki.gnome.org/action/show/Projects/GObjectIntrospection) is a middleware for language bindings of C library. GObject Introspection can generate language bindings automatically at runtime.
-
-Red Arrow GPU uses [Apache Arrow GPU GLib](https://github.com/apache/arrow/tree/master/c_glib) and [gobject-introspection gem](https://rubygems.org/gems/gobject-introspection) to generate Ruby bindings of Apache Arrow GPU.
-
-Apache Arrow GPU GLib is a C wrapper for [Apache Arrow GPU C++](https://github.com/apache/arrow/tree/master/cpp). GObject Introspection can't use Apache Arrow GPU C++ directly. Apache Arrow GPU GLib is a bridge between Apache Arrow GPU C++ and GObject Introspection.
-
-gobject-introspection gem is a Ruby bindings of GObject Introspection. Red Arrow GPU uses GObject Introspection via gobject-introspection gem.
-
-## Install
-
-Install Apache Arrow GPU GLib before install Red Arrow GPU. Use [packages.red-data-tools.org](https://github.com/red-data-tools/packages.red-data-tools.org) for installing Apache Arrow GPU GLib.
-
-Note that the Apache Arrow GPU GLib packages are "unofficial". "Official" packages will be released in the future.
-
-Install Red Arrow GPU after you install Apache Arrow GPU GLib:
-
-```text
-% gem install red-arrow-gpu
-```
-
-## Usage
-
-```ruby
-require "arrow-gpu"
-
-manager = ArrowGPU::CUDADeviceManager.new
-if manager.n_devices.zero?
-  raise "No GPU is found"
-end
-
-context = manager[0]
-buffer = ArrowGPU::CUDABuffer.new(context, 128)
-ArrowGPU::CUDABufferOutputStream.open(buffer) do |stream|
-  stream.write("Hello World")
-end
-puts buffer.copy_to_host(0, 11) # => "Hello World"
-```