Posted to commits@arrow.apache.org by uw...@apache.org on 2018/12/23 16:31:48 UTC

[41/51] [partial] arrow-site git commit: Upload nightly docs

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.time64.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.time64.rst.txt b/docs/latest/_sources/python/generated/pyarrow.time64.rst.txt
new file mode 100644
index 0000000..af5408b
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.time64.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.time64
+==============
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: time64
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.timestamp.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.timestamp.rst.txt b/docs/latest/_sources/python/generated/pyarrow.timestamp.rst.txt
new file mode 100644
index 0000000..f015fea
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.timestamp.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.timestamp
+=================
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: timestamp
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.total_allocated_bytes.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.total_allocated_bytes.rst.txt b/docs/latest/_sources/python/generated/pyarrow.total_allocated_bytes.rst.txt
new file mode 100644
index 0000000..188244d
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.total_allocated_bytes.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.total\_allocated\_bytes
+===============================
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: total_allocated_bytes
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_binary.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_binary.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_binary.rst.txt
new file mode 100644
index 0000000..9a4a12f
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_binary.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_binary
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_binary
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_boolean.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_boolean.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_boolean.rst.txt
new file mode 100644
index 0000000..fe712ac
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_boolean.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_boolean
+=========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_boolean
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_date.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_date.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_date.rst.txt
new file mode 100644
index 0000000..59f3813
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_date.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_date
+======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_date
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_date32.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_date32.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_date32.rst.txt
new file mode 100644
index 0000000..aa08063
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_date32.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_date32
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_date32
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_date64.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_date64.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_date64.rst.txt
new file mode 100644
index 0000000..076d551
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_date64.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_date64
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_date64
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_decimal.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_decimal.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_decimal.rst.txt
new file mode 100644
index 0000000..eabc7ab
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_decimal.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_decimal
+=========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_decimal
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_dictionary.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_dictionary.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_dictionary.rst.txt
new file mode 100644
index 0000000..6764e20
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_dictionary.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_dictionary
+============================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_dictionary
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_fixed_size_binary.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_fixed_size_binary.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_fixed_size_binary.rst.txt
new file mode 100644
index 0000000..19e8e4c
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_fixed_size_binary.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_fixed\_size\_binary
+=====================================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_fixed_size_binary
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_float16.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_float16.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_float16.rst.txt
new file mode 100644
index 0000000..8b74167
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_float16.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_float16
+=========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_float16
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_float32.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_float32.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_float32.rst.txt
new file mode 100644
index 0000000..ab821a7
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_float32.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_float32
+=========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_float32
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_float64.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_float64.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_float64.rst.txt
new file mode 100644
index 0000000..527807b
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_float64.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_float64
+=========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_float64
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_floating.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_floating.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_floating.rst.txt
new file mode 100644
index 0000000..124dbe3
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_floating.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_floating
+==========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_floating
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_int16.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_int16.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_int16.rst.txt
new file mode 100644
index 0000000..03bc84a
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_int16.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_int16
+=======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_int16
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_int32.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_int32.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_int32.rst.txt
new file mode 100644
index 0000000..4e8a85c
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_int32.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_int32
+=======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_int32
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_int64.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_int64.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_int64.rst.txt
new file mode 100644
index 0000000..645fcd2
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_int64.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_int64
+=======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_int64
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_int8.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_int8.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_int8.rst.txt
new file mode 100644
index 0000000..1cb8007
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_int8.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_int8
+======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_int8
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_integer.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_integer.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_integer.rst.txt
new file mode 100644
index 0000000..ce297b9
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_integer.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_integer
+=========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_integer
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_list.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_list.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_list.rst.txt
new file mode 100644
index 0000000..8eca91c
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_list.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_list
+======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_list
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_map.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_map.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_map.rst.txt
new file mode 100644
index 0000000..8b7c266
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_map.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_map
+=====================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_map
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_nested.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_nested.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_nested.rst.txt
new file mode 100644
index 0000000..2cbb878
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_nested.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_nested
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_nested
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_null.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_null.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_null.rst.txt
new file mode 100644
index 0000000..61d2b61
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_null.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_null
+======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_null
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_signed_integer.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_signed_integer.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_signed_integer.rst.txt
new file mode 100644
index 0000000..1387d2f
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_signed_integer.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_signed\_integer
+=================================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_signed_integer
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_string.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_string.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_string.rst.txt
new file mode 100644
index 0000000..fc0519c
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_string.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_string
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_string
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_struct.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_struct.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_struct.rst.txt
new file mode 100644
index 0000000..814ea6b
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_struct.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_struct
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_struct
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_temporal.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_temporal.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_temporal.rst.txt
new file mode 100644
index 0000000..cfa9fd0
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_temporal.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_temporal
+==========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_temporal
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_time.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_time.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_time.rst.txt
new file mode 100644
index 0000000..be8802f
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_time.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_time
+======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_time
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_time32.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_time32.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_time32.rst.txt
new file mode 100644
index 0000000..cb6a55b
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_time32.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_time32
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_time32
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_time64.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_time64.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_time64.rst.txt
new file mode 100644
index 0000000..540c8eb
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_time64.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_time64
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_time64
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_timestamp.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_timestamp.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_timestamp.rst.txt
new file mode 100644
index 0000000..6832928
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_timestamp.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_timestamp
+===========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_timestamp
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_uint16.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_uint16.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_uint16.rst.txt
new file mode 100644
index 0000000..ecfa203
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_uint16.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_uint16
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_uint16
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_uint32.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_uint32.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_uint32.rst.txt
new file mode 100644
index 0000000..7d0259d
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_uint32.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_uint32
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_uint32
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_uint64.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_uint64.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_uint64.rst.txt
new file mode 100644
index 0000000..529f4da
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_uint64.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_uint64
+========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_uint64
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_uint8.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_uint8.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_uint8.rst.txt
new file mode 100644
index 0000000..6a782db
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_uint8.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_uint8
+=======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_uint8
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_unicode.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_unicode.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_unicode.rst.txt
new file mode 100644
index 0000000..6031f4b
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_unicode.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_unicode
+=========================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_unicode
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_union.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_union.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_union.rst.txt
new file mode 100644
index 0000000..b278046
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_union.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_union
+=======================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_union
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.types.is_unsigned_integer.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.types.is_unsigned_integer.rst.txt b/docs/latest/_sources/python/generated/pyarrow.types.is_unsigned_integer.rst.txt
new file mode 100644
index 0000000..6e30391
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.types.is_unsigned_integer.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.types.is\_unsigned\_integer
+===================================
+
+.. currentmodule:: pyarrow.types
+
+.. autofunction:: is_unsigned_integer
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.uint16.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.uint16.rst.txt b/docs/latest/_sources/python/generated/pyarrow.uint16.rst.txt
new file mode 100644
index 0000000..b1c446f
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.uint16.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.uint16
+==============
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: uint16
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.uint32.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.uint32.rst.txt b/docs/latest/_sources/python/generated/pyarrow.uint32.rst.txt
new file mode 100644
index 0000000..2183b60
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.uint32.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.uint32
+==============
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: uint32
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.uint64.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.uint64.rst.txt b/docs/latest/_sources/python/generated/pyarrow.uint64.rst.txt
new file mode 100644
index 0000000..fc878a8
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.uint64.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.uint64
+==============
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: uint64
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.uint8.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.uint8.rst.txt b/docs/latest/_sources/python/generated/pyarrow.uint8.rst.txt
new file mode 100644
index 0000000..37a1f23
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.uint8.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.uint8
+=============
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: uint8
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.utf8.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.utf8.rst.txt b/docs/latest/_sources/python/generated/pyarrow.utf8.rst.txt
new file mode 100644
index 0000000..9d636ac
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.utf8.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.utf8
+============
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: utf8
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/generated/pyarrow.write_tensor.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/generated/pyarrow.write_tensor.rst.txt b/docs/latest/_sources/python/generated/pyarrow.write_tensor.rst.txt
new file mode 100644
index 0000000..f804c20
--- /dev/null
+++ b/docs/latest/_sources/python/generated/pyarrow.write_tensor.rst.txt
@@ -0,0 +1,6 @@
+pyarrow.write\_tensor
+=====================
+
+.. currentmodule:: pyarrow
+
+.. autofunction:: write_tensor
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/getting_involved.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/getting_involved.rst.txt b/docs/latest/_sources/python/getting_involved.rst.txt
new file mode 100644
index 0000000..7159bdf
--- /dev/null
+++ b/docs/latest/_sources/python/getting_involved.rst.txt
@@ -0,0 +1,35 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+Getting Involved
+================
+
+Right now the primary audience for Apache Arrow is developers of data
+systems; most people will use Apache Arrow indirectly through systems that use
+it for internal data handling and interoperating with other Arrow-enabled
+systems.
+
+Even if you do not plan to contribute to Apache Arrow itself or Arrow
+integrations in other projects, we'd be happy to have you involved:
+
+ * Join the mailing list: send an email to
+   `dev-subscribe@arrow.apache.org <ma...@arrow.apache.org>`_.
+   Share your ideas and use cases for the project or read through the
+   `Archive <http://mail-archives.apache.org/mod_mbox/arrow-dev/>`_.
+ * Follow our activity on `JIRA <https://issues.apache.org/jira/browse/ARROW>`_
+ * Learn the `Format / Specification
+   <https://github.com/apache/arrow/tree/master/format>`_

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/index.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/index.rst.txt b/docs/latest/_sources/python/index.rst.txt
new file mode 100644
index 0000000..cf691e3
--- /dev/null
+++ b/docs/latest/_sources/python/index.rst.txt
@@ -0,0 +1,49 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+Python bindings
+===============
+
+This is the documentation of the Python API of Apache Arrow. For more details
+on the Arrow format and other language bindings see the
+:doc:`parent documentation <../index>`.
+
+The Arrow Python bindings (also known as "PyArrow") have first-class integration
+with NumPy, pandas, and built-in Python objects. They are based on the C++
+implementation of Arrow.
+
+Here we will detail the usage of the Python API for Arrow and the leaf
+libraries that add additional functionality such as reading Apache Parquet
+files into Arrow structures.
+
+.. toctree::
+   :maxdepth: 2
+
+   install
+   memory
+   data
+   ipc
+   filesystems
+   plasma
+   numpy
+   pandas
+   csv
+   parquet
+   extending
+   api
+   development
+   getting_involved

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/install.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/install.rst.txt b/docs/latest/_sources/python/install.rst.txt
new file mode 100644
index 0000000..8092b6c
--- /dev/null
+++ b/docs/latest/_sources/python/install.rst.txt
@@ -0,0 +1,51 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+Installing PyArrow
+==================
+
+Conda
+-----
+
+To install the latest version of PyArrow from conda-forge using conda:
+
+.. code-block:: bash
+
+    conda install -c conda-forge pyarrow
+
+Pip
+---
+
+Install the latest version from PyPI (Windows, Linux, and macOS):
+
+.. code-block:: bash
+
+    pip install pyarrow
+
+If you encounter any import issues with the pip wheels on Windows, you may
+need to install the `Visual C++ Redistributable for Visual Studio 2015
+<https://www.microsoft.com/en-us/download/details.aspx?id=48145>`_.
+
+.. note::
+
+   Windows packages are only available for Python 3.5 and higher (this is also
+   true for TensorFlow and any package that is implemented with modern C++).
+
+Installing from source
+----------------------
+
+See :ref:`development`.

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/ipc.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/ipc.rst.txt b/docs/latest/_sources/python/ipc.rst.txt
new file mode 100644
index 0000000..812d843
--- /dev/null
+++ b/docs/latest/_sources/python/ipc.rst.txt
@@ -0,0 +1,383 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. currentmodule:: pyarrow
+
+.. _ipc:
+
+Streaming, Serialization, and IPC
+=================================
+
+Writing and Reading Streams
+---------------------------
+
+Arrow defines two types of binary formats for serializing record batches:
+
+* **Streaming format**: for sending an arbitrary length sequence of record
+  batches. The format must be processed from start to end, and does not support
+  random access
+
+* **File or Random Access format**: for serializing a fixed number of record
+  batches. Supports random access, and thus is very useful when used with
+  memory maps
+
+To follow this section, make sure to first read the section on :ref:`Memory and
+IO <io>`.
+
+Using streams
+~~~~~~~~~~~~~
+
+First, let's create a small record batch:
+
+.. ipython:: python
+
+   import pyarrow as pa
+
+   data = [
+       pa.array([1, 2, 3, 4]),
+       pa.array(['foo', 'bar', 'baz', None]),
+       pa.array([True, None, False, True])
+   ]
+
+   batch = pa.RecordBatch.from_arrays(data, ['f0', 'f1', 'f2'])
+   batch.num_rows
+   batch.num_columns
+
+Now, we can begin writing a stream containing some number of these batches. For
+this we use :class:`~pyarrow.RecordBatchStreamWriter`, which can write to a
+writable ``NativeFile`` object or a writable Python object:
+
+.. ipython:: python
+
+   sink = pa.BufferOutputStream()
+   writer = pa.RecordBatchStreamWriter(sink, batch.schema)
+
+Here we used an in-memory Arrow buffer stream, but this could have been a
+socket or some other IO sink.
+
+When creating the ``RecordBatchStreamWriter``, we pass the schema, since the
+schema (column names and types) must be the same for all of the batches sent
+in this particular stream. Now we can do:
+
+.. ipython:: python
+
+   for i in range(5):
+      writer.write_batch(batch)
+   writer.close()
+
+   buf = sink.getvalue()
+   buf.size
+
+Now ``buf`` contains the complete stream as an in-memory byte buffer. We can
+read such a stream with :class:`~pyarrow.RecordBatchStreamReader` or the
+convenience function ``pyarrow.ipc.open_stream``:
+
+.. ipython:: python
+
+   reader = pa.ipc.open_stream(buf)
+   reader.schema
+
+   batches = [b for b in reader]
+   len(batches)
+
+We can check the returned batches are the same as the original input:
+
+.. ipython:: python
+
+   batches[0].equals(batch)
+
+An important point is that if the input source supports zero-copy reads
+(e.g. a memory map, or ``pyarrow.BufferReader``), then the returned
+batches are also zero-copy and do not allocate any new memory on read.
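+
+For instance, here is a minimal sketch (reusing ``buf`` from above) that
+reads the same stream through a ``pyarrow.BufferReader`` instead of passing
+the buffer directly:
+
+.. code-block:: python
+
+   source = pa.BufferReader(buf)
+   reader = pa.ipc.open_stream(source)
+   # these batches are zero-copy views into ``buf``
+   batches = [b for b in reader]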
+
+Writing and Reading Random Access Files
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :class:`~pyarrow.RecordBatchFileWriter` has the same API as
+:class:`~pyarrow.RecordBatchStreamWriter`:
+
+.. ipython:: python
+
+   sink = pa.BufferOutputStream()
+   writer = pa.RecordBatchFileWriter(sink, batch.schema)
+
+   for i in range(10):
+      writer.write_batch(batch)
+   writer.close()
+
+   buf = sink.getvalue()
+   buf.size
+
+The difference between :class:`~pyarrow.RecordBatchFileReader` and
+:class:`~pyarrow.RecordBatchStreamReader` is that the input source must have a
+``seek`` method for random access. The stream reader only requires read
+operations. We can also use the ``pyarrow.ipc.open_file`` method to open a file:
+
+.. ipython:: python
+
+   reader = pa.ipc.open_file(buf)
+
+Because we have access to the entire payload, we know the number of record
+batches in the file, and can read any of them at random:
+
+.. ipython:: python
+
+   reader.num_record_batches
+   b = reader.get_batch(3)
+   b.equals(batch)
+
+Reading from Stream and File Format for pandas
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The stream and file reader classes have a special ``read_pandas`` method to
+simplify reading multiple record batches and converting them to a single
+DataFrame output:
+
+.. ipython:: python
+
+   df = pa.ipc.open_file(buf).read_pandas()
+   df[:5]
+
+Arbitrary Object Serialization
+------------------------------
+
+In ``pyarrow`` we are able to serialize and deserialize many kinds of Python
+objects. While not a complete replacement for the ``pickle`` module, these
+functions can be significantly faster, particularly when dealing with collections
+of NumPy arrays.
+
+As an example, consider a dictionary containing NumPy arrays:
+
+.. ipython:: python
+
+   import numpy as np
+
+   data = {
+       i: np.random.randn(500, 500)
+       for i in range(100)
+   }
+
+We use the ``pyarrow.serialize`` function to convert this data to a byte
+buffer:
+
+.. ipython:: python
+
+   buf = pa.serialize(data).to_buffer()
+   type(buf)
+   buf.size
+
+``pyarrow.serialize`` creates an intermediate object which can be converted to
+a buffer (the ``to_buffer`` method) or written directly to an output stream.
+
+``pyarrow.deserialize`` converts a buffer-like object back to the original
+Python object:
+
+.. ipython:: python
+
+   restored_data = pa.deserialize(buf)
+   restored_data[0]
+
+When dealing with NumPy arrays, ``pyarrow.deserialize`` can be significantly
+faster than ``pickle`` because the resulting arrays are zero-copy references
+into the input buffer. The larger the arrays, the larger the performance
+savings.
+
+Consider this example. For ``pyarrow.deserialize`` we have:
+
+.. ipython:: python
+
+   %timeit restored_data = pa.deserialize(buf)
+
+And for pickle:
+
+.. ipython:: python
+
+   import pickle
+   pickled = pickle.dumps(data)
+   %timeit unpickled_data = pickle.loads(pickled)
+
+We aspire to make these functions a high-speed alternative to pickle for
+transient serialization in Python big data applications.
+
+Serializing Custom Data Types
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If an unrecognized data type is encountered when serializing an object,
+``pyarrow`` will fall back on using ``pickle`` for converting that type to a
+byte string. There may be a more efficient way, though.
+
+Consider a class with two members, one of which is a NumPy array:
+
+.. code-block:: python
+
+   class MyData:
+       def __init__(self, name, data):
+           self.name = name
+           self.data = data
+
+We write functions to convert this to and from a dictionary with simpler types:
+
+.. code-block:: python
+
+   def _serialize_MyData(val):
+       return {'name': val.name, 'data': val.data}
+
+   def _deserialize_MyData(data):
+       return MyData(data['name'], data['data'])
+
+Then, we must register these functions in a ``SerializationContext`` so that
+``MyData`` can be recognized:
+
+.. code-block:: python
+
+   context = pa.SerializationContext()
+   context.register_type(MyData, 'MyData',
+                         custom_serializer=_serialize_MyData,
+                         custom_deserializer=_deserialize_MyData)
+
+Lastly, we use this context as an additional argument to ``pyarrow.serialize``:
+
+.. code-block:: python
+
+   buf = pa.serialize(val, context=context).to_buffer()
+   restored_val = pa.deserialize(buf, context=context)
+
+The ``SerializationContext`` also has convenience methods ``serialize`` and
+``deserialize``, so these are equivalent statements:
+
+.. code-block:: python
+
+   buf = context.serialize(val).to_buffer()
+   restored_val = context.deserialize(buf)
+
+Component-based Serialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For serializing Python objects containing some number of NumPy arrays, Arrow
+buffers, or other data types, it may be desirable to transport their serialized
+representation without having to produce an intermediate copy using the
+``to_buffer`` method. To motivate this, suppose we have a list of NumPy arrays:
+
+.. ipython:: python
+
+   import numpy as np
+   data = [np.random.randn(10, 10) for i in range(5)]
+
+The call ``pa.serialize(data)`` does not copy the memory inside each of these
+NumPy arrays. This serialized representation can then be decomposed into a
+dictionary containing a sequence of ``pyarrow.Buffer`` objects holding
+metadata for each array and references to the memory inside the arrays. To do
+this, use the ``to_components`` method:
+
+.. ipython:: python
+
+   serialized = pa.serialize(data)
+   components = serialized.to_components()
+
+The particular details of the output of ``to_components`` are not too
+important. The objects in the ``'data'`` field are ``pyarrow.Buffer`` objects,
+which are zero-copy convertible to Python ``memoryview`` objects:
+
+.. ipython:: python
+
+   memoryview(components['data'][0])
+
+A memoryview can be converted back to an Arrow ``Buffer`` with
+``pyarrow.py_buffer``:
+
+.. ipython:: python
+
+   mv = memoryview(components['data'][0])
+   buf = pa.py_buffer(mv)
+
+An object can be reconstructed from its component-based representation using
+``deserialize_components``:
+
+.. ipython:: python
+
+   restored_data = pa.deserialize_components(components)
+   restored_data[0]
+
+``deserialize_components`` is also available as a method on
+``SerializationContext`` objects.
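+
+For instance, a short sketch using the default context:
+
+.. code-block:: python
+
+   context = pa.default_serialization_context()
+   restored_data = context.deserialize_components(components)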
+
+Serializing pandas Objects
+--------------------------
+
+The default serialization context has optimized handling of pandas
+objects like ``DataFrame`` and ``Series``. Combined with component-based
+serialization above, this enables zero-copy transport of pandas DataFrame
+objects not containing any Python objects:
+
+.. ipython:: python
+
+   import pandas as pd
+   df = pd.DataFrame({'a': [1, 2, 3, 4, 5]})
+   context = pa.default_serialization_context()
+   serialized_df = context.serialize(df)
+   df_components = serialized_df.to_components()
+   original_df = context.deserialize_components(df_components)
+   original_df
+
+Feather Format
+--------------
+
+Feather is a lightweight file format for data frames that uses the Arrow memory
+layout for data representation on disk. It was created early in the Arrow
+project as a proof of concept for fast, language-agnostic data frame storage
+for Python (pandas) and R.
+
+Compared with Arrow streams and files, Feather has some limitations:
+
+* Only non-nested data types and categorical (dictionary-encoded) types are
+  supported
+* Supports only a single batch of rows, whereas general Arrow streams support an
+  arbitrary number
+* Supports limited scalar value types, adequate only for representing typical
+  data found in R and pandas
+
+We would like to continue to innovate in the Feather format, but we must wait
+for an R implementation for Arrow to mature.
+
+The ``pyarrow.feather`` module contains the read and write functions for the
+format. The input and output are ``pandas.DataFrame`` objects:
+
+.. code-block:: python
+
+   import pyarrow.feather as feather
+
+   feather.write_feather(df, '/path/to/file')
+   read_df = feather.read_feather('/path/to/file')
+
+``read_feather`` supports multithreaded reads, and may yield faster performance
+on some files:
+
+.. code-block:: python
+
+   read_df = feather.read_feather('/path/to/file', nthreads=4)
+
+These functions can read from and write to file-like objects. For example:
+
+.. code-block:: python
+
+   with open('/path/to/file', 'wb') as f:
+       feather.write_feather(df, f)
+
+   with open('/path/to/file', 'rb') as f:
+       read_df = feather.read_feather(f)
+
+A file input to ``read_feather`` must support seeking.
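+
+Any seekable file-like object should therefore work. As a sketch (assuming a
+``pandas.DataFrame`` named ``df`` as above), an in-memory ``io.BytesIO``
+buffer can stand in for a real file:
+
+.. code-block:: python
+
+   import io
+
+   bio = io.BytesIO()
+   feather.write_feather(df, bio)
+   bio.seek(0)
+   read_df = feather.read_feather(bio)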

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/memory.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/memory.rst.txt b/docs/latest/_sources/python/memory.rst.txt
new file mode 100644
index 0000000..0d30866
--- /dev/null
+++ b/docs/latest/_sources/python/memory.rst.txt
@@ -0,0 +1,284 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. currentmodule:: pyarrow
+.. _io:
+
+========================
+Memory and IO Interfaces
+========================
+
+This section will introduce you to the major concepts in PyArrow's memory
+management and IO systems:
+
+* Buffers
+* Memory pools
+* File-like and stream-like objects
+
+Referencing and Allocating Memory
+=================================
+
+pyarrow.Buffer
+--------------
+
+The :class:`Buffer` object wraps the C++ :cpp:class:`arrow::Buffer` type,
+the primary tool for memory management in the Apache Arrow C++
+implementation. It permits higher-level array classes to safely interact
+with memory which they may or may not own. ``arrow::Buffer`` can be
+zero-copy sliced to permit Buffers to cheaply
+reference other Buffers, while preserving memory lifetime and clean
+parent-child relationships.
+
+There are many implementations of ``arrow::Buffer``, but they all provide a
+standard interface: a data pointer and length. This is similar to Python's
+built-in `buffer protocol` and ``memoryview`` objects.
+
+A :class:`Buffer` can be created from any Python object implementing
+the buffer protocol by calling the :func:`py_buffer` function. Let's consider
+a bytes object:
+
+.. ipython:: python
+
+   import pyarrow as pa
+
+   data = b'abcdefghijklmnopqrstuvwxyz'
+   buf = pa.py_buffer(data)
+   buf
+   buf.size
+
+Creating a Buffer in this way does not allocate any memory; it is a zero-copy
+view on the memory exported from the ``data`` bytes object.
+
+External memory, in the form of a raw pointer and size, can also be
+referenced using the :func:`foreign_buffer` function.
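+
+As a rough sketch, using a NumPy array merely as a convenient way to obtain
+a raw pointer (the ``base`` argument keeps the owning object alive):
+
+.. code-block:: python
+
+   import numpy as np
+
+   arr = np.arange(8, dtype='uint8')
+   # zero-copy: wraps the memory owned by ``arr``
+   fbuf = pa.foreign_buffer(arr.ctypes.data, arr.nbytes, base=arr)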
+
+Buffers can be used in circumstances where a Python buffer or memoryview is
+required, and such conversions are zero-copy:
+
+.. ipython:: python
+
+   memoryview(buf)
+
+The Buffer's :meth:`~Buffer.to_pybytes` method converts the Buffer's data to a
+Python bytestring (thus making a copy of the data):
+
+.. ipython:: python
+
+   buf.to_pybytes()
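+
+The zero-copy slicing mentioned above can be sketched as follows:
+
+.. code-block:: python
+
+   # a zero-copy slice referencing bytes 2..5 of the parent buffer
+   sliced = buf.slice(2, 4)
+   sliced.to_pybytes()   # b'cdef'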
+
+Memory Pools
+------------
+
+All memory allocations and deallocations (like ``malloc`` and ``free`` in C)
+are tracked in an instance of ``arrow::MemoryPool``. This means that we can
+then precisely track the amount of memory that has been allocated:
+
+.. ipython:: python
+
+   pa.total_allocated_bytes()
+
+PyArrow uses a default built-in memory pool, but in the future there may be
+additional memory pools (and subpools) to choose from. Let's allocate
+a resizable ``Buffer`` from the default pool:
+
+.. ipython:: python
+
+   buf = pa.allocate_buffer(1024, resizable=True)
+   pa.total_allocated_bytes()
+   buf.resize(2048)
+   pa.total_allocated_bytes()
+
+The default allocator requests memory in a minimum increment of 64 bytes. If
+the buffer is garbage-collected, all of the memory is freed:
+
+.. ipython:: python
+
+   buf = None
+   pa.total_allocated_bytes()
+
+
+Input and Output
+================
+
+.. _io.native_file:
+
+The Arrow C++ libraries have several abstract interfaces for different kinds of
+IO objects:
+
+* Read-only streams
+* Read-only files supporting random access
+* Write-only streams
+* Write-only files supporting random access
+* Files supporting reads, writes, and random access
+
+In the interest of making these objects behave more like Python's built-in
+``file`` objects, we have defined a :class:`~pyarrow.NativeFile` base class
+which implements the same API as regular Python file objects.
+
+:class:`~pyarrow.NativeFile` has some important features which make it
+preferable to using Python files with PyArrow where possible:
+
+* Other Arrow classes can access the internal C++ IO objects natively, and do
+  not need to acquire the Python GIL
+* Native C++ IO may be able to do zero-copy IO, such as with memory maps
+
+There are several kinds of :class:`~pyarrow.NativeFile` implementations available:
+
+* :class:`~pyarrow.OSFile`, a native file that uses your operating system's
+  file descriptors
+* :class:`~pyarrow.MemoryMappedFile`, for reading (zero-copy) and writing with
+  memory maps
+* :class:`~pyarrow.BufferReader`, for reading :class:`~pyarrow.Buffer` objects
+  as a file
+* :class:`~pyarrow.BufferOutputStream`, for writing data in-memory, producing a
+  Buffer at the end
+* :class:`~pyarrow.FixedSizeBufferWriter`, for writing data into an already
+  allocated Buffer
+* :class:`~pyarrow.HdfsFile`, for reading and writing data to the Hadoop Filesystem
+* :class:`~pyarrow.PythonFile`, for interfacing with Python file objects in C++
+* :class:`~pyarrow.CompressedInputStream` and
+  :class:`~pyarrow.CompressedOutputStream`, for on-the-fly compression or
+  decompression to/from another stream
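+
+As a brief sketch of two of the classes listed above, one can write into a
+pre-allocated Buffer and then read it back as a file:
+
+.. code-block:: python
+
+   buf = pa.allocate_buffer(5)
+   writer = pa.FixedSizeBufferWriter(buf)
+   writer.write(b'hello')
+   writer.close()
+
+   pa.BufferReader(buf).read()   # b'hello'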
+
+There are also high-level APIs to make instantiating common kinds of streams
+easier.
+
+High-Level API
+--------------
+
+Input Streams
+~~~~~~~~~~~~~
+
+The :func:`~pyarrow.input_stream` function allows creating a readable
+:class:`~pyarrow.NativeFile` from various kinds of sources.
+
+* If passed a :class:`~pyarrow.Buffer` or a ``memoryview`` object, a
+  :class:`~pyarrow.BufferReader` will be returned:
+
+   .. ipython:: python
+
+      buf = memoryview(b"some data")
+      stream = pa.input_stream(buf)
+      stream.read(4)
+
+* If passed a string or file path, it will open the given file on disk
+  for reading, creating an :class:`~pyarrow.OSFile`.  Optionally, the file
+  can be compressed: if its filename ends with a recognized extension
+  such as ``.gz``, its contents will automatically be decompressed on
+  reading.
+
+  .. ipython:: python
+
+     import gzip
+     with gzip.open('example.gz', 'wb') as f:
+         f.write(b'some data\n' * 3)
+
+     stream = pa.input_stream('example.gz')
+     stream.read()
+
+* If passed a Python file object, it will be wrapped in a :class:`PythonFile`
+  such that the Arrow C++ libraries can read data from it (at the expense
+  of a slight overhead).
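+
+  For instance, a sketch with an in-memory file object:
+
+  .. code-block:: python
+
+     import io
+
+     stream = pa.input_stream(io.BytesIO(b'some file-like data'))
+     stream.read(4)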
+
+Output Streams
+~~~~~~~~~~~~~~
+
+:func:`~pyarrow.output_stream` is the equivalent function for output streams
+and allows creating a writable :class:`~pyarrow.NativeFile`.  It has the same
+features as explained above for :func:`~pyarrow.input_stream`, such as being
+able to write to buffers or do on-the-fly compression.
+
+.. ipython:: python
+
+   with pa.output_stream('example1.dat') as stream:
+       stream.write(b'some data')
+
+   f = open('example1.dat', 'rb')
+   f.read()
+
+
+On-Disk and Memory Mapped Files
+-------------------------------
+
+PyArrow includes two ways to interact with data on disk: standard operating
+system-level file APIs, and memory-mapped files. In regular Python we can
+write:
+
+.. ipython:: python
+
+   with open('example2.dat', 'wb') as f:
+       f.write(b'some example data')
+
+Using pyarrow's :class:`~pyarrow.OSFile` class, you can write:
+
+.. ipython:: python
+
+   with pa.OSFile('example3.dat', 'wb') as f:
+       f.write(b'some example data')
+
+For reading files, you can use :class:`~pyarrow.OSFile` or
+:class:`~pyarrow.MemoryMappedFile`. The difference between these is that
+:class:`~pyarrow.OSFile` allocates new memory on each read, like Python file
+objects. When reading from a memory map, the library constructs a buffer
+referencing the mapped memory without any memory allocation or copying:
+
+.. ipython:: python
+
+   file_obj = pa.OSFile('example2.dat')
+   mmap = pa.memory_map('example3.dat')
+   file_obj.read(4)
+   mmap.read(4)
+
+The ``read`` method implements the standard Python file ``read`` API. To read
+into Arrow Buffer objects, use ``read_buffer``:
+
+.. ipython:: python
+
+   mmap.seek(0)
+   buf = mmap.read_buffer(4)
+   print(buf)
+   buf.to_pybytes()
+
+Many tools in PyArrow, particularly the Apache Parquet interface and the file
+and stream messaging tools, are more efficient when used with these
+``NativeFile`` types than with normal Python file objects.
+
+.. ipython:: python
+   :suppress:
+
+   buf = mmap = file_obj = None
+   !rm example.gz
+   !rm example1.dat
+   !rm example2.dat
+   !rm example3.dat
+
+In-Memory Reading and Writing
+-----------------------------
+
+To assist with serialization and deserialization of in-memory data, we have
+file interfaces that can read and write to Arrow Buffers.
+
+.. ipython:: python
+
+   writer = pa.BufferOutputStream()
+   writer.write(b'hello, friends')
+
+   buf = writer.getvalue()
+   buf
+   buf.size
+   reader = pa.BufferReader(buf)
+   reader.seek(7)
+   reader.read(7)
+
+These have similar semantics to Python's built-in ``io.BytesIO``.

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/numpy.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/numpy.rst.txt b/docs/latest/_sources/python/numpy.rst.txt
new file mode 100644
index 0000000..870f9cb
--- /dev/null
+++ b/docs/latest/_sources/python/numpy.rst.txt
@@ -0,0 +1,75 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _numpy_interop:
+
+NumPy Integration
+=================
+
+PyArrow allows converting back and forth between
+`NumPy <https://www.numpy.org/>`_ arrays and Arrow :ref:`Arrays <data.array>`.
+
+NumPy to Arrow
+--------------
+
+To convert a NumPy array to Arrow, one can simply call the :func:`pyarrow.array`
+factory function.
+
+.. code-block:: pycon
+
+   >>> import numpy as np
+   >>> import pyarrow as pa
+   >>> data = np.arange(10, dtype='int16')
+   >>> arr = pa.array(data)
+   >>> arr
+   <pyarrow.lib.Int16Array object at 0x7fb1d1e6ae58>
+   [
+     0,
+     1,
+     2,
+     3,
+     4,
+     5,
+     6,
+     7,
+     8,
+     9
+   ]
+
+Converting from NumPy supports a wide range of input dtypes, including
+structured dtypes and strings.
+
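+For example, a minimal sketch converting an object array of Python strings
+(other dtypes follow the same pattern):
+
+.. code-block:: pycon
+
+   >>> import numpy as np
+   >>> import pyarrow as pa
+   >>> arr = pa.array(np.array(['foo', 'bar'], dtype=object))
+   >>> arr.type
+   DataType(string)
+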
+Arrow to NumPy
+--------------
+
+In the reverse direction, it is possible to produce a view of an Arrow Array
+for use with NumPy using the :meth:`~pyarrow.Array.to_numpy` method.
+This is limited to primitive types for which NumPy has the same physical
+representation as Arrow, and only works if the Arrow data has no nulls.
+
+.. code-block:: pycon
+
+   >>> import numpy as np
+   >>> import pyarrow as pa
+   >>> arr = pa.array([4, 5, 6], type=pa.int32())
+   >>> view = arr.to_numpy()
+   >>> view
+   array([4, 5, 6], dtype=int32)
+
+For more complex data types, you have to use the :meth:`~pyarrow.Array.to_pandas`
+method, which will construct a NumPy array with pandas semantics (e.g., for the
+representation of null values), as in the sketch below.
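+
+A minimal sketch, assuming a float array containing a null:
+
+.. code-block:: pycon
+
+   >>> arr = pa.array([1.5, None, 3.0])
+   >>> arr.to_pandas()
+   0    1.5
+   1    NaN
+   2    3.0
+   dtype: float64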

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/pandas.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/pandas.rst.txt b/docs/latest/_sources/python/pandas.rst.txt
new file mode 100644
index 0000000..16b4ff6
--- /dev/null
+++ b/docs/latest/_sources/python/pandas.rst.txt
@@ -0,0 +1,124 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _pandas_interop:
+
+Pandas Integration
+==================
+
+To interface with `pandas <https://pandas.pydata.org/>`_, PyArrow provides
+various conversion routines to consume pandas structures and convert back
+to them.
+
+.. note::
+   While pandas uses NumPy as a backend, it has enough peculiarities
+   (such as a different type system, and support for null values) that this
+   is a separate topic from :ref:`numpy_interop`.
+
+DataFrames
+----------
+
+The equivalent to a pandas DataFrame in Arrow is a :ref:`Table <data.table>`.
+Both consist of a set of named columns of equal length. While pandas only
+supports flat columns, the Table also provides nested columns; thus it can
+represent more data than a DataFrame, and a full conversion is not always
+possible.
+
+Conversion from a Table to a DataFrame is done by calling
+:meth:`pyarrow.Table.to_pandas`. The inverse is then achieved by using
+:meth:`pyarrow.Table.from_pandas`.
+
+.. code-block:: python
+
+    import pyarrow as pa
+    import pandas as pd
+
+    df = pd.DataFrame({"a": [1, 2, 3]})
+    # Convert from pandas to Arrow
+    table = pa.Table.from_pandas(df)
+    # Convert back to pandas
+    df_new = table.to_pandas()
+
+    # Infer Arrow schema from pandas
+    schema = pa.Schema.from_pandas(df)
+
+Series
+------
+
+In Arrow, the most similar structure to a pandas Series is an Array.
+It is a vector that contains data of a single type in linear memory. You can
+convert a pandas Series to an Arrow Array using :meth:`pyarrow.Array.from_pandas`.
+As Arrow Arrays are always nullable, you can supply an optional mask using
+the ``mask`` parameter to mark all null entries, as in the sketch below.
+
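+A minimal sketch (the ``mask`` values here are illustrative):
+
+.. code-block:: python
+
+    import numpy as np
+    import pandas as pd
+    import pyarrow as pa
+
+    s = pd.Series([1.0, 2.0, 3.0])
+    mask = np.array([False, True, False])     # True marks a null entry
+    arr = pa.Array.from_pandas(s, mask=mask)  # -> [1, null, 3]
+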
+Type differences
+----------------
+
+With the current design of pandas and Arrow, it is not possible to convert all
+column types unmodified. One of the main issues here is that pandas has no
+support for nullable columns of arbitrary type. Also, ``datetime64`` is currently
+fixed to nanosecond resolution. On the other hand, Arrow may still be missing
+support for some types.
+
+pandas -> Arrow Conversion
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
++------------------------+--------------------------+
+| Source Type (pandas)   | Destination Type (Arrow) |
++========================+==========================+
+| ``bool``               | ``BOOL``                 |
++------------------------+--------------------------+
+| ``(u)int{8,16,32,64}`` | ``(U)INT{8,16,32,64}``   |
++------------------------+--------------------------+
+| ``float32``            | ``FLOAT``                |
++------------------------+--------------------------+
+| ``float64``            | ``DOUBLE``               |
++------------------------+--------------------------+
+| ``str`` / ``unicode``  | ``STRING``               |
++------------------------+--------------------------+
+| ``pd.Categorical``     | ``DICTIONARY``           |
++------------------------+--------------------------+
+| ``pd.Timestamp``       | ``TIMESTAMP(unit=ns)``   |
++------------------------+--------------------------+
+| ``datetime.date``      | ``DATE``                 |
++------------------------+--------------------------+
+
+Arrow -> pandas Conversion
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
++-------------------------------------+--------------------------------------------------------+
+| Source Type (Arrow)                 | Destination Type (pandas)                              |
++=====================================+========================================================+
+| ``BOOL``                            | ``bool``                                               |
++-------------------------------------+--------------------------------------------------------+
+| ``BOOL`` *with nulls*               | ``object`` (with values ``True``, ``False``, ``None``) |
++-------------------------------------+--------------------------------------------------------+
+| ``(U)INT{8,16,32,64}``              | ``(u)int{8,16,32,64}``                                 |
++-------------------------------------+--------------------------------------------------------+
+| ``(U)INT{8,16,32,64}`` *with nulls* | ``float64``                                            |
++-------------------------------------+--------------------------------------------------------+
+| ``FLOAT``                           | ``float32``                                            |
++-------------------------------------+--------------------------------------------------------+
+| ``DOUBLE``                          | ``float64``                                            |
++-------------------------------------+--------------------------------------------------------+
+| ``STRING``                          | ``str``                                                |
++-------------------------------------+--------------------------------------------------------+
+| ``DICTIONARY``                      | ``pd.Categorical``                                     |
++-------------------------------------+--------------------------------------------------------+
+| ``TIMESTAMP(unit=*)``               | ``pd.Timestamp`` (``np.datetime64[ns]``)               |
++-------------------------------------+--------------------------------------------------------+
+| ``DATE``                            | ``pd.Timestamp`` (``np.datetime64[ns]``)               |
++-------------------------------------+--------------------------------------------------------+

http://git-wip-us.apache.org/repos/asf/arrow-site/blob/62ef7145/docs/latest/_sources/python/parquet.rst.txt
----------------------------------------------------------------------
diff --git a/docs/latest/_sources/python/parquet.rst.txt b/docs/latest/_sources/python/parquet.rst.txt
new file mode 100644
index 0000000..5422ebe
--- /dev/null
+++ b/docs/latest/_sources/python/parquet.rst.txt
@@ -0,0 +1,402 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. currentmodule:: pyarrow
+.. _parquet:
+
+Reading and Writing the Apache Parquet Format
+=============================================
+
+The `Apache Parquet <http://parquet.apache.org/>`_ project provides a
+standardized open-source columnar storage format for use in data analysis
+systems. It was created originally for use in `Apache Hadoop
+<http://hadoop.apache.org/>`_ with systems like `Apache Drill
+<http://drill.apache.org>`_, `Apache Hive <http://hive.apache.org>`_, `Apache
+Impala (incubating) <http://impala.apache.org>`_, and `Apache Spark
+<http://spark.apache.org>`_ adopting it as a shared standard for high
+performance data IO.
+
+Apache Arrow is an ideal in-memory transport layer for data that is being read
+or written with Parquet files. We have been concurrently developing the `C++
+implementation of Apache Parquet <http://github.com/apache/parquet-cpp>`_,
+which includes a native, multithreaded C++ adapter to and from in-memory Arrow
+data. PyArrow includes Python bindings to this code, which thus enables reading
+and writing Parquet files with pandas as well.
+
+Obtaining PyArrow with Parquet Support
+--------------------------------------
+
+If you installed ``pyarrow`` with pip or conda, it should be built with Parquet
+support bundled:
+
+.. ipython:: python
+
+   import pyarrow.parquet as pq
+
+If you are building ``pyarrow`` from source, you must use
+``-DARROW_PARQUET=ON`` when compiling the C++ libraries and enable the Parquet
+extensions when building ``pyarrow``. See the :ref:`Development <development>`
+page for more details.
+
+Reading and Writing Single Files
+--------------------------------
+
+The functions :func:`~.parquet.read_table` and :func:`~.parquet.write_table`
+read and write :ref:`pyarrow.Table <data.table>` objects, respectively.
+
+Let's look at a simple table:
+
+.. ipython:: python
+
+   import numpy as np
+   import pandas as pd
+   import pyarrow as pa
+
+   df = pd.DataFrame({'one': [-1, np.nan, 2.5],
+                      'two': ['foo', 'bar', 'baz'],
+                      'three': [True, False, True]},
+                      index=list('abc'))
+   table = pa.Table.from_pandas(df)
+
+We write this to Parquet format with ``write_table``:
+
+.. ipython:: python
+
+   import pyarrow.parquet as pq
+   pq.write_table(table, 'example.parquet')
+
+This creates a single Parquet file. In practice, a Parquet dataset may consist
+of many files in many directories. We can read a single file back with
+``read_table``:
+
+.. ipython:: python
+
+   table2 = pq.read_table('example.parquet')
+   table2.to_pandas()
+
+You can pass a subset of columns to read, which can be much faster than reading
+the whole file (due to the columnar layout):
+
+.. ipython:: python
+
+   pq.read_table('example.parquet', columns=['one', 'three'])
+
+When reading a subset of columns from a file that used a pandas DataFrame as
+the source, we use ``read_pandas`` to maintain any additional index column data:
+
+.. ipython:: python
+
+   pq.read_pandas('example.parquet', columns=['two']).to_pandas()
+
+We need not use a string to specify the origin of the file. It can be any of:
+
+* A file path as a string
+* A :ref:`NativeFile <io.native_file>` from PyArrow
+* A Python file object
+
+In general, a Python file object will have the worst read performance, while a
+string file path or an instance of :class:`~.NativeFile` (especially memory
+maps) will perform the best.
+
+Omitting the DataFrame index
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When using ``pa.Table.from_pandas`` to convert to an Arrow table, by default
+one or more special columns are added to keep track of the index (row
+labels). Storing the index takes extra space, so if your index is not valuable,
+you may choose to omit it by passing ``preserve_index=False``:
+
+.. ipython:: python
+
+   df = pd.DataFrame({'one': [-1, np.nan, 2.5],
+                      'two': ['foo', 'bar', 'baz'],
+                      'three': [True, False, True]},
+                      index=list('abc'))
+   df
+   table = pa.Table.from_pandas(df, preserve_index=False)
+
+Then we have:
+
+.. ipython:: python
+
+   pq.write_table(table, 'example_noindex.parquet')
+   t = pq.read_table('example_noindex.parquet')
+   t.to_pandas()
+
+Here you see the index did not survive the round trip.
+
+Finer-grained Reading and Writing
+---------------------------------
+
+``read_table`` uses the :class:`~.ParquetFile` class, which has other features:
+
+.. ipython:: python
+
+   parquet_file = pq.ParquetFile('example.parquet')
+   parquet_file.metadata
+   parquet_file.schema
+
+As described in the `Apache Parquet format
+<https://github.com/apache/parquet-format>`_ specification, a Parquet file
+consists of multiple row groups. ``read_table`` will read all of the row
+groups and concatenate them into a single table. You can read individual row
+groups with ``read_row_group``:
+
+.. ipython:: python
+
+   parquet_file.num_row_groups
+   parquet_file.read_row_group(0)
+
+We can similarly write a Parquet file with multiple row groups by using
+``ParquetWriter``:
+
+.. ipython:: python
+
+   writer = pq.ParquetWriter('example2.parquet', table.schema)
+   for i in range(3):
+       writer.write_table(table)
+   writer.close()
+
+   pf2 = pq.ParquetFile('example2.parquet')
+   pf2.num_row_groups
+
+Alternatively, Python's ``with`` syntax can also be used:
+
+.. ipython:: python
+
+   with pq.ParquetWriter('example3.parquet', table.schema) as writer:
+       for i in range(3):
+           writer.write_table(table)
+
+.. ipython:: python
+   :suppress:
+
+   !rm example.parquet
+   !rm example_noindex.parquet
+   !rm example2.parquet
+   !rm example3.parquet
+
+Data Type Handling
+------------------
+
+Storing timestamps
+~~~~~~~~~~~~~~~~~~
+
+Some Parquet readers may only support timestamps stored in millisecond
+(``'ms'``) or microsecond (``'us'``) resolution. Since pandas uses nanoseconds
+to represent timestamps, this can occasionally be a nuisance. We provide the
+``coerce_timestamps`` option to allow you to select the desired resolution:
+
+.. code-block:: python
+
+   pq.write_table(table, where, coerce_timestamps='ms')
+
+If a cast to a lower resolution would result in a loss of data, an exception
+is raised by default. This can be suppressed by passing
+``allow_truncated_timestamps=True``:
+
+.. code-block:: python
+
+   pq.write_table(table, where, coerce_timestamps='ms',
+                  allow_truncated_timestamps=True)
+
+Compression, Encoding, and File Compatibility
+---------------------------------------------
+
+The most commonly used Parquet implementations use dictionary encoding when
+writing files; if the dictionaries grow too large, then they "fall back" to
+plain encoding. Whether dictionary encoding is used can be toggled using the
+``use_dictionary`` option:
+
+.. code-block:: python
+
+   pq.write_table(table, where, use_dictionary=False)
+
+The data pages within a column in a row group can be compressed after the
+encoding passes (dictionary, RLE encoding). In PyArrow we use Snappy
+compression by default, but Brotli, Gzip, and uncompressed are also supported:
+
+.. code-block:: python
+
+   pq.write_table(table, where, compression='snappy')
+   pq.write_table(table, where, compression='gzip')
+   pq.write_table(table, where, compression='brotli')
+   pq.write_table(table, where, compression='none')
+
+Snappy generally results in better performance, while Gzip may yield smaller
+files.
+
+These settings can also be set on a per-column basis:
+
+.. code-block:: python
+
+   pq.write_table(table, where, compression={'foo': 'snappy', 'bar': 'gzip'},
+                  use_dictionary=['foo', 'bar'])
+
+Partitioned Datasets (Multiple Files)
+------------------------------------------------
+
+Multiple Parquet files constitute a Parquet *dataset*. These may be presented
+in a number of ways:
+
+* A list of Parquet absolute file paths
+* A directory name containing nested directories defining a partitioned dataset
+
+A dataset partitioned by year and month may look like this on disk:
+
+.. code-block:: text
+
+   dataset_name/
+     year=2007/
+       month=01/
+          0.parq
+          1.parq
+          ...
+       month=02/
+          0.parq
+          1.parq
+          ...
+       month=03/
+       ...
+     year=2008/
+       month=01/
+       ...
+     ...
+
+Writing to Partitioned Datasets
+------------------------------------------------
+
+You can write a partitioned dataset for any ``pyarrow`` file system that is a
+file-store (e.g. local, HDFS, S3). The default behaviour when no filesystem is
+passed is to use the local filesystem.
+
+.. code-block:: python
+
+   # Local dataset write
+   pq.write_to_dataset(table, root_path='dataset_name',
+                       partition_cols=['one', 'two'])
+
+The root path in this case specifies the parent directory to which data will be
+saved. The partition columns are the column names by which to partition the
+dataset. Columns are partitioned in the order they are given. The partition
+splits are determined by the unique values in the partition columns.
+
+To use another filesystem, you only need to add the ``filesystem`` parameter;
+the individual table writes are wrapped using ``with`` statements internally,
+so the ``pq.write_to_dataset`` call does not need to be.
+
+.. code-block:: python
+
+   # Remote file-system example
+   fs = pa.hdfs.connect(host, port, user=user, kerb_ticket=ticket_cache_path)
+   pq.write_to_dataset(table, root_path='dataset_name',
+                       partition_cols=['one', 'two'], filesystem=fs)
+
+Compatibility Note: if you use ``pq.write_to_dataset`` to create a table that
+will then be used by Hive, partition column values must be compatible with
+the allowed character set of the Hive version you are running.
+
+Reading from Partitioned Datasets
+------------------------------------------------
+
+The :class:`~.ParquetDataset` class accepts either a directory name or a list
+of file paths, and can discover and infer some common partition structures,
+such as those produced by Hive:
+
+.. code-block:: python
+
+   dataset = pq.ParquetDataset('dataset_name/')
+   table = dataset.read()
+
+You can also use the convenience function ``read_table`` exposed by
+``pyarrow.parquet``, which avoids the need to create an additional Dataset
+object:
+
+.. code-block:: python
+
+   table = pq.read_table('dataset_name')
+
+Note: the partition columns in the original table will have their types
+converted to Arrow dictionary types (pandas categorical) on load. Ordering of
+partition columns is not preserved through the save/load process. If reading
+from a remote filesystem into a pandas DataFrame, you may need to run
+``sort_index`` to maintain row ordering (as long as the ``preserve_index``
+option was enabled on write), as in the sketch below.
+
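+A minimal sketch, reusing the hypothetical ``dataset_name`` directory from
+above:
+
+.. code-block:: python
+
+   df = pq.read_table('dataset_name').to_pandas().sort_index()
+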
+Using with Spark
+----------------
+
+Spark places some constraints on the types of Parquet files it will read. The
+option ``flavor='spark'`` will set these options automatically and also
+sanitize field characters unsupported by Spark SQL.
+
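+A minimal sketch (``where`` stands in for any output path, as in the earlier
+examples):
+
+.. code-block:: python
+
+   pq.write_table(table, where, flavor='spark')
+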
+Multithreaded Reads
+-------------------
+
+Each of the reading functions has an ``nthreads`` argument which will read
+columns with the indicated level of parallelism. Depending on the speed of IO
+and how expensive it is to decode the columns in a particular file
+(particularly with GZIP compression), this can yield significantly higher data
+throughput:
+
+.. code-block:: python
+
+   pq.read_table(where, nthreads=4)
+   pq.ParquetDataset(where).read(nthreads=4)
+
+Reading a Parquet File from Azure Blob storage
+----------------------------------------------
+
+The code below shows how to use Azure's storage SDK along with pyarrow to read
+a Parquet file into a pandas DataFrame.
+This is suitable for executing inside a Jupyter notebook running on a Python 3
+kernel.
+
+Dependencies:
+
+* python 3.6.2
+* azure-storage 0.36.0
+* pyarrow 0.8.0
+
+.. code-block:: python
+
+   import pyarrow.parquet as pq
+   from io import BytesIO
+   from azure.storage.blob import BlockBlobService
+
+   account_name = '...'
+   account_key = '...'
+   container_name = '...'
+   parquet_file = 'mysample.parquet'
+
+   byte_stream = BytesIO()
+   block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
+   try:
+      block_blob_service.get_blob_to_stream(container_name=container_name, blob_name=parquet_file, stream=byte_stream)
+      df = pq.read_table(source=byte_stream).to_pandas()
+      # Do work on df ...
+   finally:
+      # Add finally block to ensure closure of the stream
+      byte_stream.close()
+
+Notes:
+
+* The ``account_key`` can be found under ``Settings -> Access keys`` in the
+  Microsoft Azure portal for a given container
+* The code above works for a container with private access, Lease State =
+  Available, and Lease Status = Unlocked
+* The Parquet file used Blob Type = Block blob