Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2021/08/30 08:28:20 UTC

[GitHub] [arrow] jorisvandenbossche commented on a change in pull request #10999: ARROW-13404: [Doc][Python] Improve PyArrow documentation for new users

jorisvandenbossche commented on a change in pull request #10999:
URL: https://github.com/apache/arrow/pull/10999#discussion_r698272830



##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.

Review comment:
       ```suggestion
   :ref:`parquet`, :ref:`ipc` (:ref:`feather`), :ref:`csv` or :ref:`json` formats.
   ```
   
    ? (there might be quite a few people who know Feather instead, and it's actually still a bit more convenient to use)
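
    For reference, a minimal Feather round trip could look like the sketch below (the file name and example table are made up for illustration, not part of the PR):

    ```python
    import pyarrow as pa
    import pyarrow.feather as feather

    # A tiny example table; any pyarrow.Table works here
    table = pa.table({"years": [1990, 2000, 1995]})

    # Feather (V2) is the Arrow IPC file format, so this is a thin convenience layer
    feather.write_feather(table, "birthdays.feather")
    reloaded = feather.read_table("birthdays.feather")
    ```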

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.
+
+Performing Computations
+-----------------------
+
+Arrow ships with a bunch of compute functions that can be applied
+to its arrays, so through the compute functions it's possible to apply
+transformations to the data
+
+.. ipython:: python
+
+    import pyarrow.compute as pc
+
+    pc.value_counts(birthdays_table["years"])
+
+See :ref:`compute` for a list of available compute functions and
+how to use them.
+
+Working with big data

Review comment:
    I would personally avoid the term "Big Data" (although maybe you didn't mean "Big Data" with capitals here :), it's still what many people will read)

    Not immediately sure what would be a good alternative. "Working with datasets" is probably not specific enough. "Working with partitioned datasets"? (or "Working with larger, partitioned datasets" ..)

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.
+
+Performing Computations
+-----------------------
+
+Arrow ships with a bunch of compute functions that can be applied
+to its arrays, so through the compute functions it's possible to apply
+transformations to the data
+
+.. ipython:: python
+
+    import pyarrow.compute as pc
+
+    pc.value_counts(birthdays_table["years"])
+
+See :ref:`compute` for a list of available compute functions and
+how to use them.
+
+Working with big data
+---------------------
+
+Arrow also provides the :class:`pyarrow.dataset` api to work with
+big data, which will handle for you partitioning of your data in
+smaller chunks
+
+.. ipython:: python
+
+    import pyarrow.dataset as ds
+
+    ds.write_dataset(birthdays_table, "savedir", format="parquet", 
+                     partitioning=ds.partitioning(
+                        pa.schema([birthdays_table.schema.field("years")])
+                    ))
+
+Loading back the partitioned dataset will detect the chunks
+
+.. ipython:: python
+
+    birthdays_dataset = ds.dataset("savedir", schema=birthdays_table.schema,
+                                   partitioning=ds.partitioning(field_names=["years"]))
+
+    birthdays_dataset.files
+
+and will lazily load chunks of data only when iterating over them
+
+.. ipython:: python
+
+    import datetime
+
+    current_year = datetime.datetime.utcnow().year
+    for table_chunk in birthdays_dataset.to_batches():
+        print("AGES", pc.abs(pc.subtract(table_chunk["years"], current_year)))
+
+For further details on how to work with big datasets, how to filter them,
+how to project them etc... refer to :ref:`dataset` documentation.

Review comment:
       ```suggestion
   how to project them, etc., refer to :ref:`dataset` documentation.
   ```

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.
+
+Performing Computations
+-----------------------
+
+Arrow ships with a bunch of compute functions that can be applied
+to its arrays, so through the compute functions it's possible to apply
+transformations to the data
+
+.. ipython:: python
+
+    import pyarrow.compute as pc
+
+    pc.value_counts(birthdays_table["years"])
+
+See :ref:`compute` for a list of available compute functions and
+how to use them.
+
+Working with big data
+---------------------
+
+Arrow also provides the :class:`pyarrow.dataset` api to work with
+big data, which will handle for you partitioning of your data in
+smaller chunks
+
+.. ipython:: python
+
+    import pyarrow.dataset as ds
+
+    ds.write_dataset(birthdays_table, "savedir", format="parquet", 
+                     partitioning=ds.partitioning(
+                        pa.schema([birthdays_table.schema.field("years")])
+                    ))
+
+Loading back the partitioned dataset will detect the chunks
+
+.. ipython:: python
+
+    birthdays_dataset = ds.dataset("savedir", schema=birthdays_table.schema,
+                                   partitioning=ds.partitioning(field_names=["years"]))

Review comment:
       ```suggestion
       birthdays_dataset = ds.dataset("savedir", format="parquet", partitioning=["years"])
   ```
   
    I would explicitly pass the format, because we don't actually "infer" it (this just happens to work because "parquet" is the default format, but it won't work if a different format was used for writing).
   The schema is not really needed in this case, I think.
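
    Something like the following is what I had in mind (a rough sketch, untested here; the example table and directory name are made up):

    ```python
    import pyarrow as pa
    import pyarrow.dataset as ds

    table = pa.table({"days": pa.array([1, 12, 17], type=pa.int8()),
                      "years": pa.array([1990, 2000, 1995], type=pa.int16())})

    # Write partitioned by "years", as in the doc example
    ds.write_dataset(table, "savedir", format="parquet",
                     partitioning=ds.partitioning(pa.schema([("years", pa.int16())])))

    # Read back: pass the format explicitly (it is not inferred); a list of
    # field names is enough for the partitioning, it gets discovered from the
    # directory names
    dataset = ds.dataset("savedir", format="parquet", partitioning=["years"])
    dataset.files
    ```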
   

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular

Review comment:
       ```suggestion
   Arrow also provides support for various formats to get those tabular
   ```

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns

Review comment:
       ```suggestion
   Multiple arrays can be combined in tables to form the columns
   ```

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.
+
+Performing Computations
+-----------------------
+
+Arrow ships with a bunch of compute functions that can be applied
+to its arrays, so through the compute functions it's possible to apply
+transformations to the data
+
+.. ipython:: python
+
+    import pyarrow.compute as pc
+
+    pc.value_counts(birthdays_table["years"])
+
+See :ref:`compute` for a list of available compute functions and
+how to use them.
+
+Working with big data
+---------------------
+
+Arrow also provides the :class:`pyarrow.dataset` api to work with
+big data, which will handle for you partitioning of your data in
+smaller chunks
+
+.. ipython:: python
+
+    import pyarrow.dataset as ds
+
+    ds.write_dataset(birthdays_table, "savedir", format="parquet", 
+                     partitioning=ds.partitioning(
+                        pa.schema([birthdays_table.schema.field("years")])
+                    ))
+
+Loading back the partitioned dataset will detect the chunks
+
+.. ipython:: python
+
+    birthdays_dataset = ds.dataset("savedir", schema=birthdays_table.schema,
+                                   partitioning=ds.partitioning(field_names=["years"]))
+
+    birthdays_dataset.files
+
+and will lazily load chunks of data only when iterating over them
+
+.. ipython:: python
+
+    import datetime
+
+    current_year = datetime.datetime.utcnow().year
+    for table_chunk in birthdays_dataset.to_batches():
+        print("AGES", pc.abs(pc.subtract(table_chunk["years"], current_year)))

Review comment:
       ```suggestion
           print("AGES", pc.subtract(current_year, table_chunk["years"]))
   ```
   
   (didn't test!)
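
    FWIW, a quick standalone sketch of that expression with made-up values (still not tested against the actual docs build):

    ```python
    import datetime
    import pyarrow as pa
    import pyarrow.compute as pc

    years = pa.array([1990, 2000, 1995], type=pa.int16())
    current_year = datetime.datetime.utcnow().year

    # Subtracting the column from the scalar gives the ages directly,
    # without needing the extra pc.abs() call
    pc.subtract(current_year, years)
    ```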

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.
+
+Performing Computations
+-----------------------
+
+Arrow ships with a bunch of compute functions that can be applied
+to its arrays, so through the compute functions it's possible to apply

Review comment:
       Not only arrays but also tables (depending on the compute kernel)
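
    For example (a small sketch with made-up data; which inputs are accepted depends on the kernel):

    ```python
    import pyarrow as pa
    import pyarrow.compute as pc

    table = pa.table({"years": [1990, 2000, 1995, 2000]})

    # Works on a ChunkedArray, i.e. a table column
    pc.value_counts(table["years"])

    # Some kernels also accept a whole Table, e.g. take
    pc.take(table, pa.array([0, 2]))
    ```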

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet

Review comment:
       ```suggestion
   Once you have tabular data, Arrow provides out of the box
   the features to save and restore that data for common formats
   like Parquet:
   ```
   
   Also, "Arrow provides out of the box the features" reads a bit strange I think. Maybe just something like "Arrow provides the functionality to save ..."

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)

Review comment:
       In this case, it's actually easier to only provide the field names `names=["days", "months", "years"]` (it's not needed to create a schema manually if you already have arrays with a type). 
   
   Unless it's to illustrate that a table consists of a list of column arrays according to a certain schema. But maybe for that you could also show the `table.schema` afterwards.
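
    i.e. a sketch of the simpler spelling, with `table.schema` shown afterwards to keep the schema concept visible:

    ```python
    import pyarrow as pa

    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())

    # The column types are taken from the arrays themselves
    birthdays_table = pa.table([days, months, years],
                               names=["days", "months", "years"])
    birthdays_table.schema
    ```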

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.
+
+Performing Computations
+-----------------------
+
+Arrow ships with a bunch of compute functions that can be applied
+to its arrays, so through the compute functions it's possible to apply
+transformations to the data
+
+.. ipython:: python
+
+    import pyarrow.compute as pc
+
+    pc.value_counts(birthdays_table["years"])
+
+See :ref:`compute` for a list of available compute functions and

Review comment:
    This links to the Python page, which doesn't actually have a list of them ... (not sure if directly linking to the C++ ones is better, though; it's just not ideal ;))

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,
+and Arrow is heavily optimized for memory and speed so loading
+data will be as quick as possible
+
+.. ipython:: python
+
+    reloaded_birthdays = pq.read_table('birthdays.parquet')
+
+    reloaded_birthdays
+
+Saving and loading back data in arrow is usually done through
+:ref:`parquet`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.

Review comment:
       ```suggestion
   :ref:`Parquet <parquet>`, :ref:`ipc`, :ref:`csv` or :ref:`json` formats.
   ```
   etc.
   
   Otherwise I _think_ that those links get replaced with the full title of each of those pages? 

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and
+a type
+
+.. ipython:: python
+
+    import pyarrow as pa
+
+    days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
+
+multiple arrays can be combined in tables to form the columns
+in tabular data according to a provided schema
+
+.. ipython:: python
+
+    months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
+    years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
+
+    birthdays_table = pa.table([days, months, years], 
+                               schema=pa.schema([
+                                    ('days', days.type),
+                                    ('months', months.type),
+                                    ('years', years.type)
+                               ]))
+    
+    birthdays_table
+
+See :ref:`data` for more details.
+
+Saving and Loading Tables
+-------------------------
+
+Once you have a tabular data, Arrow provides out of the box
+the features to save and restore that data for common formats
+like parquet
+
+.. ipython:: python   
+
+    import pyarrow.parquet as pq
+
+    pq.write_table(birthdays_table, 'birthdays.parquet')
+
+Once you have your data on disk, loading it back is as easy,

Review comment:
    In general it's good to avoid terms such as "easy" and "just" in explanations, according to best practices (not that I always do that!)

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and
+perform computation of it. So each array is meant to have data and

Review comment:
       ```suggestion
   perform computations on it. So each array is meant to have data and
   ```

##########
File path: docs/source/python/index.rst
##########
@@ -15,12 +15,17 @@
 .. specific language governing permissions and limitations
 .. under the License.
 
-Python bindings
-===============
+PyArrow - Apache Arrow Python bindings
+======================================
 
 This is the documentation of the Python API of Apache Arrow. For more details
-on the Arrow format and other language bindings see the
-:doc:`parent documentation <../index>`.
+on the Arrow format and other language bindings 

Review comment:
    is something missing here?

##########
File path: docs/source/python/getstarted.rst
##########
@@ -0,0 +1,149 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+.. _getstarted:
+
+Getting Started
+===============
+
+Arrow manages data in Arrays (:class:`pyarrow.Array`), which can be
+grouped in tables (:class:`pyarrow.Table`) to represent columns of data
+in tabular data.
+
+Arrow also exposes supports for various formats to get those tabular
+data in and out of disk and networks. Most commonly used formats are
+Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`). 
+
+Creating Arrays and Tables
+--------------------------
+
+Arrays in Arrow are collections of data of uniform type. That allows
+arrow to use the best performing implementation to store the data and

Review comment:
       ```suggestion
   Arrow to use the best performing implementation to store the data and
   ```
   
   maybe "optimal implementation" instead of "best performing"?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org