Posted to commits@arrow.apache.org by np...@apache.org on 2020/08/11 19:58:11 UTC

[arrow-site] branch master updated: Add aws-data-wrangler to "Powered by" section (#71)

This is an automated email from the ASF dual-hosted git repository.

npr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/arrow-site.git


The following commit(s) were added to refs/heads/master by this push:
     new 7189c5a  Add aws-data-wrangler to "Powered by" section (#71)
7189c5a is described below

commit 7189c5af876bf527ffafc23431e452eac4bfdc5d
Author: Igor Tavares <ig...@gmail.com>
AuthorDate: Tue Aug 11 16:56:19 2020 -0300

    Add aws-data-wrangler to "Powered by" section (#71)
---
 powered_by.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/powered_by.md b/powered_by.md
index 48731c3..99f55e4 100644
--- a/powered_by.md
+++ b/powered_by.md
@@ -63,6 +63,9 @@ short description of your use case.
   large-scale data processing. Spark uses Apache Arrow to
   1. improve performance of conversion between Spark DataFrame and pandas DataFrame
   2. enable a set of vectorized user-defined functions (`pandas_udf`) in PySpark.
+* **[AWS Data Wrangler][34]:** Extends the power of the Pandas library to AWS,
+  connecting DataFrames to AWS data-related services such as Amazon Redshift,
+  AWS Glue, Amazon Athena, Amazon EMR, and Amazon QuickSight.
 * **[Dask][15]:** Python library for parallel and distributed execution of
   dynamic task graphs. Dask supports using pyarrow for accessing Parquet
   files
@@ -188,3 +191,4 @@ short description of your use case.
 [31]: https://github.com/RandomFractals/vscode-data-preview
 [32]: https://github.com/TileDB-Inc/TileDB
 [33]: https://github.com/TileDB-Inc/TileDB-VCF
+[34]: https://github.com/awslabs/aws-data-wrangler