Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/10/11 06:40:00 UTC

[jira] [Commented] (SPARK-28864) Add spark connector for Alibaba Log Service

    [ https://issues.apache.org/jira/browse/SPARK-28864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949184#comment-16949184 ] 

Hyukjin Kwon commented on SPARK-28864:
--------------------------------------

I don't think this should be in Spark for now. Let's do it as a separate project or in Bahir.

> Add spark connector for Alibaba Log Service
> -------------------------------------------
>
>                 Key: SPARK-28864
>                 URL: https://issues.apache.org/jira/browse/SPARK-28864
>             Project: Spark
>          Issue Type: New Feature
>          Components: Input/Output
>    Affects Versions: 3.0.0
>            Reporter: Ke Li
>            Priority: Major
>
> Alibaba Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. Its core storage engine, Loghub, is a large-scale distributed storage system that exposes producer and consumer APIs for pushing and pulling data, much like Kafka, AWS Kinesis and Azure Event Hubs do (see the usage sketch after this quoted description).
> Log Service provides a complete solution to help users collect data from both on-premise and cloud data sources. More than 10 PB of data is sent to and consumed from Loghub every day, and hundreds of thousands of Alibaba users rely on Log Service to build their DevOps and big data systems. A large portion of these users also work with Spark Streaming, Spark SQL and Spark Structured Streaming.
> Happy to hear any comments.
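
To make the proposal concrete, below is a minimal sketch of how such a connector might be used from Spark Structured Streaming, mirroring the built-in Kafka source. The "loghub" format name, all option keys, and the endpoint value are assumptions chosen for illustration only; they do not describe an existing API.

// Hypothetical usage sketch for a Loghub Structured Streaming source.
// The "loghub" short name and every option key below are assumptions.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object LoghubConnectorSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("loghub-structured-streaming-sketch")
      .getOrCreate()

    // Pull records from a Loghub logstore, analogous to reading a Kafka topic.
    val stream = spark.readStream
      .format("loghub")                                   // assumed connector short name
      .option("sls.project", "my-project")                // hypothetical option keys
      .option("sls.logstore", "my-logstore")
      .option("sls.endpoint", "cn-hangzhou.log.aliyuncs.com")
      .option("access.key.id", sys.env("ALIBABA_ACCESS_KEY_ID"))
      .option("access.key.secret", sys.env("ALIBABA_ACCESS_KEY_SECRET"))
      .load()

    // Write the raw records to the console just to show the end-to-end wiring.
    val query = stream.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()

    query.awaitTermination()
  }
}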



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org