Posted to issues@carbondata.apache.org by "Chetan Bhat (JIRA)" <ji...@apache.org> on 2017/11/15 13:16:01 UTC
[jira] [Updated] (CARBONDATA-1726) Carbon1.3.0-Streaming - Select query from spark-shell does not execute successfully for streaming table load
[ https://issues.apache.org/jira/browse/CARBONDATA-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chetan Bhat updated CARBONDATA-1726:
------------------------------------
Summary: Carbon1.3.0-Streaming - Select query from spark-shell does not execute successfully for streaming table load (was: Carbon1.3.0-Streaming - Select query from spark-sql does not execute successfully for streaming table load)
> Carbon1.3.0-Streaming - Select query from spark-shell does not execute successfully for streaming table load
> ------------------------------------------------------------------------------------------------------------
>
> Key: CARBONDATA-1726
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1726
> Project: CarbonData
> Issue Type: Bug
> Components: data-query
> Affects Versions: 1.3.0
> Environment: 3 node ant cluster SUSE 11 SP4
> Reporter: Chetan Bhat
> Labels: Functional
>
> Steps :
> // prepare csv file for batch loading
> cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin
> // generate streamSample.csv
> 100000001,batch_1,city_1,0.1,school_1:school_11$20
> 100000002,batch_2,city_2,0.2,school_2:school_22$30
> 100000003,batch_3,city_3,0.3,school_3:school_33$40
> 100000004,batch_4,city_4,0.4,school_4:school_44$50
> 100000005,batch_5,city_5,0.5,school_5:school_55$60
> // put to hdfs /tmp/streamSample.csv
> ./hadoop fs -put streamSample.csv /tmp
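> The "generate streamSample.csv" step above can be sketched as follows (a minimal POSIX-shell sketch; the upload command is repeated from the steps above and assumes the same hadoop bin directory):

```shell
# Write the five sample rows from this report to streamSample.csv.
# The heredoc delimiter is quoted so that '$20' etc. are not expanded.
cat > streamSample.csv <<'EOF'
100000001,batch_1,city_1,0.1,school_1:school_11$20
100000002,batch_2,city_2,0.2,school_2:school_22$30
100000003,batch_3,city_3,0.3,school_3:school_33$40
100000004,batch_4,city_4,0.4,school_4:school_44$50
100000005,batch_5,city_5,0.5,school_5:school_55$60
EOF
# Upload to HDFS as in the steps above (run from the hadoop bin directory):
# ./hadoop fs -put streamSample.csv /tmp
```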
> // spark-beeline
> cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
> bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 --driver-memory 5G --num-executors 3 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar "hdfs://hacluster/user/sparkhive/warehouse"
> bin/beeline -u jdbc:hive2://10.18.98.34:23040
> CREATE TABLE stream_table(
> id INT,
> name STRING,
> city STRING,
> salary FLOAT
> )
> STORED BY 'carbondata'
> TBLPROPERTIES('streaming'='true', 'sort_columns'='name');
> LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE stream_table OPTIONS('HEADER'='false');
> // spark-shell
> cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
> bin/spark-shell --master yarn-client
> import java.io.{File, PrintWriter}
> import java.net.ServerSocket
> import org.apache.spark.sql.{CarbonEnv, SparkSession}
> import org.apache.spark.sql.hive.CarbonRelation
> import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
> import org.apache.carbondata.core.constants.CarbonCommonConstants
> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
> import org.apache.spark.sql.CarbonSession._
> val carbonSession = SparkSession.
>   builder().
>   appName("StreamExample").
>   config("spark.sql.warehouse.dir", "hdfs://hacluster/user/sparkhive/warehouse").
>   config("javax.jdo.option.ConnectionURL", "jdbc:mysql://10.18.98.34:3306/sparksql?characterEncoding=UTF-8").
>   config("javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver").
>   config("javax.jdo.option.ConnectionPassword", "huawei").
>   config("javax.jdo.option.ConnectionUserName", "sparksql").
>   getOrCreateCarbonSession()
>
> carbonSession.sparkContext.setLogLevel("ERROR")
> carbonSession.sql("select * from stream_table").show
> Issue : The select query from spark-shell does not execute successfully after a load into the streaming table.
> Expected : The select query from spark-shell should execute successfully after a load into the streaming table.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)