Posted to issues@spark.apache.org by "Ashish Shrowty (JIRA)" <ji...@apache.org> on 2016/09/30 18:38:20 UTC

[jira] [Comment Edited] (SPARK-17709) spark 2.0 join - column resolution error

    [ https://issues.apache.org/jira/browse/SPARK-17709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535957#comment-15535957 ] 

Ashish Shrowty edited comment on SPARK-17709 at 9/30/16 6:37 PM:
-----------------------------------------------------------------

Sure .. the data is brought over into the EMR (5.0.0) HDFS cluster via Sqoop. Once there, I issue the following commands in Hive (2.1.0) to store it in S3 -

CREATE EXTERNAL TABLE <s3_tablename> (
   col1 bigint,
   col2 int,
   col3 string,
   ....
)
PARTITIONED BY (col8 int)
STORED AS PARQUET
LOCATION 's3_table_dir';

INSERT INTO <s3_tablename>
SELECT col1, col2, .... FROM <hdfs_tablename>;
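
For context, once the external table exists in the Hive metastore, Spark reads it back through spark.sql(), which is where the issue below shows up. A minimal sketch, assuming a SparkSession built with Hive support and a placeholder table name:

import org.apache.spark.sql.SparkSession

// Hive support is required so spark.sql() can resolve tables registered in the metastore
val spark = SparkSession.builder()
  .appName("read-s3-backed-hive-table")
  .enableHiveSupport()
  .getOrCreate()

// "s3_tablename" is a placeholder for the external table created above
val d1 = spark.sql("select * from s3_tablename")
d1.printSchema()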



was (Author: ashrowty):
Sure .. the data is brought over into the EMR (5.0.0) HDFS cluster via sqoop. Once there, I issue the following commands in Hive (2.1.0) to store it in S3 -

CREATE EXTERNAL TABLE <s3_tablename> (
   col1 bigint,
   col2 int,
   col3 string,
   ....
)
PARTITIONED BY (col1 int)
STORED AS PARQUET
LOCATION 's3_table_dir'

INSERT into <s3_tablename>
SELECT col1,col2,.... FROM <hdfs_tablename>


> spark 2.0 join - column resolution error
> ----------------------------------------
>
>                 Key: SPARK-17709
>                 URL: https://issues.apache.org/jira/browse/SPARK-17709
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: Ashish Shrowty
>              Labels: easyfix
>
> If I try to inner-join two DataFrames which originated from the same initial DataFrame that was loaded using a spark.sql() call, it results in an error -
> // reading from Hive .. the data is stored in Parquet format in Amazon S3
> val d1 = spark.sql("select * from <hivetable>")  
> val df1 = d1.groupBy("key1","key2")
>           .agg(avg("totalprice").as("avgtotalprice"))
> val df2 = d1.groupBy("key1","key2")
>           .agg(avg("itemcount").as("avgqty")) 
> df1.join(df2, Seq("key1","key2")) gives error -
> org.apache.spark.sql.AnalysisException: using columns ['key1,'key2] can 
> not be resolved given input columns: [key1, key2, avgtotalprice, avgqty];
> If the same DataFrame is initialized via spark.read.parquet(), the above code works. The same code also worked with Spark 1.6.2.
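
For reference, a minimal sketch of the workaround mentioned in the last line above, i.e. loading the data with spark.read.parquet() instead of spark.sql(). The S3 path is a placeholder; the column names come from the report:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.avg

val spark = SparkSession.builder().appName("SPARK-17709-workaround").getOrCreate()

// placeholder path pointing at the Parquet files that back the Hive table
val d1 = spark.read.parquet("s3://bucket/path/to/table")

val df1 = d1.groupBy("key1", "key2").agg(avg("totalprice").as("avgtotalprice"))
val df2 = d1.groupBy("key1", "key2").agg(avg("itemcount").as("avgqty"))

// per the report, the join resolves as expected when d1 comes from read.parquet()
val joined = df1.join(df2, Seq("key1", "key2"))
joined.show()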



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org