Posted to issues@hive.apache.org by "Nirmalkumar (JIRA)" <ji...@apache.org> on 2016/09/27 07:46:20 UTC
[jira] [Commented] (HIVE-14844) Not able to create the Hive table with more than 700 columns
[ https://issues.apache.org/jira/browse/HIVE-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15525392#comment-15525392 ]
Nirmalkumar commented on HIVE-14844:
------------------------------------
It seems we are hitting a metastore limitation: the Hive Metastore "SERDE_PARAMS.PARAM_VALUE" column is limited to varchar(4000), while our SERDEPROPERTIES value is 5620 bytes.
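A quick way to see how close existing serde parameters are to that limit (this is a sketch assuming direct read access to the MySQL metastore; table and column names are those shown in the schema below) is:

```sql
-- List the largest SERDE_PARAMS values in the metastore.
-- Values approaching 4000 characters are at risk of failing the
-- INSERT into the varchar(4000) PARAM_VALUE column.
SELECT SERDE_ID,
       PARAM_KEY,
       CHAR_LENGTH(PARAM_VALUE) AS value_length
FROM   SERDE_PARAMS
ORDER  BY value_length DESC
LIMIT  10;
```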
> Not able to create the Hive table with more than 700 columns
> ------------------------------------------------------------
>
> Key: HIVE-14844
> URL: https://issues.apache.org/jira/browse/HIVE-14844
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Affects Versions: 1.1.0
> Environment: MySQL
> Reporter: Nirmalkumar
>
> We tried to create a Hive table with 700+ columns, which fails with the below error:
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: Put request failed : INSERT INTO `SERDE_PARAMS` (`PARAM_VALUE`,`SERDE_ID`,`PARAM_KEY`) VALUES (?,?,?)
> We use Hive 1.1.0, and below is the SERDE_PARAMS table definition in our MySQL metastore:
> mysql> desc SERDE_PARAMS;
> +-------------+---------------+------+-----+---------+-------+
> | Field | Type | Null | Key | Default | Extra |
> +-------------+---------------+------+-----+---------+-------+
> | SERDE_ID | bigint(20) | NO | PRI | NULL | |
> | PARAM_KEY | varchar(256) | NO | PRI | NULL | |
> | PARAM_VALUE | varchar(4000) | YES | | NULL | |
> +-------------+---------------+------+-----+---------+-------+
> As per Cloudera:
> It is a known limitation and there are no patches for it.
> The typical workaround is to increase the size of the field in the metastore, BUT this is not a Cloudera-supported solution and can break a future Metastore schema upgrade (for example, when upgrading CDH to a release that ships a newer Hive version).
> So could you please provide a fix for this?
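For reference, the unsupported workaround alluded to above is usually a manual schema change along these lines (hypothetical sketch; widening PARAM_VALUE by hand may conflict with future metastore schema upgrade scripts, as noted in the quoted text):

```sql
-- UNSUPPORTED workaround: widen PARAM_VALUE beyond varchar(4000).
-- MEDIUMTEXT removes the practical length limit, but the change may be
-- reverted or rejected by a future Hive metastore schema upgrade.
ALTER TABLE SERDE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
```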
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)