Posted to dev@lucene.apache.org by "Eshwar Gunturu (JIRA)" <ji...@apache.org> on 2018/03/15 15:25:00 UTC

[jira] [Updated] (SOLR-12102) Solr Index on Hive is Failing

     [ https://issues.apache.org/jira/browse/SOLR-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eshwar Gunturu updated SOLR-12102:
----------------------------------
    Description: 
I created a Solr index on a Hive table with the steps below. Everything works until I copy rows from the Hive internal table to the Hive external table, which fails. Please help.

 
 1) {{CREATE TABLE ER_ENTITY1000(entityid INT,claimid_s INT,firstname_s STRING,lastname_s STRING,addrline1_s STRING, addrline2_s STRING, city_s STRING, state_S STRING, country_s STRING, zipcode_s STRING, dob_s STRING, ssn_s STRING, dl_num_s STRING, proflic_s STRING, policynum_s STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';}}

2) {{LOAD DATA LOCAL INPATH '/home/Solr1.csv' OVERWRITE INTO TABLE ER_ENTITY1;}}
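
For reference, each row in Solr1.csv has to line up with the 15 comma-delimited columns declared in step 1; the row below is purely hypothetical and only illustrates the expected shape. Note that the LOAD statement targets ER_ENTITY1 while step 1 creates ER_ENTITY1000, so unless ER_ENTITY1 is a separate, pre-existing table, the names may need to match before step 5 has any rows to copy.

{{1,1001,John,Doe,123 Main St,Apt 4,Springfield,IL,USA,62701,1980-01-01,123-45-6789,D1234567,PL-99,POL-555}}

{{SELECT COUNT(*) FROM ER_ENTITY1000;  -- quick check that the source table actually holds rows}}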

3) {{add jar /home/solr-hive-serde-3.0.0.jar;}}
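
As a quick sanity check (not one of the reported steps), Hive's resource listing confirms the SerDe jar actually registered in the session, so the storage handler class used in step 4 can be found:

{{LIST JARS;}}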

4)
 {{CREATE EXTERNAL TABLE SOLR_ENTITY999(entityid INT,claimid_s INT,firstname_s STRING,lastname_s STRING,addrline1_s STRING, addrline2_s STRING, city_s STRING, state_S STRING, country_s STRING, zipcode_s STRING, dob_s STRING, ssn_s STRING, dl_num_s STRING, proflic_s STRING, policynum_s STRING) STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler' LOCATION '/user/SOLR_ENTITY1000' TBLPROPERTIES('solr.server.url' = 'http://URL/solr','solr.collection' = 'er_entity999','solr.query' = '*:*'); }}
********** All above steps work fine **********
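
Before the insert in step 5, one optional sanity check is to read back through the external table itself; the Lucidworks storage handler is designed to read from the collection using the configured solr.query, so a simple count should confirm that the er_entity999 collection exists and that the solr.server.url placeholder resolves. This is only a suggested check, not one of the reported steps:

{{SELECT COUNT(*) FROM SOLR_ENTITY999;}}

A count of 0 would be expected at this point, since nothing has been indexed yet; an error here instead would point at the Solr URL, the collection name, or the storage handler setup rather than at the INSERT itself.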

5) This step fails ...
 {{INSERT OVERWRITE TABLE SOLR_ENTITY999 SELECT * FROM ER_ENTITY1000;}}
 ... with error:

hive> INSERT OVERWRITE TABLE SOLR_ENTITY999 SELECT * FROM ER_ENTITY1000;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = i98779_20180308085142_3918b9ea-2158-4b0e-865f-2fcdefc17e4b
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2018-03-08 08:51:45,993 Stage-1 map = 0%, reduce = 0%
Ended Job = job_local1283927429_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: MAPRFS Read: 0 MAPRFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

 

The Hive job failure error says:

java.lang.Exception: Unknown container. Container either has not started or has already completed or doesn't belong to this node at all
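
The "Unknown container" text is normally what YARN reports when it is asked for the logs of a container it never ran. The transcript above shows job_local1283927429_0001 and "Job running in-process (local Hadoop)", i.e. the insert was executed by the local job runner, so the real map-task stack trace is more likely to be in the Hive client log (often /tmp/<user>/hive.log by default, though distributions such as MapR may place it elsewhere) than in the YARN container logs. As a first, non-authoritative check, the same Hive session can report how the job is being dispatched:

{{SET mapreduce.framework.name;}}
{{SET hive.exec.mode.local.auto;}}

If these show local execution on a cluster that is expected to run on YARN, re-running the insert in cluster mode (or on Tez/Spark, as the deprecation warning itself suggests) should at least surface the underlying exception from the storage handler.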

 



> Solr Index on Hive is Failing
> -----------------------------
>
>                 Key: SOLR-12102
>                 URL: https://issues.apache.org/jira/browse/SOLR-12102
>             Project: Solr
>          Issue Type: Wish
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: SolrJ
>         Environment: MapR
> Hive
> Solr
>            Reporter: Eshwar Gunturu
>            Priority: Critical
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org