Posted to common-user@hadoop.apache.org by Mich Talebzadeh <mi...@peridale.co.uk> on 2015/04/08 00:46:42 UTC

FW: A simple insert stuck in hive

Hi.

 

I sent this to the Hive user group, but it seems more relevant to the
MapReduce side. The query inserts a single row into a table via Hive, so the
reduce phase should not play any role in it.

 

Today I have noticed the following issue.

 

A simple insert into a table just sits there, printing the following:

 

hive> insert into table mytest values(1,'test');

Query ID = hduser_20150407215959_bc030fac-258f-4996-b50f-3d2d49371cca

Total jobs = 3

Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_1428439695331_0002, Tracking URL =
http://rhes564:8088/proxy/application_1428439695331_0002/

Kill Command = /home/hduser/hadoop/hadoop-2.6.0/bin/hadoop job  -kill
job_1428439695331_0002

Hadoop job information for Stage-1: number of mappers: 1; number of
reducers: 0

2015-04-07 21:59:35,068 Stage-1 map = 0%,  reduce = 0%

2015-04-07 22:00:35,545 Stage-1 map = 0%,  reduce = 0%

2015-04-07 22:01:35,832 Stage-1 map = 0%,  reduce = 0%

2015-04-07 22:02:36,058 Stage-1 map = 0%,  reduce = 0%

2015-04-07 22:03:36,279 Stage-1 map = 0%,  reduce = 0%

2015-04-07 22:04:36,486 Stage-1 map = 0%,  reduce = 0%

 

I have been experimenting with concurrency settings for Hive, but that did
not help. My metastore is built in Oracle, so I dropped that schema,
recreated it from scratch, and removed the concurrency parameters. Initially
I was getting "container is running beyond virtual memory limits" for the
task, so I changed the following parameters in yarn-site.xml:

 

 

<property>

  <name>yarn.nodemanager.resource.memory-mb</name>

  <value>2048</value>

  <description>Amount of physical memory, in MB, that can be allocated for
containers.</description>

</property>

<property>

  <name>yarn.scheduler.minimum-allocation-mb</name>

  <value>1024</value>

</property>

 

and mapred-site.xml 

 

<property>

<name>mapreduce.map.memory.mb</name>

<value>4096</value>

</property>

<property>

<name>mapreduce.reduce.memory.mb</name>

<value>4096</value>

</property>

<property>

<name>mapreduce.map.java.opts</name>

<value>-Xmx3072m</value>

</property>

<property>

<name>mapreduce.recduce.java.opts</name>

<value>-Xmx6144m</value>

</property>

<property>

<name>yarn.app.mapreduce.am.resource.mb</name>

<value>400</value>

</property>

 
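For reference, a sketch of what a self-consistent mapred-site.xml fragment might look like (illustrative values only, not a confirmed fix). Two things stand out in the block above: the reduce-JVM property is spelled mapreduce.recduce.java.opts, so the intended mapreduce.reduce.java.opts setting is silently ignored, and container requests larger than yarn.nodemanager.resource.memory-mb (2048 above) can never be placed.

```xml
<!-- Illustrative only: container sizes fit within the 2048 MB NodeManager
     configured above, and each heap (-Xmx) is kept below its container size. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx819m</value>
</property>
```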

However, nothing has helped, other than that the virtual memory error has
gone. Any ideas appreciated.
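One possible explanation for the mapper staying at 0% (a hypothesis based on the values quoted above, not a confirmed diagnosis): YARN normalizes each container request up to a multiple of yarn.scheduler.minimum-allocation-mb, and a container can only be placed on a node whose yarn.nodemanager.resource.memory-mb can accommodate it. A rough sketch of that arithmetic:

```python
import math

def rounded_request(request_mb: int, minimum_allocation_mb: int) -> int:
    # YARN normalizes each request up to the next multiple of the scheduler minimum.
    return math.ceil(request_mb / minimum_allocation_mb) * minimum_allocation_mb

def can_schedule(request_mb: int, minimum_allocation_mb: int, nodemanager_mb: int) -> bool:
    # A container fits only if its normalized size is within what one NodeManager offers.
    return rounded_request(request_mb, minimum_allocation_mb) <= nodemanager_mb

# Values from the configs above:
# mapreduce.map.memory.mb = 4096 vs. a 2048 MB NodeManager -> never schedulable
print(can_schedule(4096, 1024, 2048))  # False
# yarn.app.mapreduce.am.resource.mb = 400 rounds up to 1024 -> schedulable
print(can_schedule(400, 1024, 2048))   # True
```

If this reading is right, the AM container starts (so the job launches and reports progress lines) while the 4096 MB map container sits pending forever, which matches the log above.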

 

Thanks

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
Coherence Cache

 

NOTE: The information in this email is proprietary and confidential. This
message is for the designated recipient only; if you are not the intended
recipient, you should destroy it immediately. Any information in this
message shall not be understood as given or endorsed by Peridale Ltd, its
subsidiaries or their employees, unless expressly so stated. It is the
responsibility of the recipient to ensure that this email is virus free;
therefore neither Peridale Ltd, its subsidiaries nor their employees accept
any responsibility.

 


Re: FW: A simple insert stuck in hive

Posted by Chris Mawata <ch...@gmail.com>.
If this is a paste, you have a typo where you say "recduce" instead of "reduce".
On Apr 7, 2015 6:47 PM, "Mich Talebzadeh" <mi...@peridale.co.uk> wrote:

> Hi.
>
>
>
> I sent this to hive user group but it seems that it is more relevant to
> map reduce operation. It is inserting a row into table via hive. So reduce
> should not play any role in it.
>
>
>
> Today I have noticed the following issue.
>
>
>
> A simple insert into a table is sting there throwing the following
>
>
>
> hive> insert into table mytest values(1,'test');
>
> Query ID = hduser_20150407215959_bc030fac-258f-4996-b50f-3d2d49371cca
>
> Total jobs = 3
>
> Launching Job 1 out of 3
>
> Number of reduce tasks is set to 0 since there's no reduce operator
>
> Starting Job = job_1428439695331_0002, Tracking URL =
> http://rhes564:8088/proxy/application_1428439695331_0002/
>
> Kill Command = /home/hduser/hadoop/hadoop-2.6.0/bin/hadoop job  -kill
> job_1428439695331_0002
>
> Hadoop job information for Stage-1: number of mappers: 1; number of
> reducers: 0
>
> 2015-04-07 21:59:35,068 Stage-1 map = 0%,  reduce = 0%
>
> 2015-04-07 22:00:35,545 Stage-1 map = 0%,  reduce = 0%
>
> 2015-04-07 22:01:35,832 Stage-1 map = 0%,  reduce = 0%
>
> 2015-04-07 22:02:36,058 Stage-1 map = 0%,  reduce = 0%
>
> 2015-04-07 22:03:36,279 Stage-1 map = 0%,  reduce = 0%
>
> 2015-04-07 22:04:36,486 Stage-1 map = 0%,  reduce = 0%
>
>
>
> I have been messing around with concurrency for hive. That did not work.
> My metastore is built in Oracle. So I drooped that schema and recreated
> from scratch. Got rid of concurrency parameters. First I was getting
> “container is running beyond virtual memory limits” for the task. I changed
> the following parameters in yarn-site.xml
>
>
>
>
>
> <property>
>
>   <name>yarn.nodemanager.resource.memory-mb</name>
>
>   <value>2048</value>
>
>   <description>Amount of physical memory, in MB, that can be allocated for
> containers.</description>
>
> </property>
>
> <property>
>
>   <name>yarn.scheduler.minimum-allocation-mb</name>
>
>   <value>1024</value>
>
> </property>
>
>
>
> and mapred-site.xml
>
>
>
> <property>
>
> <name>mapreduce.map.memory.mb</name>
>
> <value>4096</value>
>
> </property>
>
> <property>
>
> <name>mapreduce.reduce.memory.mb</name>
>
> <value>4096</value>
>
> </property>
>
> <property>
>
> <name>mapreduce.map.java.opts</name>
>
> <value>-Xmx3072m</value>
>
> </property>
>
> <property>
>
> <name>mapreduce.recduce.java.opts</name>
>
> <value>-Xmx6144m</value>
>
> </property>
>
> <property>
>
> <name>yarn.app.mapreduce.am.resource.mb</name>
>
> <value>400</value>
>
> </property>
>
>
>
> However, nothing has helped except that virtual memory error has gone. Any
> ideas appreciated.
>
>
>
> Thanks
>
>
>
> Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> *Publications due shortly:*
>
> *Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
> Coherence Cache*
>
>
>
> NOTE: The information in this email is proprietary and confidential. This
> message is for the designated recipient only, if you are not the intended
> recipient, you should destroy it immediately. Any information in this
> message shall not be understood as given or endorsed by Peridale Ltd, its
> subsidiaries or their employees, unless expressly so stated. It is the
> responsibility of the recipient to ensure that this email is virus free,
> therefore neither Peridale Ltd, its subsidiaries nor their employees accept
> any responsibility.
>
>
>
