Posted to dev@hive.apache.org by Bhavesh Shah <bh...@gmail.com> on 2012/02/02 12:46:09 UTC

Table not creating in hive

Hello all,

After successfully importing the tables into Hive I am not able to see
them in Hive.
When I imported a table I saw its directory on HDFS (under
/user/hive/warehouse/), but when I execute "SHOW TABLES" in Hive
the table is not in the list.
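
For reference, those two checks look like this on the command line (warehouse path is the Hive default; the exact invocation may differ):

hadoop dfs -ls /user/hive/warehouse/
hive -e "SHOW TABLES;"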

I have searched a lot about this but have not found anything.
Please suggest a solution.




-- 
Thanks and Regards,
Bhavesh Shah

RE: Not able to create table in hive

Posted by "Manish.Bhoge" <Ma...@target.com>.
This command only reports data node and block details.

My question was not specific to a particular table. I was not able to run any Hive command from the command line, although the same SQL runs from the web UI.

Yes, I am using 2 different users: the 'Guest' user from the web and the 'admin' user from the command line.

Thank You,
Manish

-----Original Message-----
From: alo alt [mailto:wget.null@googlemail.com] 
Sent: Friday, February 03, 2012 1:37 PM
To: user@hive.apache.org
Cc: dev@hive.apache.org; mgrover@oanda.com
Subject: Re: Not able to create table in hive
Importance: High

check the rights in hdfs for the table

hadoop dfs -fsck /path/to/warehouse/dir/table

The table was created with the right of the user guest, so only guest can write in.

- Alex  

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 3, 2012, at 8:58 AM, Manish.Bhoge wrote:

> Thanks Mark, 
> 
> I do have an access on metastore as I am login as admin. However when I query through Hue web interface it is working for me (using User: Guest). 
> 
> Here is how my hive-site.xml looks like. Is there any property that I need to extend here. 
> 
> <property>
>  <name>javax.jdo.option.ConnectionURL</name>
>  <value>jdbc:derby:;databaseName=/usr/share/hue/metastore_db;create=true</value>
>  <description>JDBC connect string for a JDBC metastore</description>
> </property>
> 
> <property>
>  <name>javax.jdo.option.ConnectionDriverName</name>
>  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
>  <description>Driver class name for a JDBC metastore</description>
> </property>
> 
> <property>
>  <name>hive.hwi.war.file</name>
>  <value>/usr/lib/hive/lib/hive-hwi-0.7.0-CDH3B4.war</value>
>  <description>This is the WAR file with the jsp content for Hive Web Interface</description>
> </property>
> 
> Thank You,
> Manish
> 
> -----Original Message-----
> From: Mark Grover [mailto:mgrover@oanda.com] 
> Sent: Thursday, February 02, 2012 8:15 PM
> To: user@hive.apache.org
> Subject: Re: Not able to create table in hive
> 
> Hi Manish,
> Sounds like a problem with your metastore. Can you verify that you have the access and correct permissions to your metastore?
> 
> Mark
> 
> Mark Grover, Business Intelligence Analyst
> OANDA Corporation 
> 
> www: oanda.com www: fxtrade.com 
> e: mgrover@oanda.com 
> 
> "Best Trading Platform" - World Finance's Forex Awards 2009. 
> "The One to Watch" - Treasury Today's Adam Smith Awards 2009. 
> 
> 
> ----- Original Message -----
> From: "Manish.Bhoge" <Ma...@target.com>
> To: user@hive.apache.org, dev@hive.apache.org
> Sent: Thursday, February 2, 2012 8:45:26 AM
> Subject: Not able to create table in hive
> 
> 
> 
> 
> Hi, 
> 
> 
> 
> I am trying to create a table in Hive using below DDL: 
> 
> 
> 
> CREATE TABLE pokes (foo INT, bar STRING); 
> 
> 
> 
> I am getting below error, I have logged in as admin user : 
> 
> 
> 
> FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 
> 
> 
> 
> NestedThrowables: 
> 
> org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 
> 
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask 
> 
> 
> 
> Any idea? 
> 
> 
> 
> Thank You, 
> 
> Manish 
> 


Re: Not able to create table in hive

Posted by hadoop hive <ha...@gmail.com>.
Thanks Mark,

I do have access to the metastore, as I am logged in as admin. However, when I
query through the Hue web interface it works for me (using user: Guest).

Here is what my hive-site.xml looks like. Is there any property that I need
to add here?

<property>
 <name>javax.jdo.option.ConnectionURL</name>

 <value>jdbc:derby:;databaseName=/usr/share/hue/metastore_db;create=true</value>

 <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
 <name>javax.jdo.option.ConnectionDriverName</name>
 <value>org.apache.derby.jdbc.EmbeddedDriver</value>
 <description>Driver class name for a JDBC metastore</description>
</property>

<property>
 <name>hive.hwi.war.file</name>
 <value>/usr/lib/hive/lib/hive-hwi-0.7.0-CDH3B4.war</value>
 <description>This is the WAR file with the jsp content for Hive Web
Interface</description>
</property>

Thank You,
Manish

On Fri, Feb 3, 2012 at 1:37 PM, alo alt <wg...@googlemail.com> wrote:

> check the rights in hdfs for the table
>
> hadoop dfs -fsck /path/to/warehouse/dir/table
>
> The table was created with the right of the user guest, so only guest can
> write in.
>
> - Alex
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On Feb 3, 2012, at 8:58 AM, Manish.Bhoge wrote:
>
> > Thanks Mark,
> >
> > I do have an access on metastore as I am login as admin. However when I
> query through Hue web interface it is working for me (using User: Guest).
> >
> > Here is how my hive-site.xml looks like. Is there any property that I
> need to extend here.
> >
> > <property>
> >  <name>javax.jdo.option.ConnectionURL</name>
> >
>  <value>jdbc:derby:;databaseName=/usr/share/hue/metastore_db;create=true</value>
> >  <description>JDBC connect string for a JDBC metastore</description>
> > </property>
> >
> > <property>
> >  <name>javax.jdo.option.ConnectionDriverName</name>
> >  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
> >  <description>Driver class name for a JDBC metastore</description>
> > </property>
> >
> > <property>
> >  <name>hive.hwi.war.file</name>
> >  <value>/usr/lib/hive/lib/hive-hwi-0.7.0-CDH3B4.war</value>
> >  <description>This is the WAR file with the jsp content for Hive Web
> Interface</description>
> > </property>
> >
> > Thank You,
> > Manish
> >
> > -----Original Message-----
> > From: Mark Grover [mailto:mgrover@oanda.com]
> > Sent: Thursday, February 02, 2012 8:15 PM
> > To: user@hive.apache.org
> > Subject: Re: Not able to create table in hive
> >
> > Hi Manish,
> > Sounds like a problem with your metastore. Can you verify that you have
> the access and correct permissions to your metastore?
> >
> > Mark
> >
> > Mark Grover, Business Intelligence Analyst
> > OANDA Corporation
> >
> > www: oanda.com www: fxtrade.com
> > e: mgrover@oanda.com
> >
> > "Best Trading Platform" - World Finance's Forex Awards 2009.
> > "The One to Watch" - Treasury Today's Adam Smith Awards 2009.
> >
> >
> > ----- Original Message -----
> > From: "Manish.Bhoge" <Ma...@target.com>
> > To: user@hive.apache.org, dev@hive.apache.org
> > Sent: Thursday, February 2, 2012 8:45:26 AM
> > Subject: Not able to create table in hive
> >
> >
> >
> >
> > Hi,
> >
> >
> >
> > I am trying to create a table in Hive using below DDL:
> >
> >
> >
> > CREATE TABLE pokes (foo INT, bar STRING);
> >
> >
> >
> > I am getting below error, I have logged in as admin user :
> >
> >
> >
> > FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot
> get a connection, pool error Could not create a validated object, cause: A
> read-only user or a user in a read-only database is not permitted to
> disable read-only mode on a connection.
> >
> >
> >
> > NestedThrowables:
> >
> > org.apache.commons.dbcp.SQLNestedException: Cannot get a connection,
> pool error Could not create a validated object, cause: A read-only user or
> a user in a read-only database is not permitted to disable read-only mode
> on a connection.
> >
> > FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> >
> >
> >
> > Any idea?
> >
> >
> >
> > Thank You,
> >
> > Manish
> >
>
>



Re: Not able to create table in hive

Posted by alo alt <wg...@googlemail.com>.
Check the rights in HDFS for the table:

hadoop dfs -fsck /path/to/warehouse/dir/table

The table was created with the rights of the user guest, so only guest can write to it.
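
For a quick ownership check you can also list the warehouse directory (default path assumed; <table_dir> is a placeholder):

hadoop dfs -ls /user/hive/warehouse
hadoop dfs -ls /user/hive/warehouse/<table_dir>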

- Alex  

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 3, 2012, at 8:58 AM, Manish.Bhoge wrote:

> Thanks Mark, 
> 
> I do have an access on metastore as I am login as admin. However when I query through Hue web interface it is working for me (using User: Guest). 
> 
> Here is how my hive-site.xml looks like. Is there any property that I need to extend here. 
> 
> <property>
>  <name>javax.jdo.option.ConnectionURL</name>
>  <value>jdbc:derby:;databaseName=/usr/share/hue/metastore_db;create=true</value>
>  <description>JDBC connect string for a JDBC metastore</description>
> </property>
> 
> <property>
>  <name>javax.jdo.option.ConnectionDriverName</name>
>  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
>  <description>Driver class name for a JDBC metastore</description>
> </property>
> 
> <property>
>  <name>hive.hwi.war.file</name>
>  <value>/usr/lib/hive/lib/hive-hwi-0.7.0-CDH3B4.war</value>
>  <description>This is the WAR file with the jsp content for Hive Web Interface</description>
> </property>
> 
> Thank You,
> Manish
> 
> -----Original Message-----
> From: Mark Grover [mailto:mgrover@oanda.com] 
> Sent: Thursday, February 02, 2012 8:15 PM
> To: user@hive.apache.org
> Subject: Re: Not able to create table in hive
> 
> Hi Manish,
> Sounds like a problem with your metastore. Can you verify that you have the access and correct permissions to your metastore?
> 
> Mark
> 
> Mark Grover, Business Intelligence Analyst
> OANDA Corporation 
> 
> www: oanda.com www: fxtrade.com 
> e: mgrover@oanda.com 
> 
> "Best Trading Platform" - World Finance's Forex Awards 2009. 
> "The One to Watch" - Treasury Today's Adam Smith Awards 2009. 
> 
> 
> ----- Original Message -----
> From: "Manish.Bhoge" <Ma...@target.com>
> To: user@hive.apache.org, dev@hive.apache.org
> Sent: Thursday, February 2, 2012 8:45:26 AM
> Subject: Not able to create table in hive
> 
> 
> 
> 
> Hi, 
> 
> 
> 
> I am trying to create a table in Hive using below DDL: 
> 
> 
> 
> CREATE TABLE pokes (foo INT, bar STRING); 
> 
> 
> 
> I am getting below error, I have logged in as admin user : 
> 
> 
> 
> FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 
> 
> 
> 
> NestedThrowables: 
> 
> org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 
> 
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask 
> 
> 
> 
> Any idea? 
> 
> 
> 
> Thank You, 
> 
> Manish 
> 



RE: Not able to create table in hive

Posted by "Manish.Bhoge" <Ma...@target.com>.
Thanks Mark, 

I do have access to the metastore, as I am logged in as admin. However, when I query through the Hue web interface it works for me (using user: Guest).

Here is what my hive-site.xml looks like. Is there any property that I need to add here?

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/usr/share/hue/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>hive.hwi.war.file</name>
  <value>/usr/lib/hive/lib/hive-hwi-0.7.0-CDH3B4.war</value>
  <description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>

Thank You,
Manish

-----Original Message-----
From: Mark Grover [mailto:mgrover@oanda.com] 
Sent: Thursday, February 02, 2012 8:15 PM
To: user@hive.apache.org
Subject: Re: Not able to create table in hive

Hi Manish,
Sounds like a problem with your metastore. Can you verify that you have the access and correct permissions to your metastore?

Mark

Mark Grover, Business Intelligence Analyst
OANDA Corporation 

www: oanda.com www: fxtrade.com 
e: mgrover@oanda.com 

"Best Trading Platform" - World Finance's Forex Awards 2009. 
"The One to Watch" - Treasury Today's Adam Smith Awards 2009. 


----- Original Message -----
From: "Manish.Bhoge" <Ma...@target.com>
To: user@hive.apache.org, dev@hive.apache.org
Sent: Thursday, February 2, 2012 8:45:26 AM
Subject: Not able to create table in hive




Hi, 



I am trying to create a table in Hive using below DDL: 



CREATE TABLE pokes (foo INT, bar STRING); 



I am getting below error, I have logged in as admin user : 



FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 



NestedThrowables: 

org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask 



Any idea? 



Thank You, 

Manish 



Re: Not able to create table in hive

Posted by Mark Grover <mg...@oanda.com>.
Hi Manish,
Sounds like a problem with your metastore. Can you verify that you have access and the correct permissions to your metastore?
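
For an embedded Derby metastore like the one configured in the hive-site.xml posted in this thread, that usually means checking who owns the Derby database directory and its lock files (path taken from that configuration):

ls -ld /usr/share/hue/metastore_db
ls -l /usr/share/hue/metastore_db/*.lck

If the user running the Hive CLI cannot write there, Derby typically opens the database read-only, which would explain the error quoted below.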

Mark

Mark Grover, Business Intelligence Analyst
OANDA Corporation 

www: oanda.com www: fxtrade.com 
e: mgrover@oanda.com 

"Best Trading Platform" - World Finance's Forex Awards 2009. 
"The One to Watch" - Treasury Today's Adam Smith Awards 2009. 


----- Original Message -----
From: "Manish.Bhoge" <Ma...@target.com>
To: user@hive.apache.org, dev@hive.apache.org
Sent: Thursday, February 2, 2012 8:45:26 AM
Subject: Not able to create table in hive




Hi, 



I am trying to create a table in Hive using below DDL: 



CREATE TABLE pokes (foo INT, bar STRING); 



I am getting below error, I have logged in as admin user : 



FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 



NestedThrowables: 

org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection. 

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask 



Any idea? 



Thank You, 

Manish 


Not able to create table in hive

Posted by "Manish.Bhoge" <Ma...@target.com>.
Hi,

I am trying to create a table in Hive using the DDL below:

CREATE TABLE pokes (foo INT, bar STRING);

I am getting the error below; I am logged in as the admin user:

FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
NestedThrowables:
org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

Any idea?

Thank You,
Manish



Re: Table not creating in hive

Posted by Bejoy Ks <be...@yahoo.com>.
Bhavesh,
    This is a Sqoop-related question, hence moving it to the Sqoop user group with the Hive user group in cc.

Did you do your Sqoop import by specifying --hive-import?

If Hive table creation still doesn't seem to work, execute the Sqoop command with --verbose and scan the logs to see what is happening after the HDFS import. You can paste the relevant parts of the log here.
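
For example, an import of this shape (connection string, credentials, and table names are placeholders) would produce the verbose Hive-import log:

sqoop import --connect 'jdbc:sqlserver://<host>;username=<user>;password=<pass>;database=<db>' \
  --table <SourceTable> --hive-table <hive_table> -m 1 --hive-import --verbose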

Regards
Bejoy.K.S




________________________________
 From: Bhavesh Shah <bh...@gmail.com>
To: user@hive.apache.org; dev@hive.apache.org 
Sent: Thursday, February 2, 2012 5:16 PM
Subject: Table not creating in hive
 

Hello all,

After successfully importing the tables in hive I am not able to see the table in Hive.
When I imported the table I saw the dir on HDFS (under /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
the table is not in the list.

I find a lot about it but not getting anything.
Pls suggest me some solution for it.




-- 
Thanks and Regards,
Bhavesh Shah

Re: Table not creating in hive

Posted by Bhavesh Shah <bh...@gmail.com>.
Alex,
I am importing from MS SQL Server.
I have just one Hive installation (hive-0.7.1), and I have installed MySQL on Ubuntu
and am using it as the database for Hive.

--
Regards,
Bhavesh Shah

On Fri, Feb 3, 2012 at 2:09 PM, alo alt <wg...@googlemail.com> wrote:

> Let me understanding:
>
> you import from one box with installed hdfs and hive from a mysql server,
> right?
> Or did you have more as one hive box? And sqoop is running as the same
> user you use for hive?
>
> Note: the included and per default enabled metatore is a derby-db, local
> only. If you use more as one hive installation you have to change the
> metastore db first (mostly mysql).
>
> - Alex
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On Feb 3, 2012, at 9:32 AM, Bhavesh Shah wrote:
>
> > > Hello Alex,
> > > I have checked the rights and I am using the same user for importing too.
> > > Do I need to install Hive again so that I can solve my problem?
> >
> > --
> > Regards,
> > Bhavesh Shah
> >
> >
> > On Fri, Feb 3, 2012 at 1:45 PM, alo alt <wg...@googlemail.com>
> wrote:
> > check the hdfs for the rights:
> > hadoop dfs -ls /path/
> >
> > the config looks okay, so I assume that some tables was created in hue
> with other rights (rw hue user, r for all other). That can you check with
> -ls or in the WebUI -> browse filesystem -> click trough /hive/warehouse
> >
> > use the same user for import, operations and hue. Or enable kerberos
> auth ;)
> >
> > best,
> >  Alex
> >
> > --
> > Alexander Lorenz
> > http://mapredit.blogspot.com
> >
> > On Feb 3, 2012, at 9:09 AM, Bhavesh Shah wrote:
> >
> > > Hello Alex,
> > > Thanks for your reply.
> > > I have observed one thing this thing is happening with some tables
> only. While some tables import with the complete data while some not.
> > > But the issue is that though import completely or not their entry is
> not listed in "SHOW TABLE" command.
> > >
> > > Why this is happening I am not getting.
> > > Is there any problem in configuration?
> > >
> > >
> > >
> > > -
> > > Thanks and Regards,
> > > Bhavesh Shah
> > >
> > >
> > >
> > >
> > > On Fri, Feb 3, 2012 at 1:34 PM, alo alt <wg...@googlemail.com>
> wrote:
> > > 0 records exported, so the table will be not created since they have
> no data. Also check the file:
> > > > /java.io.IOException: Destination
> '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> > >
> > > sqoop will move it, but it still exists.
> > >
> > > - Alex
> > >
> > > --
> > > Alexander Lorenz
> > > http://mapredit.blogspot.com
> > >
> > > On Feb 3, 2012, at 6:22 AM, Bhavesh Shah wrote:
> > >
> > > >
> > > >
> > > > ---------- Forwarded message ----------
> > > > From: Bhavesh Shah <bh...@gmail.com>
> > > > Date: Fri, Feb 3, 2012 at 10:38 AM
> > > > Subject: Re: Table not creating in hive
> > > > To: dev@hive.apache.org, sqoop-user@incubator.apache.org
> > > >
> > > >
> > > > Hello Bejoy & Alexis,
> > > > Thanks for your reply.
> > > > I am using mysql as a database (and not derby)
> > > > Previuosly I am using --split by 1 and is working fine, but when I
> installed MySQL and change the database then I got the error for --split-by
> option and thats why I use -m 1.
> > > > But again due to that it is showing that data retrieve is 0.
> > > >
> > > > Here are the logs.
> > > > hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect
> 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest'
> --table Appointment --hive-table appointment -m 1 --hive-import --verbose
> > > >
> > > > 12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> > > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific
> delimiters for output. You can override
> > > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with
> --fields-terminated-by, etc.
> > > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by
> /home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
> > > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
> com.cloudera.sqoop.manager.DefaultManagerFactory
> > > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory:
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > > 12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using
> Microsoft's SQL Server - Hadoop Connector
> > > > 12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize
> of 1000
> > > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager
> com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
> > > > 12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
> > > > 12/01/31 22:33:40 DEBUG manager.SqlManager: No connection
> paramenters specified. Using regular API for making connection.
> > > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1,
> ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1,
> RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1,
> Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12,
> ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93,
> ModifiedByUid:1, SingleDayAppointmentGroupUid:1,
> MultiDayAppointmentGroupUid:1,
> > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is
> Appointment.java
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > > 12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is
> /home/hadoop/hadoop-0.20.2-cdh3u2
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with
> args:
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
> > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3
u0//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > > 12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
> > > > java.io.IOException: Destination
> '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> > > >     at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
> > > >     at
> com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
> > > >     at
> com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
> > > >     at
> com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
> > > >     at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
> > > >     at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
> > > >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> > > >     at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
> > > >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
> > > >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
> > > >     at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
> > > > 12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class
> files in directory:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
> > > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class
> -> Appointment.class
> > > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar
> file
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > > > 12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of
> Appointment
> > > > 12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > > 12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table
> class: Appointment
> > > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using
> InputFormat: class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
> > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
> > > > 12/01/31 22:33:43 INFO mapred.JobClient: Running job:
> job_201201311414_0051
> > > > 12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
> > > > 12/01/31 22:33:48 INFO mapred.JobClient: Job complete:
> job_201201311414_0051
> > > > 12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
> maps waiting after reserving slots (ms)=0
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
> > > > 12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
> > > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes
> in 6.2606 seconds (0 bytes/sec)
> > > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> > > > 12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files
> from import process: Appointment/_logs
> > > > 12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into
> Hive
> > > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
> > > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable:
> appointment
> > > > 12/01/31 22:33:48 DEBUG manager.SqlManager: No connection
> paramenters specified. Using regular API for making connection.
> > > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to
> be cast to a less precise type in Hive
> > > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be
> cast to a less precise type in Hive
> > > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had
> to be cast to a less precise type in Hive
> > > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had
> to be cast to a less precise type in Hive
> > > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement:
> CREATE TABLE IF NOT EXISTS `appointment` ( `AppointmentUid` STRING,
> `ExternalID` STRING, `PatientUid` STRING, `StartTime` STRING, `EndTime`
> STRING,`Note` STRING, `AppointmentTypeUid` STRING, `AppointmentStatusUid`
> STRING, `CheckOutNote` STRING, `CreatedDate` STRING, `CreatedByUid` STRING)
> COMMENT 'Imported by sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
> > > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD
> DATA INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE
> `appointment`
> > > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
> > > > 12/01/31 22:33:50 INFO hive.HiveImport: Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
> > > > 12/01/31 22:33:52 INFO hive.HiveImport: OK
> > > > 12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
> > > > 12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table
> default.appointment
> > > > 12/01/31 22:33:53 INFO hive.HiveImport: OK
> > > > 12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
> > > > 12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks and Regards,
> > > > Bhavesh Shah
> > > >
> > > >
> > > >
> > > > On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <
> alexisdct@gmail.com> wrote:
> > > > This is because you need the metastore.
> > > > If you aren't installed in a databases,
> > > > it installed with derby in the directory when
> > > > you access to hive, remember where was it.
> > > > There you should find the directory name _metastore
> > > > and in this directory access to hive.
> > > >
> > > > Regards.
> > > >
> > > > On February 2, 2012 at 05:46, Bhavesh Shah <bhavesh25shah@gmail.com> wrote:
> > > >
> > > > > Hello all,
> > > > >
> > > > > After successfully importing the tables in hive I am not able to
> see the
> > > > > table in Hive.
> > > > > When I imported the table I saw the dir on HDFS (under
> > > > > /user/hive/warehouse/) but when I execute command in Hive "SHOW
> TABLES"
> > > > > the table is not in the list.
> > > > >
> > > > > I find a lot about it but not getting anything.
> > > > > Pls suggest me some solution for it.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Thanks and Regards,
> > > > > Bhavesh Shah
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Ing. Alexis de la Cruz Toledo.
> > > > *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro
> Zacatenco. México,
> > > > D.F, 07360 *
> > > > *CINVESTAV, DF.*
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Bhavesh Shah
> > > >
> > >
> > >
> > >
> > >
> > > -
> >
> >
> >
> >
> >
>
>


-- 
Regards,
Bhavesh Shah

Re: Table not creating in hive

Posted by alo alt <wg...@googlemail.com>.
Let me make sure I understand:

you import from a mysql server onto one box with HDFS and Hive installed, right?
Or do you have more than one Hive box? And is sqoop running as the same user you use for Hive?

Note: the included and per-default enabled metastore is a Derby DB, local only. If you use more than one Hive installation you have to change the metastore DB first (usually to MySQL).
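
As a quick check you can print which metastore URL a Hive session actually uses by running "set javax.jdo.option.ConnectionURL;" in the hive shell; if it points at an embedded Derby database, every working directory gets its own metastore, so SHOW TABLES can differ between sessions.

A minimal sketch of shared-MySQL metastore settings for hive-site.xml could look like the following; the host, database name, and credentials are only placeholders, and the MySQL JDBC driver jar must be on Hive's classpath:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastorehost:3306/hivemetastore?createDatabaseIfNotExist=true</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>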

- Alex 

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 3, 2012, at 9:32 AM, Bhavesh Shah wrote:

> Hello Alex,
> I have checked the rights and I am using the same user for importing too.
> Do I need to install Hive again so that I can solve my problem?
> 
> -- 
> Regards,
> Bhavesh Shah
> 
> 
> On Fri, Feb 3, 2012 at 1:45 PM, alo alt <wg...@googlemail.com> wrote:
> check the hdfs for the rights:
> hadoop dfs -ls /path/
> 
> the config looks okay, so I assume that some tables were created in Hue with other rights (rw for the hue user, r for all others). You can check that with -ls or in the WebUI -> browse filesystem -> click through /hive/warehouse
> 
> use the same user for import, operations and hue. Or enable kerberos auth ;)
> 
> best,
>  Alex
> 
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
> 
> On Feb 3, 2012, at 9:09 AM, Bhavesh Shah wrote:
> 
> > Hello Alex,
> > Thanks for your reply.
> > > I have observed that this is happening with some tables only: some tables import with complete data while some do not.
> > > But the issue is that, whether the import completes or not, their entry is not listed by the "SHOW TABLES" command.
> > >
> > > I do not understand why this is happening.
> > > Is there any problem in the configuration?
> >
> >
> >
> > -
> > Thanks and Regards,
> > Bhavesh Shah
> >
> >
> >
> >
> > On Fri, Feb 3, 2012 at 1:34 PM, alo alt <wg...@googlemail.com> wrote:
> > 0 records exported, so the table will not be created since there is no data. Also check the file:
> > > /java.io.IOException: Destination '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> >
> > sqoop will move it, but it still exists.
> >
> > - Alex
> >
> > --
> > Alexander Lorenz
> > http://mapredit.blogspot.com
> >
> > On Feb 3, 2012, at 6:22 AM, Bhavesh Shah wrote:
> >
> > >
> > >
> > > ---------- Forwarded message ----------
> > > From: Bhavesh Shah <bh...@gmail.com>
> > > Date: Fri, Feb 3, 2012 at 10:38 AM
> > > Subject: Re: Table not creating in hive
> > > To: dev@hive.apache.org, sqoop-user@incubator.apache.org
> > >
> > >
> > > Hello Bejoy & Alexis,
> > > Thanks for your reply.
> > > I am using MySQL as the database (and not Derby).
> > > Previously I was using --split-by 1 and it was working fine, but when I installed MySQL and changed the database I got an error for the --split-by option, and that is why I use -m 1.
> > > But again, because of that, it shows that 0 records are retrieved.
> > >
> > > Here are the logs.
> > > hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest' --table Appointment --hive-table appointment -m 1 --hive-import --verbose
> > >
> > > 12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
> > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by /home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > 12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using Microsoft's SQL Server - Hadoop Connector
> > > 12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of 1000
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
> > > 12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
> > > 12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1, ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1, RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1, Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12, ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93, ModifiedByUid:1, SingleDayAppointmentGroupUid:1, MultiDayAppointmentGroupUid:1,
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is Appointment.java
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > 12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is /home/hadoop/hadoop-0.20.2-cdh3u2
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with args:
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//
hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > 12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
> > > java.io.IOException: Destination '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> > >     at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
> > >     at com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
> > >     at com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
> > >     at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
> > >     at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
> > >     at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
> > >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> > >     at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
> > >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
> > >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
> > >     at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
> > > 12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class files in directory: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
> > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class -> Appointment.class
> > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar file /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > > 12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of Appointment
> > > 12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > > 12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table class: Appointment
> > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
> > > 12/01/31 22:33:43 INFO mapred.JobClient: Running job: job_201201311414_0051
> > > 12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
> > > 12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
> > > 12/01/31 22:33:48 INFO mapred.JobClient: Job complete: job_201201311414_0051
> > > 12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
> > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 6.2606 seconds (0 bytes/sec)
> > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> > > 12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from import process: Appointment/_logs
> > > 12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID` STRING, `PatientUid` STRING, `StartTime` STRING, `EndTime` STRING,`Note` STRING, `AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING, `CheckOutNote` STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT 'Imported by sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
> > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE `appointment`
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
> > > 12/01/31 22:33:50 INFO hive.HiveImport: Hive history file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
> > > 12/01/31 22:33:52 INFO hive.HiveImport: OK
> > > 12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table default.appointment
> > > 12/01/31 22:33:53 INFO hive.HiveImport: OK
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Thanks and Regards,
> > > Bhavesh Shah
> > >
> > >
> > >
> > > On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <al...@gmail.com> wrote:
> > > This is because you need the metastore.
> > > If you haven't installed it in a database,
> > > it is installed with Derby in the directory from which
> > > you accessed Hive; remember where that was.
> > > There you should find the directory named _metastore,
> > > and you should access Hive from that directory.
> > >
> > > Regards.
> > >
> > > On 2 February 2012 at 05:46, Bhavesh Shah <bh...@gmail.com> wrote:
> > >
> > > > Hello all,
> > > >
> > > > After successfully importing the tables in hive I am not able to see the
> > > > table in Hive.
> > > > When I imported the table I saw the dir on HDFS (under
> > > > /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
> > > > the table is not in the list.
> > > >
> > > > I find a lot about it but not getting anything.
> > > > Pls suggest me some solution for it.
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks and Regards,
> > > > Bhavesh Shah
> > > >
> > >
> > >
> > >
> > > --
> > > Ing. Alexis de la Cruz Toledo.
> > > *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco. México,
> > > D.F, 07360 *
> > > *CINVESTAV, DF.*
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Regards,
> > > Bhavesh Shah
> > >
> >
> >
> >
> >
> > -
> 
> 
> 
> 
> 


Re: Table not creating in hive

Posted by Bhavesh Shah <bh...@gmail.com>.
Hello Alex,
I have checked the rights and I am using the same user for importing too.
Do I need to install Hive again so that I can solve my problem?

-- 
Regards,
Bhavesh Shah


On Fri, Feb 3, 2012 at 1:45 PM, alo alt <wg...@googlemail.com> wrote:

> check the hdfs for the rights:
> hadoop dfs -ls /path/
>
> the config looks okay, so I assume that some tables were created in Hue
> with other rights (rw for the hue user, r for all others). You can check
> that with -ls or in the WebUI -> browse filesystem -> click through /hive/warehouse
>
> use the same user for import, operations and hue. Or enable kerberos auth
> ;)
>
> best,
>  Alex
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On Feb 3, 2012, at 9:09 AM, Bhavesh Shah wrote:
>
> > Hello Alex,
> > Thanks for your reply.
> > I have observed that this is happening with some tables only:
> some tables import with complete data while some do not.
> > But the issue is that, whether the import completes or not, their entry is
> not listed by the "SHOW TABLES" command.
> >
> > I do not understand why this is happening.
> > Is there any problem in the configuration?
> >
> >
> >
> > -
> > Thanks and Regards,
> > Bhavesh Shah
> >
> >
> >
> >
> > On Fri, Feb 3, 2012 at 1:34 PM, alo alt <wg...@googlemail.com>
> wrote:
> > 0 records exported, so the table will not be created since there is
> no data. Also check the file:
> > > /java.io.IOException: Destination
> '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> >
> > sqoop will move it, but it still exists.
> >
> > - Alex
> >
> > --
> > Alexander Lorenz
> > http://mapredit.blogspot.com
> >
> > On Feb 3, 2012, at 6:22 AM, Bhavesh Shah wrote:
> >
> > >
> > >
> > > ---------- Forwarded message ----------
> > > From: Bhavesh Shah <bh...@gmail.com>
> > > Date: Fri, Feb 3, 2012 at 10:38 AM
> > > Subject: Re: Table not creating in hive
> > > To: dev@hive.apache.org, sqoop-user@incubator.apache.org
> > >
> > >
> > > Hello Bejoy & Alexis,
> > > Thanks for your reply.
> > > I am using MySQL as the database (and not Derby).
> > > Previously I was using --split-by 1 and it was working fine, but when I
> installed MySQL and changed the database I got an error for the --split-by
> option, and that is why I use -m 1.
> > > But again, because of that, it shows that 0 records are retrieved.
> > >
> > > Here are the logs.
> > > hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect
> 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest'
> --table Appointment --hive-table appointment -m 1 --hive-import --verbose
> > >
> > > 12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific
> delimiters for output. You can override
> > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with
> --fields-terminated-by, etc.
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by
> /home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
> com.cloudera.sqoop.manager.DefaultManagerFactory
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory:
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > 12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using
> Microsoft's SQL Server - Hadoop Connector
> > > 12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of
> 1000
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager
> com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
> > > 12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
> > > 12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters
> specified. Using regular API for making connection.
> > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1,
> ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1,
> RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1,
> Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12,
> ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93,
> ModifiedByUid:1, SingleDayAppointmentGroupUid:1,
> MultiDayAppointmentGroupUid:1,
> > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is
> Appointment.java
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > 12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is
> /home/hadoop/hadoop-0.20.2-cdh3u2
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with
> args:
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
> > > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3
u0//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > 12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
> > > java.io.IOException: Destination
> '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> > >     at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
> > >     at
> com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
> > >     at
> com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
> > >     at
> com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
> > >     at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
> > >     at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
> > >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> > >     at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
> > >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
> > >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
> > >     at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
> > > 12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class
> files in directory:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
> > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class
> -> Appointment.class
> > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar
> file
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > > 12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of
> Appointment
> > > 12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > 12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table
> class: Appointment
> > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using
> InputFormat: class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
> > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
> > > 12/01/31 22:33:43 INFO mapred.JobClient: Running job:
> job_201201311414_0051
> > > 12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
> > > 12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
> > > 12/01/31 22:33:48 INFO mapred.JobClient: Job complete:
> job_201201311414_0051
> > > 12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
> maps waiting after reserving slots (ms)=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
> > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
> 6.2606 seconds (0 bytes/sec)
> > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> > > 12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from
> import process: Appointment/_logs
> > > 12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters
> specified. Using regular API for making connection.
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be
> cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be
> cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to
> be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to
> be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE
> TABLE IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID`
> STRING, `PatientUid` STRING, `StartTime` STRING, `EndTime` STRING,`Note`
> STRING, `AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING,
> `CheckOutNote` STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT
> 'Imported by sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS
> TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
> > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA
> INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE
> `appointment`
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
> > > 12/01/31 22:33:50 INFO hive.HiveImport: Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
> > > 12/01/31 22:33:52 INFO hive.HiveImport: OK
> > > 12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table
> default.appointment
> > > 12/01/31 22:33:53 INFO hive.HiveImport: OK
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Thanks and Regards,
> > > Bhavesh Shah
> > >
> > >
> > >
> > > On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <
> alexisdct@gmail.com> wrote:
> > > This is because you need the metastore.
> > > If you haven't installed it in a database,
> > > it is installed with Derby in the directory from which
> > > you accessed Hive; remember where that was.
> > > There you should find the directory named _metastore,
> > > and you should access Hive from that directory.
> > >
> > > Regards.
> > >
> > > On 2 February 2012 at 05:46, Bhavesh Shah <bhavesh25shah@gmail.com> wrote:
> > >
> > > > Hello all,
> > > >
> > > > After successfully importing the tables in hive I am not able to see
> the
> > > > table in Hive.
> > > > When I imported the table I saw the dir on HDFS (under
> > > > /user/hive/warehouse/) but when I execute command in Hive "SHOW
> TABLES"
> > > > the table is not in the list.
> > > >
> > > > I find a lot about it but not getting anything.
> > > > Pls suggest me some solution for it.
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks and Regards,
> > > > Bhavesh Shah
> > > >
> > >
> > >
> > >
> > > --
> > > Ing. Alexis de la Cruz Toledo.
> > > *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco.
> México,
> > > D.F, 07360 *
> > > *CINVESTAV, DF.*
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Regards,
> > > Bhavesh Shah
> > >
> >
> >
> >
> >
> > -
>
>

Re: Table not creating in hive

Posted by alo alt <wg...@googlemail.com>.
check the hdfs for the rights:
hadoop dfs -ls /path/

the config looks okay, so I assume that some tables were created in Hue with other rights (rw for the hue user, r for all others). You can check that with -ls or in the WebUI -> browse filesystem -> click through /hive/warehouse

use the same user for import, operations and hue. Or enable kerberos auth ;)
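
For example, a quick way to compare owners might look like this; the warehouse path, table name, and user/group are only placeholders for whatever -ls reports on your cluster:

hadoop dfs -ls /user/hive/warehouse
hadoop dfs -ls /user/hive/warehouse/appointment
# if a table directory belongs to another user (e.g. hue), hand it back to the
# user that runs sqoop and the hive CLI:
hadoop dfs -chown -R hadoop:supergroup /user/hive/warehouse/appointment
hadoop dfs -chmod -R 775 /user/hive/warehouse/appointment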

best,
 Alex 

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 3, 2012, at 9:09 AM, Bhavesh Shah wrote:

> Hello Alex,
> Thanks for your reply.
> I have observed that this is happening with some tables only: some tables import with complete data while some do not.
> But the issue is that, whether the import completes or not, their entry is not listed by the "SHOW TABLES" command.
> 
> I do not understand why this is happening.
> Is there any problem in the configuration?
> 
> 
> 
> - 
> Thanks and Regards,
> Bhavesh Shah
> 
> 
> 
> 
> On Fri, Feb 3, 2012 at 1:34 PM, alo alt <wg...@googlemail.com> wrote:
> 0 records exported, so the table will not be created since there is no data. Also check the file:
> > /java.io.IOException: Destination '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> 
> sqoop will move it, but it still exists.
> 
> - Alex
> 
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
> 
> On Feb 3, 2012, at 6:22 AM, Bhavesh Shah wrote:
> 
> >
> >
> > ---------- Forwarded message ----------
> > From: Bhavesh Shah <bh...@gmail.com>
> > Date: Fri, Feb 3, 2012 at 10:38 AM
> > Subject: Re: Table not creating in hive
> > To: dev@hive.apache.org, sqoop-user@incubator.apache.org
> >
> >
> > Hello Bejoy & Alexis,
> > Thanks for your reply.
> > I am using MySQL as the database (and not Derby).
> > Previously I was using --split-by 1 and it was working fine, but when I installed MySQL and changed the database I got an error for the --split-by option, and that is why I use -m 1.
> > But again, because of that, it shows that 0 records are retrieved.
> >
> > Here are the logs.
> > hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest' --table Appointment --hive-table appointment -m 1 --hive-import --verbose
> >
> > 12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
> > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by /home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > 12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using Microsoft's SQL Server - Hadoop Connector
> > 12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of 1000
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
> > 12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
> > 12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1, ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1, RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1, Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12, ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93, ModifiedByUid:1, SingleDayAppointmentGroupUid:1, MultiDayAppointmentGroupUid:1,
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is Appointment.java
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > 12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is /home/hadoop/hadoop-0.20.2-cdh3u2
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with args:
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hb
ase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > 12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
> > java.io.IOException: Destination '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> >     at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
> >     at com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
> >     at com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
> >     at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
> >     at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
> >     at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >     at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
> >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
> >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
> >     at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
> > 12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class files in directory: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
> > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class -> Appointment.class
> > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar file /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > 12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of Appointment
> > 12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > 12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table class: Appointment
> > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
> > 12/01/31 22:33:43 INFO mapred.JobClient: Running job: job_201201311414_0051
> > 12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
> > 12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
> > 12/01/31 22:33:48 INFO mapred.JobClient: Job complete: job_201201311414_0051
> > 12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
> > 12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
> > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
> > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
> > 12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
> > 12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
> > 12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
> > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 6.2606 seconds (0 bytes/sec)
> > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> > 12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from import process: Appointment/_logs
> > 12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
> > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
> > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
> > 12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be cast to a less precise type in Hive
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be cast to a less precise type in Hive
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to be cast to a less precise type in Hive
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to be cast to a less precise type in Hive
> > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID` STRING, `PatientUid` STRING, `StartTime` STRING, `EndTime` STRING,`Note` STRING, `AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING, `CheckOutNote` STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT 'Imported by sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
> > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE `appointment`
> > 12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
> > 12/01/31 22:33:50 INFO hive.HiveImport: Hive history file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
> > 12/01/31 22:33:52 INFO hive.HiveImport: OK
> > 12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
> > 12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table default.appointment
> > 12/01/31 22:33:53 INFO hive.HiveImport: OK
> > 12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
> > 12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
> >
> >
> >
> >
> >
> > --
> > Thanks and Regards,
> > Bhavesh Shah
> >
> >
> >
> > On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <al...@gmail.com> wrote:
> > This is because you need the metastore.
> > If you have not configured it against a database,
> > it is installed with Derby in the directory from which
> > you accessed Hive; remember where that was.
> > There you should find the directory named _metastore,
> > and you should access Hive from that same directory.
> >
> > Regards.
> >
> > El 2 de febrero de 2012 05:46, Bhavesh Shah <bh...@gmail.com>escribió:
> >
> > > Hello all,
> > >
> > > After successfully importing the tables in hive I am not able to see the
> > > table in Hive.
> > > When I imported the table I saw the dir on HDFS (under
> > > /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
> > > the table is not in the list.
> > >
> > > I find a lot about it but not getting anything.
> > > Pls suggest me some solution for it.
> > >
> > >
> > >
> > >
> > > --
> > > Thanks and Regards,
> > > Bhavesh Shah
> > >
> >
> >
> >
> > --
> > Ing. Alexis de la Cruz Toledo.
> > *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco. México,
> > D.F, 07360 *
> > *CINVESTAV, DF.*
> >
> >
> >
> >
> >
> >
> > --
> > Regards,
> > Bhavesh Shah
> >
> 
> 
> 
> 
> -


Re: Table not creating in hive

Posted by Bhavesh Shah <bh...@gmail.com>.
Hello Alex,
Thanks for your reply.
I have observed that this is happening with some tables only.
Some tables import with the complete data, while others do not.
But the issue is that, whether the import completes or not, their entries are not
listed by the "SHOW TABLES" command.

I am not able to figure out why this is happening.
Is there any problem in the configuration?
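
One way to check whether this is a configuration problem (assuming only the
standard Hive metastore property; nothing below is specific to a particular
setup) is to confirm that the session running SHOW TABLES points at the same
metastore the Sqoop import wrote to. With an embedded Derby metastore the
metastore_db directory is created relative to the current working directory,
so starting hive from a different directory shows an empty table list.

    # print the metastore JDBC URL the current CLI session resolved
    hive -e "set javax.jdo.option.ConnectionURL;"

    # list tables as the same OS user, from the same directory, that ran the import
    hive -e "SHOW TABLES;"

If the two sessions print different ConnectionURL values, they are using
different metastores, which would explain the missing table.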



-
Thanks and Regards,
Bhavesh Shah




On Fri, Feb 3, 2012 at 1:34 PM, alo alt <wg...@googlemail.com> wrote:

> 0 records were retrieved, so the table will not be created since it has no
> data. Also check the file:
> > /java.io.IOException: Destination
> '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
>
> Sqoop tries to move it there, but a copy from an earlier run still exists.
>
> - Alex
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On Feb 3, 2012, at 6:22 AM, Bhavesh Shah wrote:
>
> >
> >
> > ---------- Forwarded message ----------
> > From: Bhavesh Shah <bh...@gmail.com>
> > Date: Fri, Feb 3, 2012 at 10:38 AM
> > Subject: Re: Table not creating in hive
> > To: dev@hive.apache.org, sqoop-user@incubator.apache.org
> >
> >
> > Hello Bejoy & Alexis,
> > Thanks for your reply.
> > I am using MySQL as the database (not Derby).
> > Previously I was using --split-by 1 and it was working fine, but when I
> > installed MySQL and changed the database I got an error for the --split-by
> > option, and that is why I use -m 1.
> > But again, because of that, it shows that 0 records were retrieved.
> >
> > Here are the logs.
> > hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect
> 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest'
> --table Appointment --hive-table appointment -m 1 --hive-import --verbose
> >
> > 12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific
> delimiters for output. You can override
> > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with
> --fields-terminated-by, etc.
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by
> /home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
> com.cloudera.sqoop.manager.DefaultManagerFactory
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory:
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > 12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using
> Microsoft's SQL Server - Hadoop Connector
> > 12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of
> 1000
> > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager
> com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
> > 12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
> > 12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters
> specified. Using regular API for making connection.
> > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1,
> ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1,
> RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1,
> Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12,
> ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93,
> ModifiedByUid:1, SingleDayAppointmentGroupUid:1,
> MultiDayAppointmentGroupUid:1,
> > 12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is
> Appointment.java
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > 12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is
> /home/hadoop/hadoop-0.20.2-cdh3u2
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with args:
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
> > 12/01/31 22:33:41 DEBUG orm.CompilationManager:
> /home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3
u0//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > 12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
> > java.io.IOException: Destination
> '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> >     at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
> >     at
> com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
> >     at
> com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
> >     at
> com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
> >     at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
> >     at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >     at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
> >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
> >     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
> >     at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
> > 12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class
> files in directory:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
> > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile:
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class
> -> Appointment.class
> > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar
> file
> /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> > 12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of
> Appointment
> > 12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > 12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table
> class: Appointment
> > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using
> InputFormat: class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
> > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
> > 12/01/31 22:33:43 INFO mapred.JobClient: Running job:
> job_201201311414_0051
> > 12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
> > 12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
> > 12/01/31 22:33:48 INFO mapred.JobClient: Job complete:
> job_201201311414_0051
> > 12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
> > 12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
> > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
> maps waiting after reserving slots (ms)=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
> > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
> > 12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
> > 12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
> > 12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
> > 12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
> > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
> 6.2606 seconds (0 bytes/sec)
> > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> > 12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from
> import process: Appointment/_logs
> > 12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
> > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
> > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
> > 12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters
> specified. Using regular API for making connection.
> > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next
> query: 1000
> > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement:
> SELECT TOP 1 * FROM [Appointment]
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be
> cast to a less precise type in Hive
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be
> cast to a less precise type in Hive
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to be
> cast to a less precise type in Hive
> > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to
> be cast to a less precise type in Hive
> > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE
> TABLE IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID`
> STRING, `PatientUid` STRING, `StartTime` STRING, `EndTime` STRING,`Note`
> STRING, `AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING,
> `CheckOutNote` STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT
> 'Imported by sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS
> TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
> > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA
> INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE
> `appointment`
> > 12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
> > 12/01/31 22:33:50 INFO hive.HiveImport: Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
> > 12/01/31 22:33:52 INFO hive.HiveImport: OK
> > 12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
> > 12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table
> default.appointment
> > 12/01/31 22:33:53 INFO hive.HiveImport: OK
> > 12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
> > 12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
> >
> >
> >
> >
> >
> > --
> > Thanks and Regards,
> > Bhavesh Shah
> >
> >
> >
> > On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <
> alexisdct@gmail.com> wrote:
> > This is because you need the metastore.
> > If you have not configured it against a database,
> > it is installed with Derby in the directory from which
> > you accessed Hive; remember where that was.
> > There you should find the directory named _metastore,
> > and you should access Hive from that same directory.
> >
> > Regards.
> >
> > El 2 de febrero de 2012 05:46, Bhavesh Shah <bhavesh25shah@gmail.com
> >escribió:
> >
> > > Hello all,
> > >
> > > After successfully importing the tables in hive I am not able to see
> the
> > > table in Hive.
> > > When I imported the table I saw the dir on HDFS (under
> > > /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
> > > the table is not in the list.
> > >
> > > I find a lot about it but not getting anything.
> > > Pls suggest me some solution for it.
> > >
> > >
> > >
> > >
> > > --
> > > Thanks and Regards,
> > > Bhavesh Shah
> > >
> >
> >
> >
> > --
> > Ing. Alexis de la Cruz Toledo.
> > *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco.
> México,
> > D.F, 07360 *
> > *CINVESTAV, DF.*
> >
> >
> >
> >
> >
> >
> > --
> > Regards,
> > Bhavesh Shah
> >
>
>


-

Re: Table not creating in hive

Posted by alo alt <wg...@googlemail.com>.
0 records were retrieved, so the table will not be created since it has no data. Also check the file:
> /java.io.IOException: Destination '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists

Sqoop tries to move it there, but a copy from an earlier run still exists.
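
A minimal way to clear that before re-running the import (the path comes from
the log above; --outdir and --bindir are standard Sqoop options for redirecting
generated code, shown here only as a sketch):

    # remove the stale generated source so the move can succeed
    rm /home/hadoop/sqoop-1.3.0-cdh3u1/bin/Appointment.java

    # or keep generated code out of the bin directory entirely
    sqoop import --connect '<jdbc-url>' --table Appointment \
        --outdir /tmp/sqoop-gen --bindir /tmp/sqoop-gen \
        --hive-import -m 1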

- Alex

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 3, 2012, at 6:22 AM, Bhavesh Shah wrote:

> 
> 
> ---------- Forwarded message ----------
> From: Bhavesh Shah <bh...@gmail.com>
> Date: Fri, Feb 3, 2012 at 10:38 AM
> Subject: Re: Table not creating in hive
> To: dev@hive.apache.org, sqoop-user@incubator.apache.org
> 
> 
> Hello Bejoy & Alexis,
> Thanks for your reply.
> I am using MySQL as the database (not Derby).
> Previously I was using --split-by 1 and it was working fine, but when I installed MySQL and changed the database I got an error for the --split-by option, and that is why I use -m 1.
> But again, because of that, it shows that 0 records were retrieved.
> 
> Here are the logs. 
> hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest' --table Appointment --hive-table appointment -m 1 --hive-import --verbose
> 
> 12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> 12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
> 12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
> 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by /home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
> 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
> 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> 12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using Microsoft's SQL Server - Hadoop Connector
> 12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of 1000
> 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
> 12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
> 12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment] 
> 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment] 
> 12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
> 12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
> 12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> 12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
> 12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1, ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1, RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1, Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12, ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93, ModifiedByUid:1, SingleDayAppointmentGroupUid:1, MultiDayAppointmentGroupUid:1, 
> 12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is Appointment.java
> 12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> 12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is /home/hadoop/hadoop-0.20.2-cdh3u2
> 12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
> 12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with args:
> 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
> 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
> 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
> 12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
> 12/01/31 22:33:41 DEBUG orm.CompilationManager:   /home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hbas
e-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> 12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
> java.io.IOException: Destination '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
>     at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
>     at com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
>     at com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
>     at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
>     at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
>     at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
>     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
>     at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
>     at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
> 12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> 12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class files in directory: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
> 12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class -> Appointment.class
> 12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar file /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
> 12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of Appointment
> 12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> 12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment] 
> 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table class: Appointment
> 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
> 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
> 12/01/31 22:33:43 INFO mapred.JobClient: Running job: job_201201311414_0051
> 12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
> 12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
> 12/01/31 22:33:48 INFO mapred.JobClient: Job complete: job_201201311414_0051
> 12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
> 12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters 
> 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
> 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> 12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
> 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
> 12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
> 12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
> 12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
> 12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
> 12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
> 12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
> 12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
> 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 6.2606 seconds (0 bytes/sec)
> 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> 12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from import process: Appointment/_logs
> 12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
> 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
> 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
> 12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment] 
> 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment] 
> 12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be cast to a less precise type in Hive
> 12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be cast to a less precise type in Hive
> 12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to be cast to a less precise type in Hive
> 12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to be cast to a less precise type in Hive
> 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID` STRING, `PatientUid` STRING, `StartTime` STRING, `EndTime` STRING,`Note` STRING, `AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING, `CheckOutNote` STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT 'Imported by sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
> 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE `appointment`
> 12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
> 12/01/31 22:33:50 INFO hive.HiveImport: Hive history file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
> 12/01/31 22:33:52 INFO hive.HiveImport: OK
> 12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
> 12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table default.appointment
> 12/01/31 22:33:53 INFO hive.HiveImport: OK
> 12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
> 12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
> 
> 
> 
> 
> 
> -- 
> Thanks and Regards,
> Bhavesh Shah
> 
> 
> 
> On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <al...@gmail.com> wrote:
> This is because you need the metastore.
> If you have not configured it against a database,
> it is installed with Derby in the directory from which
> you accessed Hive; remember where that was.
> There you should find the directory named _metastore,
> and you should access Hive from that same directory.
> 
> Regards.
> 
> El 2 de febrero de 2012 05:46, Bhavesh Shah <bh...@gmail.com>escribió:
> 
> > Hello all,
> >
> > After successfully importing the tables in hive I am not able to see the
> > table in Hive.
> > When I imported the table I saw the dir on HDFS (under
> > /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
> > the table is not in the list.
> >
> > I find a lot about it but not getting anything.
> > Pls suggest me some solution for it.
> >
> >
> >
> >
> > --
> > Thanks and Regards,
> > Bhavesh Shah
> >
> 
> 
> 
> --
> Ing. Alexis de la Cruz Toledo.
> *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco. México,
> D.F, 07360 *
> *CINVESTAV, DF.*
> 
> 
> 
> 
> 
> 
> -- 
> Regards,
> Bhavesh Shah
> 


Fwd: Table not creating in hive

Posted by Bhavesh Shah <bh...@gmail.com>.
---------- Forwarded message ----------
From: Bhavesh Shah <bh...@gmail.com>
Date: Fri, Feb 3, 2012 at 10:38 AM
Subject: Re: Table not creating in hive
To: dev@hive.apache.org, sqoop-user@incubator.apache.org


Hello Bejoy & Alexis,
Thanks for your reply.
I am using MySQL as the database (not Derby).
Previously I was using --split-by 1 and it was working fine, but when I
installed MySQL and changed the database I got an error for the
--split-by option, which is why I use -m 1 now.
But even with that, the import shows that 0 records were retrieved.
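
(For comparison, a rough sketch of what the same import would look like with --split-by pointed at a real column instead of forcing a single mapper with -m 1. AppointmentUid is only a guess at a usable split column taken from the column list in the logs below, and the connection string is abbreviated, so treat this as an outline rather than a tested command:)

# Sketch only: replace <host>, <user> and <pass> with real values.
./sqoop-import \
  --connect 'jdbc:sqlserver://<host>;username=<user>;password=<pass>;database=FIGMDHadoopTest' \
  --table Appointment \
  --split-by AppointmentUid \
  --hive-import \
  --hive-table appointment \
  --verbose

A numeric, evenly distributed column usually splits better than a GUID-style string column, so a different column may be a better choice if one is available.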

Here are the logs.
hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect
'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest'
--table Appointment --hive-table appointment -m 1 --hive-import --verbose

12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific delimiters
for output. You can override
12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with
--fields-terminated-by, etc.
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory
com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by
/home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
com.cloudera.sqoop.manager.DefaultManagerFactory
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory:
com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using
Microsoft's SQL Server - Hadoop Connector
12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of 1000
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager
com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters
specified. Using regular API for making connection.
12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1,
ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1,
RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1,
Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12,
ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93,
ModifiedByUid:1, SingleDayAppointmentGroupUid:1,
MultiDayAppointmentGroupUid:1,
12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is Appointment.java
12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is
/home/hadoop/hadoop-0.20.2-cdh3u2
12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with args:
12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
12/01/31 22:33:41 DEBUG orm.CompilationManager:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
12/01/31 22:33:41 DEBUG orm.CompilationManager:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
12/01/31 22:33:41 DEBUG orm.CompilationManager:
/home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0
//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
java.io.IOException: Destination
'/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
    at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
    at
com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
    at com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
    at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
    at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
    at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class files
in directory: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class
-> Appointment.class
12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar file
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of
Appointment
12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table class:
Appointment
12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat:
class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
12/01/31 22:33:43 INFO mapred.JobClient: Running job: job_201201311414_0051
12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
12/01/31 22:33:48 INFO mapred.JobClient: Job complete: job_201201311414_0051
12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
6.2606 seconds (0 bytes/sec)
12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from
import process: Appointment/_logs
12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters
specified. Using regular API for making connection.
12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be cast
to a less precise type in Hive
12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be cast
to a less precise type in Hive
12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to be
cast to a less precise type in Hive
12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to be
cast to a less precise type in Hive
12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE
IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID` STRING,
`PatientUid` STRING, `StartTime` STRING, `EndTime` STRING,`Note` STRING,
`AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING, `CheckOutNote`
STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT 'Imported by
sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS TERMINATED BY
'\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA
INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE
`appointment`
12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
12/01/31 22:33:50 INFO hive.HiveImport: Hive history
file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
12/01/31 22:33:52 INFO hive.HiveImport: OK
12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table
default.appointment
12/01/31 22:33:53 INFO hive.HiveImport: OK
12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
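
(After a run like the one above, two quick checks narrow down where things stop; this assumes the default warehouse location mentioned earlier in the thread and that the hive CLI in this shell points at the same metastore the import used:)

# Did the LOAD DATA step move any files into the warehouse directory?
hadoop fs -ls /user/hive/warehouse/appointment
# Does this metastore know about the table at all?
hive -e 'SHOW TABLES; DESCRIBE appointment;'

If the directory contains data files but SHOW TABLES comes back empty, the CLI is almost certainly talking to a different metastore than the one the import wrote to.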





-- 
Thanks and Regards,
Bhavesh Shah



On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <
alexisdct@gmail.com> wrote:

> This is because you need the metastore.
> If you have not configured it in a database,
> it is created with embedded Derby in the directory from which
> you started Hive, so remember where that was.
> There you should find a directory named metastore_db,
> and you need to start Hive from that directory to see your tables.
>
> Regards.
>
> On February 2, 2012 at 05:46, Bhavesh Shah <bhavesh25shah@gmail.com> wrote:
>
> > Hello all,
> >
> > After successfully importing the tables in hive I am not able to see the
> > table in Hive.
> > When I imported the table I saw the dir on HDFS (under
> > /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
> > the table is not in the list.
> >
> > I find a lot about it but not getting anything.
> > Pls suggest me some solution for it.
> >
> >
> >
> >
> > --
> > Thanks and Regards,
> > Bhavesh Shah
> >
>
>
>
> --
> Ing. Alexis de la Cruz Toledo.
> *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco.
> México,
> D.F, 07360 *
> *CINVESTAV, DF.*
>






-- 
Regards,
Bhavesh Shah

Re: Table not creating in hive

Posted by Bhavesh Shah <bh...@gmail.com>.
Hello Bejoy & Alexis,
Thanks for your reply.
I am using MySQL as the database (not Derby).
Previously I was using --split-by 1 and it was working fine, but when I
installed MySQL and changed the database I got an error for the
--split-by option, which is why I use -m 1 now.
But even with that, the import shows that 0 records were retrieved.
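
(One way to rule out an empty source table is to run a count directly against SQL Server with the eval tool, sketched here with the same abbreviated connection string as the import command:)

# Sketch only: prints the result of the query without importing anything.
./sqoop-eval \
  --connect 'jdbc:sqlserver://<host>;username=<user>;password=<pass>;database=FIGMDHadoopTest' \
  --query 'SELECT COUNT(*) FROM Appointment'

If this returns a non-zero count while the import still retrieves 0 records, the problem is on the import side rather than in the source data.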

Here are the logs.
hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import  --connect
'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest'
--table Appointment --hive-table appointment -m 1 --hive-import --verbose

12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific delimiters
for output. You can override
12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with
--fields-terminated-by, etc.
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory
com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by
/home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory:
com.cloudera.sqoop.manager.DefaultManagerFactory
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory:
com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using
Microsoft's SQL Server - Hadoop Connector
12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of 1000
12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager
com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters
specified. Using regular API for making connection.
12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns:
12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   ExternalID
12/01/31 22:33:41 DEBUG orm.ClassWriter:   PatientUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   StartTime
12/01/31 22:33:41 DEBUG orm.ClassWriter:   EndTime
12/01/31 22:33:41 DEBUG orm.ClassWriter:   ResourceUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   Note
12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentTypeUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   AppointmentStatusUid
12/01/31 22:33:41 DEBUG orm.ClassWriter:   CheckOutNote
12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedDate
12/01/31 22:33:41 DEBUG orm.ClassWriter:   CreatedByUid
12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment
12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1,
ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1,
RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1,
Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12,
ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93,
ModifiedByUid:1, SingleDayAppointmentGroupUid:1,
MultiDayAppointmentGroupUid:1,
12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is Appointment.java
12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is
/home/hadoop/hadoop-0.20.2-cdh3u2
12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with args:
12/01/31 22:33:41 DEBUG orm.CompilationManager:   -sourcepath
12/01/31 22:33:41 DEBUG orm.CompilationManager:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
12/01/31 22:33:41 DEBUG orm.CompilationManager:   -d
12/01/31 22:33:41 DEBUG orm.CompilationManager:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/
12/01/31 22:33:41 DEBUG orm.CompilationManager:   -classpath
12/01/31 22:33:41 DEBUG orm.CompilationManager:
/home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26//lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsch-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy-java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0
//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-core-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sqoop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.java
to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java
java.io.IOException: Destination
'/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
    at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
    at
com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
    at com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
    at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
    at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
    at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237)
12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class files
in directory: /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff
12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile:
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.class
-> Appointment.class
12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar file
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.jar
12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of
Appointment
12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table class:
Appointment
12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat:
class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar
12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
12/01/31 22:33:43 INFO mapred.JobClient: Running job: job_201201311414_0051
12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
12/01/31 22:33:48 INFO mapred.JobClient: Job complete: job_201201311414_0051
12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
6.2606 seconds (0 bytes/sec)
12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from
import process: Appointment/_logs
12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters
specified. Using regular API for making connection.
12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT
TOP 1 * FROM [Appointment]
12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be cast
to a less precise type in Hive
12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be cast
to a less precise type in Hive
12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to be
cast to a less precise type in Hive
12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to be
cast to a less precise type in Hive
12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE
IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID` STRING,
`PatientUid` STRING, `StartTime` STRING, `EndTime` STRING,`Note` STRING,
`AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING, `CheckOutNote`
STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT 'Imported by
sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS TERMINATED BY
'\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA
INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE
`appointment`
12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
12/01/31 22:33:50 INFO hive.HiveImport: Hive history
file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
12/01/31 22:33:52 INFO hive.HiveImport: OK
12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table
default.appointment
12/01/31 22:33:53 INFO hive.HiveImport: OK
12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.




-- 
Thanks and Regards,
Bhavesh Shah



On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <
alexisdct@gmail.com> wrote:

> This is because you need the metastore.
> If you have not configured it in a database,
> it is created with embedded Derby in the directory from which
> you started Hive, so remember where that was.
> There you should find a directory named metastore_db,
> and you need to start Hive from that directory to see your tables.
>
> Regards.
>
> On February 2, 2012 at 05:46, Bhavesh Shah <bhavesh25shah@gmail.com> wrote:
>
> > Hello all,
> >
> > After successfully importing the tables in hive I am not able to see the
> > table in Hive.
> > When I imported the table I saw the dir on HDFS (under
> > /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
> > the table is not in the list.
> >
> > I find a lot about it but not getting anything.
> > Pls suggest me some solution for it.
> >
> >
> >
> >
> > --
> > Thanks and Regards,
> > Bhavesh Shah
> >
>
>
>
> --
> Ing. Alexis de la Cruz Toledo.
> *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco.
> México,
> D.F, 07360 *
> *CINVESTAV, DF.*
>

Re: Table not creating in hive

Posted by Alexis De La Cruz Toledo <al...@gmail.com>.
This is because you need the metastore.
If you have not configured it in a database,
it is created with embedded Derby in the directory from which
you started Hive, so remember where that was.
There you should find a directory named metastore_db,
and you need to start Hive from that directory to see your tables.
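
(Roughly along these lines; the second path is a placeholder for wherever your metastore_db directory actually sits:)

# Look for embedded Derby metastores left behind by earlier Hive sessions.
find ~ -maxdepth 4 -type d -name metastore_db 2>/dev/null
# Start Hive from the directory that contains the metastore_db used during the import.
cd /path/containing/metastore_db && hive -e 'SHOW TABLES;'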

Regards.

On February 2, 2012 at 05:46, Bhavesh Shah <bh...@gmail.com> wrote:

> Hello all,
>
> After successfully importing the tables in hive I am not able to see the
> table in Hive.
> When I imported the table I saw the dir on HDFS (under
> /user/hive/warehouse/) but when I execute command in Hive "SHOW TABLES"
> the table is not in the list.
>
> I find a lot about it but not getting anything.
> Pls suggest me some solution for it.
>
>
>
>
> --
> Thanks and Regards,
> Bhavesh Shah
>



-- 
Ing. Alexis de la Cruz Toledo.
*Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco. México,
D.F, 07360 *
*CINVESTAV, DF.*