Posted to user@hive.apache.org by Parag Sarda <PS...@walmartlabs.com> on 2013/02/11 23:59:44 UTC

How to load hive metadata from conf dir

Hello Hive Users, 

I am writing a program in Java which is bundled as a JAR and executed using
the hadoop jar command. I would like to access Hive metadata (read partition
information) in this program. I can ask the user to set the HIVE_CONF_DIR
environment variable before calling my program, or ask for any reasonable
parameters to be passed. If possible, I do not want to force the user to run
the Hive metastore service, to increase the reliability of my program by
avoiding external dependencies.

What is the recommended way to get partition information? Here is my
understanding:
1. Make sure my jar is bundled with the hive-metastore[1] library.
2. Use HiveMetaStoreClient[2].

Is this correct? If yes, how do I read the Hive configuration[3] from
HIVE_CONF_DIR?

[1] http://mvnrepository.com/artifact/org.apache.hive/hive-metastore
[2] http://hive.apache.org/docs/r0.7.1/api/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.html
[3] http://hive.apache.org/docs/r0.7.1/api/org/apache/hadoop/hive/conf/HiveConf.html

Thanks in advance, 
Parag


Re: How to load hive metadata from conf dir

Posted by Parag Sarda <PS...@walmartlabs.com>.
Hive-thrift is definitely the best option so far. That said, I am wondering
whether it is possible to load the metastore in local mode[1] to avoid the
dependency on an external service. Can I read HIVE_CONF_DIR for the
javax.jdo.option.* parameters and talk directly to the SQL server hosting the
Hive metadata?

[1] https://cwiki.apache.org/Hive/adminmanual-metastoreadmin.html
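As a rough sketch of that idea, the javax.jdo.option.* properties can be pulled
straight out of hive-site.xml under HIVE_CONF_DIR. This assumes the standard
hive-site.xml layout of <property><name>...</name><value>...</value></property>
blocks; it is an illustration, not a supported Hive API:

```python
import os
import xml.etree.ElementTree as ET

def jdo_options(conf_dir):
    """Return the javax.jdo.option.* properties from hive-site.xml in conf_dir."""
    tree = ET.parse(os.path.join(conf_dir, "hive-site.xml"))
    opts = {}
    for prop in tree.getroot().iter("property"):
        name = prop.findtext("name", "")
        if name.startswith("javax.jdo.option."):
            opts[name] = prop.findtext("value", "")
    return opts

# Hypothetical usage: read the JDBC connection settings for the metastore DB.
# opts = jdo_options(os.environ["HIVE_CONF_DIR"])
# url = opts.get("javax.jdo.option.ConnectionURL")
```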

Thanks, 
Parag






Re: How to load hive metadata from conf dir

Posted by Dean Wampler <de...@thinkbiganalytics.com>.
But then you're writing Java code!!! The Horror!!!

;^P




-- 
*Dean Wampler, Ph.D.*
thinkbiganalytics.com
+1-312-339-1330

Re: How to load hive metadata from conf dir

Posted by Edward Capriolo <ed...@gmail.com>.
If you use hive-thrift/hive-service you can get the location of a
table through the Table API (instead of Dean's horrid bash-isms)

http://hive.apache.org/docs/r0.7.0/api/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.Client.html#get_table(java.lang.String, java.lang.String)

Table t = ....
t.getSd().getLocation()



Re: How to load hive metadata from conf dir

Posted by Dean Wampler <de...@thinkbiganalytics.com>.
I'll mention another bash hack that I use all the time:

hive -e 'some_command' | grep for_what_i_want |
sed_command_to_remove_what_i_dont_want

For example, the following command will print just the value of
hive.metastore.warehouse.dir, sending all the logging junk written to
stderr to /dev/null and stripping off the leading
"hive.metastore.warehouse.dir=" from the stdout output:

hive -e 'set hive.metastore.warehouse.dir;' 2> /dev/null | sed -e
's/hive.metastore.warehouse.dir=//'

(No grep subcommand required in this case...)

You could do something similar with DESCRIBE EXTENDED table PARTITION(...).
Suppose you want a script that works for any property. Put the following in
a script file, say hive-prop.sh:

#!/bin/sh
hive -e "set $1;" 2> /dev/null | sed -e "s/$1=//"

Make it executable (chmod +x /path/to/hive-prop.sh), then run it this way:

/path/to/hive-prop.sh hive.metastore.warehouse.dir

Back to asking for metadata for a table. The following script will
determine the location of a particular partition of an external
"mydatabase.stocks" table:

#!/bin/sh
hive -e "describe formatted mydatabase.stocks partition(exchange='NASDAQ',
symbol='AAPL');" 2> /dev/null | grep Location | sed -e "s/Location:[ \t]*//"
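The grep/sed extraction above can also be done as a small Python helper, which
is easier to extend if more fields are needed. This is a sketch assuming the
"Location:" line format shown above; in practice the input text would come
from running the hive CLI via subprocess:

```python
import re

def extract_location(describe_output):
    """Pull the partition location out of `describe formatted` output,
    mirroring the `grep Location | sed` pipeline above."""
    for line in describe_output.splitlines():
        m = re.match(r"Location:\s*(\S+)", line.strip())
        if m:
            return m.group(1)
    return None

# Hypothetical usage, feeding it the hive CLI's stdout:
# out = subprocess.run(["hive", "-e", query], capture_output=True, text=True).stdout
# print(extract_location(out))
```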

dean



-- 
*Dean Wampler, Ph.D.*
thinkbiganalytics.com
+1-312-339-1330

Re: How to load hive metadata from conf dir

Posted by Parag Sarda <PS...@walmartlabs.com>.
Looks like I am not doing a good job of explaining my requirements.

My program is like a workflow engine: it reads a script/configuration file, and only after reading that file does it know which metadata to read from Hive. For example, here is a simplified version of a script file:

== Example input script ==
input: type=hive, db=test, table=sample, partitions=*
output: type=hive, db=test2, table=sample2, partitions=*
program: type=exec, command=run.sh
== END ==

Now, after reading this script file, my program would like to look up all partition information for the test.sample table in Hive.
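Parsing that script format is straightforward; as a sketch (assuming exactly
the "section: key=value, ..." layout shown above, with == marker lines):

```python
def parse_script(text):
    """Parse workflow-script lines like
    'input: type=hive, db=test, table=sample, partitions=*'
    into (section, {key: value}) pairs, skipping blanks and == markers."""
    steps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("=="):
            continue
        section, _, rest = line.partition(":")
        params = dict(kv.strip().split("=", 1) for kv in rest.split(","))
        steps.append((section.strip(), params))
    return steps
```

The hive step descriptors (db, table, partitions) parsed this way are then the
inputs one would hand to whatever metadata lookup the program ends up using.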


--
Parag


Re: How to load hive metadata from conf dir

Posted by Nitin Pawar <ni...@gmail.com>.
In our case we needed to access Hive metadata inside our Oozie workflows.

We were using HCatalog as our Hive metadata store, and it was easy to access
table metadata directly via the HCatalog APIs.

Parag, would it be possible for you to change your metadata store?
If not, you will need to write a node in your workflow which gets the
metadata and stores it for the next nodes in the workflow, as Mark said.



-- 
Nitin Pawar

Re: How to load hive metadata from conf dir

Posted by Parag Sarda <PS...@walmartlabs.com>.
Thanks Mark for your reply.

My program is like a workflow management application; it runs on a client machine, not on the Hadoop cluster. I use 'hadoop jar' so that my application has access to DFS and the Hadoop API. I would also like my application to have access to Hive metadata the same way it has access to DFS. Users can then write the rules for their workflows against Hive metadata.

Since the users of my application are already using Hive, I need to support Hive metadata and cannot ask them to move to HCatalog.

Thanks again,
Parag




Re: How to load hive metadata from conf dir

Posted by Mark Grover <gr...@gmail.com>.
Hi Parag,
I think your question boils down to:

How does one access Hive metadata from MapReduce jobs?

In the past, when I've had to write MR jobs that needed Hive metadata, I
ended up writing a wrapper Hive query that used a custom mapper and reducer
via Hive's transform functionality.

However, if you want to stick to an MR job, you seem to be along the right
lines.

Also, it seems that HCatalog's
(http://incubator.apache.org/hcatalog/docs/r0.4.0/) premise is to make
metadata access among Hive, Pig, and MR easier. Perhaps you want to take a
look at that and see if it fits your use case?

Mark
