Posted to user@metron.apache.org by Athul Parambath <at...@gmail.com> on 2019/03/29 14:38:08 UTC

Unable to load Custom Stellar functions from HDFS

Hi Team,

 

We have an HCP cluster installed along with HDP; here are the stack versions:

Ambari-2.6.2.2
HDP-2.6.5.0
HCP-1.8.0.0 (which includes Apache Metron 0.7.0)
 

We are using custom Stellar functions while parsing data. At present we have copied our custom Stellar function JAR to an HDFS location and specified that location in global.json (/usr/hcp/1.8.0.0-58/metron/config/zookeeper/global.json). We have HA enabled for the NameNode service and would like to use the DFS nameservice name to access the file from HDFS. At present our DFS nameservice name is set to "TTNNHA", and I am able to access the Stellar function JAR using the nameservice name (i.e., hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar). However, if I use the same name in the global.json file, I get the error below:

    java.lang.IllegalArgumentException: java.net.UnknownHostException: ttnnha

Caused by: java.net.UnknownHostException: ttnnha

I am not sure what went wrong here; I can see from the error message that the DFS name appears in lowercase, whereas in my configuration it is in uppercase.
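For reference, the relevant piece of our global.json looks roughly like the sketch below. We point Metron's stellar.function.paths global property at the JAR; the snippet is only an illustration of the setup described above, not the exact file we use:

    {
      "stellar.function.paths": "hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar"
    }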

 

I have also tried giving my two NameNode hostnames as a comma-separated list in global.json, but got an error because the file cannot be read via the standby NameNode hostname.

 

I have also tried serving the JAR file through a web server and pointing global.json at the HTTP address; this time I did not get any error, but the custom functions were not loaded from that location.

 

Could someone please help me with this?

Re: Unable to load Custom Stellar functions from HDFS

Posted by James Sirota <js...@apache.org>.
Have you tried adding the IP instead of hostname? Does that work?
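For example, something along these lines (the IP and port below are placeholders, 8020 being only the default NameNode RPC port, and the property sketch is illustrative):

    "stellar.function.paths": "hdfs://<active-namenode-ip>:8020/apps/metron/stellar/custom-stellars-1.0.jar"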

  

  

08.04.2019, 21:35, "Athul Parambath" <at...@gmail.com>:

> Hi Michael,
>
> Thanks for your reply.
>
> Please find the attached global.json files.
>
> global.json - we have pointed to active namenode. It's working fine,
>
> global_with_HA_name - We used namenode HA name to access the HDFS location, and it throws an exception, details are added in my previous mail.
>
> global_with_http - we used HTTP to access the jar file, we are not getting an error, but the functions are not loading.
>
> On Mon, Apr 8, 2019 at 7:19 PM Michael Miklavcic <michael.miklavcic@gmail.com> wrote:
>
>> Hi Athul,
>>
>> Can you post your global.json?
>>
>> On Fri, Mar 29, 2019 at 8:38 AM Athul Parambath <athulpersonal@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> We have HCP cluster installed along with HDP and here is the stack versions:
>>>
>>> Ambari-2.6.2.2
>>> HDP-2.6.5.0
>>> HCP-1.8.0.0(Which includes Apache metron-0.7.0)
>>>
>>> We are using custom stellar function while parsing the data. At present we have copied our custom stellar function into an HDFS location and specified the location in global.json( /usr/hcp/1.8.0.0-58/metron/config/zookeeper/global.json). We have HA enabled for NameNode Service and we would like to give the dfs name service name to access the file from HDFS. At present, our dfs name service name is set to “TTNNHA and I am able to access the stellar function jar files using the dfs name service name(ie, hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar). However, if I gave the same name in the global.json file, I am getting below error:
>>>
>>>     Java.lang.IllegalArgumentExceptio: java.net.UnknownHostException: ttnnha
>>>
>>> Caused by: java.net.unknownHostException: ttnaha
>>>
>>> Not sure what went wrong here, I could understand from the error message that the dfs name is in lowercase where in my configuration it was in uppercase.
>>>
>>> I have tried to give my two name node hostname as an array(comma separated list) in the global.json and got an error as it cannot read the file using the standby name node hostname.
>>>
>>> I have also tried to export the jar file through a web server and pointed the HTTP address in the global.json file, this time I did not get any error however the custom functions were not loaded from the location.
>>>
>>> Could someone please help me with this?

  

  

-------------------

Thank you,

James Sirota

PMC- Apache Metron

jsirota AT apache DOT org

  


Re: Unable to load Custom Stellar functions from HDFS

Posted by Athul Parambath <at...@gmail.com>.
Hi Michael,
Thanks for your reply.
Please find the attached global.json files.
global.json - we have pointed to the active NameNode. It's working fine.
global_with_HA_name - we used the NameNode HA name to access the HDFS location, and it throws an exception; details are in my previous mail.
global_with_http - we used HTTP to access the JAR file; we are not getting an error, but the functions are not loading.
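In rough terms the three attachments differ only in where the same property points (sketches with placeholder hosts, not the literal files):

    global.json (active NameNode - works):
        "stellar.function.paths": "hdfs://<active-nn-host>:8020/apps/metron/stellar/custom-stellars-1.0.jar"

    global_with_HA_name (HA nameservice - UnknownHostException):
        "stellar.function.paths": "hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar"

    global_with_http (web server - no error, but functions not loaded):
        "stellar.function.paths": "http://<webserver-host>/stellar/custom-stellars-1.0.jar"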

On Mon, Apr 8, 2019 at 7:19 PM Michael Miklavcic <
michael.miklavcic@gmail.com> wrote:

> Hi Athul,
>
> Can you post your global.json?
>
> On Fri, Mar 29, 2019 at 8:38 AM Athul Parambath <at...@gmail.com>
> wrote:
>
>> Hi Team,
>>
>>
>>
>> We have HCP cluster installed along with HDP and here is the stack
>> versions:
>>
>> Ambari-2.6.2.2
>> HDP-2.6.5.0
>> HCP-1.8.0.0(Which includes Apache metron-0.7.0)
>>
>>
>> We are using custom stellar function while parsing the data. At present
>> we have copied our custom stellar function into an HDFS location and
>> specified the location in global.json(
>> /usr/hcp/1.8.0.0-58/metron/config/zookeeper/global.json). We have HA
>> enabled for NameNode Service and we would like to give the dfs name service
>> name to access the file from HDFS. At present, our dfs name service name is
>> set to “TTNNHA  and I am able to access the stellar function jar files
>> using the dfs name service name(ie,
>> hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar).  However, if I
>> gave the same name in the global.json file, I am getting below error:
>>
>>     Java.lang.IllegalArgumentExceptio: java.net.UnknownHostException:
>> ttnnha
>>
>> Caused by: java.net.unknownHostException: ttnaha
>>
>> Not sure what went wrong here, I could understand from the error message
>> that the dfs name is in lowercase where in my configuration it was in
>> uppercase.
>>
>>
>>
>> I have tried to give my two name node hostname as an array(comma
>> separated list) in the global.json and got an error as it cannot read the
>> file using the standby name node hostname.
>>
>>
>>
>> I have also tried to export the jar file through a web server and pointed
>> the HTTP address in the global.json file, this time I did not get any error
>> however the custom functions were not loaded from the location.
>>
>>
>>
>> Could someone please help me with this?
>>
>

Re: Unable to load Custom Stellar functions from HDFS

Posted by Michael Miklavcic <mi...@gmail.com>.
Hi Athul,

Can you post your global.json?

On Fri, Mar 29, 2019 at 8:38 AM Athul Parambath <at...@gmail.com>
wrote:

> Hi Team,
>
>
>
> We have HCP cluster installed along with HDP and here is the stack
> versions:
>
> Ambari-2.6.2.2
> HDP-2.6.5.0
> HCP-1.8.0.0(Which includes Apache metron-0.7.0)
>
>
> We are using custom stellar function while parsing the data. At present we
> have copied our custom stellar function into an HDFS location and specified
> the location in global.json(
> /usr/hcp/1.8.0.0-58/metron/config/zookeeper/global.json). We have HA
> enabled for NameNode Service and we would like to give the dfs name service
> name to access the file from HDFS. At present, our dfs name service name is
> set to “TTNNHA  and I am able to access the stellar function jar files
> using the dfs name service name(ie,
> hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar).  However, if I
> gave the same name in the global.json file, I am getting below error:
>
>     Java.lang.IllegalArgumentExceptio: java.net.UnknownHostException:
> ttnnha
>
> Caused by: java.net.unknownHostException: ttnaha
>
> Not sure what went wrong here, I could understand from the error message
> that the dfs name is in lowercase where in my configuration it was in
> uppercase.
>
>
>
> I have tried to give my two name node hostname as an array(comma separated
> list) in the global.json and got an error as it cannot read the file using
> the standby name node hostname.
>
>
>
> I have also tried to export the jar file through a web server and pointed
> the HTTP address in the global.json file, this time I did not get any error
> however the custom functions were not loaded from the location.
>
>
>
> Could someone please help me with this?
>