Posted to user@atlas.apache.org by Daniel Lee <da...@narvar.com> on 2017/10/05 21:52:48 UTC

Still failing to start up a properly working Atlas instance in standalone mode

Hey guys,

Still running into problems starting up a fully functional standalone-mode
Atlas instance. This is all on a Mac running 10.12.6. After borking out with the
lock error running against BerkeleyDB, I followed the instructions at

http://atlas.apache.org/InstallationSteps.html

to try the embedded-hbase-solr profile.

mvn clean package -Pdist,embedded-hbase-solr

works fine and even starts and runs a local instance during the testing
phases.

The line:

"Using the embedded-hbase-solr profile will configure Atlas so that an HBase
instance and a Solr instance will be started and stopped along with the
Atlas server by default."

implies I should be able to start the whole shebang with

bin/atlas_start.py

But I get some pretty ugly error messages in both application.log and *.out. I
won't post them all, but they should be easy to reproduce. The relevant portions of
application.log are:

2017-10-05 14:45:35,349 WARN  - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:1102)

java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

and

2017-10-05 14:45:52,059 WARN  - [main:] ~ hconnection-0x5e9f73b0x0, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) (ZKUtil:544)

org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)

All of which points to a failure to reach the ZooKeeper node. Do I need to start
up my own ZooKeeper instance locally?
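
A quick sanity check, assuming ZooKeeper's default client port 2181, is to see
whether anything is listening there at all:

nc -z localhost 2181 && echo "port 2181 open" || echo "nothing listening on 2181"
echo ruok | nc localhost 2181    # a running ZooKeeper should answer "imok"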

Thanks!

Daniel Lee

Re: Still failing to start up a properly working Atlas instance in standalone mode

Posted by Daniel Lee <da...@narvar.com>.
Ugh, spoke too soon.

It looks like Solr isn't set up correctly: v1 queries complete cleanly
(with empty results, since nothing has been added), but inserts and v2 queries
fail with a Solr configuration problem:

Caused by: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://10.0.1.44:8983/solr/vertex_index_shard1_replica1: Expected mime type application/octet-stream but got text/html.
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 500 {metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.core.SolrResourceNotFoundException},
msg=SolrCore 'vertex_index_shard1_replica1' is not available due to init failure: Could not load conf for core vertex_index_shard1_replica1: Error loading solr config from /Users/daniellee/Projects/apache-atlas-0.8.2-SNAPSHOT/solr/server/solr/vertex_index_shard1_replica1/conf/solrconfig.xml,
trace=org.apache.solr.common.SolrException: SolrCore 'vertex_index_shard1_replica1' is not available due to init failure: Could not load conf for core vertex_index_shard1_replica1: Error loading solr config from /Users/daniellee/Projects/apache-atlas-0.8.2-SNAPSHOT/solr/server/solr/vertex_index_shard1_replica1/conf/solrconfig.xml
        at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1066)
        at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:250)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:412)
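
The error points at a missing or broken conf directory for that core, which is easy
enough to confirm (path taken straight from the trace above):

ls /Users/daniellee/Projects/apache-atlas-0.8.2-SNAPSHOT/solr/server/solr/vertex_index_shard1_replica1/conf/
# should contain solrconfig.xml and the schema files if the core was set up correctly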

Sigh,

Daniel Lee


Re: Still failing to start up a properly working Atlas instance in standalone mode

Posted by Anthony Daniell <an...@gmail.com>.
Daniel,  
I did some more digging and discovered that I can get the Atlas server (with the UI at port 21000) to come up if I make a change to my Mac network preferences.  In particular, in the DNS Server section (Apple menu | System Preferences | Network | Wi-Fi | Advanced | DNS), I ended up removing the IP address of my local Wi-Fi router and replacing it with 8.8.8.8.  In fact, 8.8.8.8 is the only DNS entry.  After doing this, atlas_start.py works, and quick_start.py works.  You can log into the server at localhost:21000 using the default username and password, and also do curl-based queries.  The Atlas login screen appears, and then you go into the UI itself within the browser, as expected.
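
For example, a basic curl check against the running server (admin/admin are the usual shipped defaults; adjust if you have changed them):

curl -u admin:admin http://localhost:21000/api/atlas/admin/version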

The issue seems to be that the local router, acting as the DNS server, returns "bad" resolutions for names like localhost for the Java calls coming from Atlas.  As I mentioned earlier, for some reason the servers were being mapped to the router's IP instead of the local Mac's.  Not sure I fully understand why that would be the case, but for now this seems to make things work for Atlas.
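
If you want to check whether the same thing is happening on your machine, a couple of standard macOS commands show which resolver is in use and what the relevant names resolve to (the exact output will of course differ):

scutil --dns | grep "nameserver\["          # which DNS servers the resolver is actually using
dscacheutil -q host -a name $(hostname)     # what your own hostname resolves to
dscacheutil -q host -a name localhost       # localhost should come back as 127.0.0.1 / ::1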

I hope this provides some direction in your situation.
Thanks,
-Anthony



Re: Still failing to start up a properly working Atlas instance in standalone mode

Posted by Anthony Daniell <an...@gmail.com>.
Daniel,
Thanks for the additional steps you noted.  I think I found a possible direction.  It appears that "localhost" is not being resolved consistently (at least on my system).  The SOLR server is being mapped to the router's IP address instead of the Mac's address in some cases.  I manually specified the IP address using the -h option on the Solr start command, rather than relying on the default "localhost" interpretation.  Not sure where it picks that up.
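
For reference, the start command ends up looking something like this (the address and port here are placeholders for whatever your machine actually uses, not the values I typed):

solr/bin/solr start -h 192.168.1.10 -p 8983   # -h pins the hostname/IP Solr binds to and advertises
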
In any case, I was able to get the SOLR tutorial (Tutorial 1) running, so I think this might be helpful.
I hope it helps with your case.
Thanks,
-Anthony




Re: Still failing to start up a properly working Atlas instance in standalone mode

Posted by Daniel Lee <da...@narvar.com>.
Many thanks for the reply, Anthony. The variables you mention were set to true
in my settings for most of my failures.

In an attempt to get a stable version, I went to the 0.8 branch and tried
that, but ended up in the same place. I do have it running now and
accepting queries but had to do quite a bit of mucking around.

I ended up starting the embedded HBase and Solr by hand, doing the startup
manually via hbase/bin/start-hbase.sh and solr/bin/solr start. Adding
the vertex_index, edge_index, and fulltext_index collections manually seems to have
cleared the final blockage. I'll have to double-check my notes to make sure
that's all I did. However, import-hive.sh still borks out on me,
although that could be a problem with my Hive setup.
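
For the record, creating the three collections by hand looks roughly like the commands below, following the pattern in the Atlas installation page. The -d path (the Solr config directory shipped under the Atlas conf dir) and the single-shard, single-replica counts are assumptions for a one-node setup rather than the exact invocation I used:

solr/bin/solr create -c vertex_index   -d conf/solr -shards 1 -replicationFactor 1
solr/bin/solr create -c edge_index     -d conf/solr -shards 1 -replicationFactor 1
solr/bin/solr create -c fulltext_index -d conf/solr -shards 1 -replicationFactor 1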

Thanks again!

Daniel Lee


Re: Still failing to start up a properly working Atlas instance in standalone mode

Posted by Anthony Daniell <an...@gmail.com>.
Daniel/All,

Not sure if this is the full solution, but I found an interesting post that might push things along:  http://coheigea.blogspot.com/2017/04/securing-apache-hadoop-distributed-file_21.html
In particular, two additional environment variables need to be set to get local standalone versions of HBASE and SOLR started (I presume with their own ZooKeeper instances):
export MANAGE_LOCAL_HBASE=true
export MANAGE_LOCAL_SOLR=true

If you read through the atlas_start.py script, you can see that local mode should kick in when these are set to true.
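
A quick way to see where they are picked up (the bin/ and conf/ paths are a guess at the layout of the expanded distribution):

grep -rn "MANAGE_LOCAL" bin/ conf/
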
I hope this helps.
Thanks,
-Anthony




Re: Still failing to start up a properly working Atlas instance in standalone mode

Posted by Daniel Lee <da...@narvar.com>.
As a follow-on, I tried doing the following:

Uncommented

export HBASE_MANAGES_ZK=true

in hbase/conf/hbase-env.sh

and set

atlas.server.run.setup.on.start=true

in conf/atlas-application.properties
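
The same two edits as shell one-liners, in case that is easier to script. This assumes the commented-out line in hbase-env.sh matches the pattern below and that the property is not already present in atlas-application.properties; sed -i '' is the macOS form:

sed -i '' 's/^# *export HBASE_MANAGES_ZK=true/export HBASE_MANAGES_ZK=true/' hbase/conf/hbase-env.sh
echo 'atlas.server.run.setup.on.start=true' >> conf/atlas-application.properties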

Thanks

Daniel Lee
