Posted to user@accumulo.apache.org by "Xu (Simon) Chen" <xc...@gmail.com> on 2015/06/06 18:40:15 UTC

kerberos auth, getDelegationToken

Hi folks,

The latest kerberos doc seems to indicate that getDelegationToken can be
called without any parameters:
https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410

Yet the source code indicates a DelegationTokenConfig object must be passed
in:
https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359

Any ideas on how I should construct the DelegationTokenConfig object?

For context, I've been trying to get geomesa to work on my accumulo 1.7
with kerberos turned on. Right now, the code is somewhat tied to password
auth:
https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
My thought is that I should get a KerberosToken first, and then try to
generate a DelegationToken, which is passed back for later interactions
between geomesa and accumulo.
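The flow sketched above, in rough Java against the 1.7 client API. This is only a sketch: the principal, instance name, and zookeeper string are placeholders, and it assumes the process already holds valid Kerberos credentials (ticket cache or keytab).

```java
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;

public class DelegationTokenFlow {
  // Sketch: all string values below are placeholders, not real endpoints.
  public static AuthenticationToken obtainDelegationToken() throws Exception {
    // 1. Authenticate with the client's own Kerberos credentials.
    KerberosToken krbToken = new KerberosToken("user@EXAMPLE.COM");
    Connector conn = new ZooKeeperInstance("somecloud", "zoo1:2181")
        .getConnector("user@EXAMPLE.COM", krbToken);
    // 2. Ask Accumulo for a DelegationToken that can be handed off
    //    (e.g. to GeoMesa) for later interactions.
    return conn.securityOperations()
        .getDelegationToken(new DelegationTokenConfig());
  }
}
```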

Thanks.
-Simon

Re: kerberos auth, getDelegationToken

Posted by Christopher <ct...@apache.org>.
The only part of the plugin which would make sense to add to the
public API would just be the configuration parameters to the plugin
itself (what you'd set in the pom.xml or in system properties to
control the plugin, and not any of its code/implementation).

However, the way the plugin works is that it grabs the version of
Accumulo from your project's dependencies. So, we don't need to add
knobs to earlier releases in order to run it with earlier releases.
You could use a newer version of the plugin (which has any new knobs
added) to launch an older version of Accumulo (in theory, but I
haven't really put much effort into this, to be honest).

Many plugins work like this. For example, the maven-checkstyle-plugin
can execute many different versions of the checkstyle library (with
some configuration tweaking).

So, given that, I think we'd want to focus on adding the knobs only
to the latest major/minor version, and only fix bugs in earlier
releases of the plugin.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Sun, Jun 7, 2015 at 9:10 PM, Josh Elser <jo...@gmail.com> wrote:
> I don't think the accumulo-maven-plugin is in our definition of public API
> (although it really should be since it's primarily there for users...)
>
> So, probably, "it depends". This would be a good convo to have on dev@ to
> make sure the other devs see it. Would also be good to actually figure out
> what the scope of changes required would be (assuming you're already doing
> the leg-work to make it work w/ modifications on top of 1.7.0).
>
> tl;dr let's talk out specifics on dev@ :)
>
> James Hughes wrote:
>>
>> Thanks.  In terms of knobs for Kerberos, etc. would that be a small
>> enough change to make sense in the 1.7.x series?  If so, and if we end
>> up needing it, I'd definitely be interested in contributing it back.
>>
>> On Sun, Jun 7, 2015 at 4:36 PM, Josh Elser <josh.elser@gmail.com> wrote:
>>
>>     Check out the accumulo-maven-plugin:
>>
>>     https://accumulo.apache.org/release_notes/1.6.0.html#maven-plugin
>>
>>     This will make it pretty simple to start a single MAC, run your
>>     tests and then stop it. A word of caution: I don't think it's
>>     heavily used, nor do I think it supports configuring all the
>>     knobs you could use with MAC in your code (e.g. Kerberos, for one).
>>
>>     A full pom example can be seen at
>>
>> https://github.com/apache/accumulo/blob/1.7.0/maven-plugin/src/it/plugin-test/pom.xml
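For orientation, the plugin is wired into a pom with start/stop goals around the integration-test phase. The sketch below is a hedged condensation of the linked example pom; the parameter names (instanceName, rootPassword) are assumptions to be verified against that link:

```xml
<!-- Hedged sketch: condensed from the linked plugin-test pom; verify
     goal and parameter names against the example above. -->
<plugin>
  <groupId>org.apache.accumulo</groupId>
  <artifactId>accumulo-maven-plugin</artifactId>
  <version>1.7.0</version>
  <configuration>
    <instanceName>plugin-it-instance</instanceName>
    <rootPassword>ITSecret</rootPassword>
  </configuration>
  <executions>
    <execution>
      <id>run-plugin</id>
      <goals>
        <goal>start</goal> <!-- launches a MAC before integration tests -->
        <goal>stop</goal>  <!-- tears it down afterwards -->
      </goals>
    </execution>
  </executions>
</plugin>
```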
>>
>>     James Hughes wrote:
>>
>>         Josh,
>>
>>         Thanks.  That's more or less what I expected.
>>
>>         As we work to transition from Mock to MiniAccumulo, I'd want to
>>         change
>>         from spinning up lots of MockInstances to one MiniAccumulo.  To
>>         understand that pattern, do I basically just need to read
>>         through the test
>>         sub-module and the test/pom.xml?  Are there any other resources
>>         I should
>>         be checking out?
>>
>>         Cheers,
>>
>>         Jim
>>
>>         On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser
>>         <josh.elser@gmail.com> wrote:
>>
>>              MiniAccumulo, yes. MockAccumulo, no. In general, we've
>>              almost completely moved away from MockAccumulo. I wouldn't be
>>         surprised if
>>              it gets deprecated and removed soon.
>>
>>
>> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>>
>>              Apache Directory provides a MiniKdc that can be used easily
>> w/
>>              MiniAccumulo. Many of the integration tests have already been
>>              altered to support running w/ or w/o kerberos.
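For reference, a minimal MiniAccumuloCluster lifecycle (without the Kerberos/MiniKdc wiring that the linked KerberosIT adds) might look like the sketch below; the temp directory and root password are placeholders:

```java
import java.io.File;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.minicluster.MiniAccumuloCluster;
import org.apache.accumulo.minicluster.MiniAccumuloConfig;

public class MacExample {
  public static void main(String[] args) throws Exception {
    // Placeholder working directory; MAC writes its config and data here.
    File tmp = new File("/tmp/mac-test");
    tmp.mkdirs();
    MiniAccumuloCluster mac =
        new MiniAccumuloCluster(new MiniAccumuloConfig(tmp, "rootPassword"));
    mac.start();
    try {
      // Connect as root and do some work against the mini cluster.
      Connector conn = mac.getConnector("root", "rootPassword");
      conn.tableOperations().create("example");
    } finally {
      mac.stop();
    }
  }
}
```

This is the pattern the thread suggests for replacing many MockInstances with a single shared MAC: start it once per test suite rather than per test, since startup is comparatively expensive.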
>>
>>              James Hughes wrote:
>>
>>                  Hi all,
>>
>>                  For GeoMesa, stats writing is quite secondary and
>>         optional, so
>>                  we can
>>                  sort that out as a follow-on to seeing GeoMesa work with
>>                  Accumulo 1.7.
>>
>>                  I haven't had a chance to read in details yet, so
>>         forgive me if
>>                  this is
>>                  covered in the docs.  Does either Mock or MiniAccumulo
>>         provide
>>                  enough
>>                  hooks to test out Kerberos integration effectively?  I
>>         suppose I'm
>>                  really asking what kind of testing environment a
>>         project like
>>                  GeoMesa
>>                  would need to use to test out Accumulo 1.7.
>>
>>                  Even though MockAccumulo has a number of limitations,
>>         it is rather
>>                  effective in unit tests which can be part of a quick
>>         build.
>>
>>                  Thanks,
>>
>>                  Jim
>>
>>                  On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen
>>                  <xchenum@gmail.com> wrote:
>>
>>                       Nope, I am running the example as the readme file
>>                  suggested:
>>
>>                       java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>>                       org.geomesa.QuickStart -instanceId somecloud
>>                       -zookeepers "zoo1:2181,zoo2:2181,zoo3:2181"
>>                       -user someuser -password somepwd -tableName sometable
>>
>>                       I'll raise this question with the geomesa folks,
>>         but you're
>>                  right that
>>                       I can ignore it for now...
>>
>>                       Thanks!
>>                       -Simon
>>
>>
>>                       On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser
>>         <josh.elser@gmail.com> wrote:
>>          > Are you running it via `mvn exec:java` by chance or netbeans?
>>          >
>>          >
>>
>> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>>          >
>>          > If that's just a background thread writing in Stats, it might
>>                       just be a
>>          > factor of how you're invoking the program and you can ignore
>> it.
>>                       I don't
>>          > know enough about the inner-workings of GeoMesa to say one way
>> or
>>                       the other.
>>          >
>>          >
>>          > Xu (Simon) Chen wrote:
>>          >>
>>          >> Josh,
>>          >>
>>          >> Everything works well, except for one thing :-)
>>          >>
>>          >> I am running the geomesa-quickstart program, which ingests
>>          >> some data and then performs a simple query:
>>          >> https://github.com/geomesa/geomesa-quickstart
>>          >>
>>          >> For some reason, the following error is emitted consistently
>>                  at the
>>          >> end of the execution, after outputting the correct result:
>>          >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
>>          >> java.lang.InterruptedException
>>          >>          at java.lang.Object.wait(Native Method)
>>          >>          at java.lang.Object.wait(Object.java:503)
>>          >>          at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>>          >>          at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>>          >>          at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>>          >>          at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>>          >>          at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>>          >>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>>          >>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>>          >>          at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>>          >>          at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>>          >>          at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>>          >>          at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>>          >>          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>          >>          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>>          >>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>          >>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>          >>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>          >>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>          >>          at java.lang.Thread.run(Thread.java:745)
>>          >>
>>          >> This is more annoying than a real problem. I am new to both
>>                  accumulo
>>          >> and geomesa, but I am curious what the problem might be.
>>          >>
>>          >> Thanks!
>>          >> -Simon
>>          >>
>>          >>
>>          >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser
>>          >> <josh.elser@gmail.com> wrote:
>>         >  >>
>>         >  >> Great! Glad to hear it. Please let us know how it works out!
>>         >  >>
>>         >  >>
>>         >  >> Xu (Simon) Chen wrote:
>>         >  >>>
>>         >  >>> Josh,
>>         >  >>>
>>         >  >>> You're right again.. Thanks!
>>         >  >>>
>>         >  >>> My ansible play actually pushed client.conf to all the
>> server
>>                       config
>>         >  >>> directories, but didn't do anything for the clients, and
>>                  that's my
>>         >  >>> problem. Now kerberos is working great for me.
>>         >  >>>
>>         >  >>> Thanks again!
>>         >  >>> -Simon
>>         >  >>>
>>         >  >>> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser
>>         >  >>> <josh.elser@gmail.com> wrote:
>>          >>>>>
>>          >>>>> Simon,
>>          >>>>>
>>          >>>>> Did you create a client configuration file
>>                  (~/.accumulo/config or
>>          >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
>>                       Accumulo clients
>>          >>>>> to
>>          >>>>> actually use SASL when you're trying to use Kerberos
>>                       authentication.
>>          >>>>> Your
>>          >>>>> server is expecting that, but I would venture a guess that
>>                       your client
>>          >>>>> isn't.
>>          >>>>>
>>          >>>>> See
>>          >>>>>
>>          >>>>>
>>
>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
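As a concrete (hedged) illustration, the client configuration Josh describes boils down to a couple of properties. `kerberos.server.primary` is assumed here to match the first component of the server principal (e.g. accumulo/_HOST@REALM); check the linked manual section for the authoritative names:

```
# ~/.accumulo/config or $ACCUMULO_CONF_DIR/client.conf (sketch)
instance.rpc.sasl.enabled=true
kerberos.server.primary=accumulo
```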
>>          >>>>>
>>          >>>>>
>>          >>>>> Xu (Simon) Chen wrote:
>>          >>>>>>
>>          >>>>>> Josh,
>>          >>>>>>
>>          >>>>>> Thanks. It makes sense...
>>          >>>>>>
>>          >>>>>> I used a KerberosToken, but my program got stuck when
>>          >>>>>> running the following:
>>          >>>>>>
>>          >>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>>          >>>>>>
>>          >>>>>> It looks like my client is stuck here:
>>          >>>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>          >>>>>> failing in the receive part of
>>          >>>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>          >>>>>>
>>          >>>>>> On my tservers, I see the following:
>>          >>>>>>
>>          >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>>          >>>>>> occurred during processing of message.
>>          >>>>>> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>>          >>>>>>            at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>          >>>>>>            at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>          >>>>>>            at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>          >>>>>>            at java.security.AccessController.doPrivileged(Native Method)
>>          >>>>>>            at javax.security.auth.Subject.doAs(Subject.java:356)
>>          >>>>>>            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>          >>>>>>            at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>          >>>>>>            at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>          >>>>>>            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>          >>>>>>            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>          >>>>>>            at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>          >>>>>>            at java.lang.Thread.run(Thread.java:745)
>>          >>>>>> Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>>          >>>>>>            at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>          >>>>>>            at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>          >>>>>>            at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>          >>>>>>            at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>          >>>>>>            at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>          >>>>>>            at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>          >>>>>>            at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>          >>>>>>            ... 11 more
>>          >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>          >>>>>>            at java.net.SocketInputStream.socketRead0(Native Method)
>>          >>>>>>            at java.net.SocketInputStream.read(SocketInputStream.java:152)
>>          >>>>>>            at java.net.SocketInputStream.read(SocketInputStream.java:122)
>>          >>>>>>            at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>          >>>>>>            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>          >>>>>>            at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>          >>>>>>            ... 17 more
>>          >>>>>>
>>          >>>>>> Any ideas why?
>>          >>>>>>
>>          >>>>>> Thanks.
>>          >>>>>> -Simon
>>          >>>>>>
>>          >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser
>>          >>>>>> <josh.elser@gmail.com> wrote:
>>          >>>>>>>
>>          >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>>          >>>>>>>
>>          >>>>>>> <snip>
>>          >>>>>>> Obtain a delegation token by calling {@link
>>          >>>>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>          >>>>>>> </snip>
>>          >>>>>>>
>>          >>>>>>> You cannot create a usable DelegationToken as the client itself.
>>          >>>>>>>
>>          >>>>>>> Anyways, DelegationTokens are only relevant in cases
>> where
>>                       the client
>>          >>>>>>> Kerberos credentials are unavailable. The most common
>> case
>>                       is running
>>          >>>>>>> MapReduce jobs. If you are just interacting with Accumulo
>>                       through the
>>          >>>>>>> Java
>>          >>>>>>> API, the KerberosToken is all you need to use.
>>          >>>>>>>
>>          >>>>>>> The user-manual likely just needs to be updated. I
>>                  believe the
>>          >>>>>>> DelegationTokenConfig was added after I wrote the initial
>>          >>>>>>> documentation.
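To answer the original question directly: a default-constructed DelegationTokenConfig is sufficient. The sketch below shows both that and the optional lifetime setter; the setter's name and chaining behavior are assumptions from the 1.7 API and should be checked against the JavaDoc Josh links above.

```java
import java.util.concurrent.TimeUnit;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;

public class DelegationTokenConfigExample {
  // The no-arg config asks the server for its default token lifetime.
  static AuthenticationToken defaultToken(Connector conn) throws Exception {
    return conn.securityOperations()
        .getDelegationToken(new DelegationTokenConfig());
  }

  // Optionally request a shorter lifetime (capped by the server's maximum).
  static AuthenticationToken shortLivedToken(Connector conn) throws Exception {
    DelegationTokenConfig cfg = new DelegationTokenConfig();
    cfg.setTokenLifetime(1, TimeUnit.DAYS);
    return conn.securityOperations().getDelegationToken(cfg);
  }
}
```

The returned token would then be serialized into the MapReduce job configuration (the common delegation-token use case Josh describes) rather than used interactively.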
>>          >>>>>>>
>>          >>>>>>>
>>          >>>>>>> Xu (Simon) Chen wrote:
>>          >>>>>>>>
>>          >>>>>>>> Hi folks,
>>          >>>>>>>>
>>          >>>>>>>> The latest kerberos doc seems to indicate that
>>                       getDelegationToken
>>          >>>>>>>> can
>>          >>>>>>>> be
>>          >>>>>>>> called without any parameters:
>>          >>>>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>          >>>>>>>>
>>          >>>>>>>> Yet the source code indicates a DelegationTokenConfig
>>                       object must be
>>          >>>>>>>> passed in:
>>          >>>>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>          >>>>>>>>
>>          >>>>>>>> Any ideas on how I should construct the
>>                  DelegationTokenConfig
>>          >>>>>>>> object?
>>          >>>>>>>>
>>          >>>>>>>> For context, I've been trying to get geomesa to work on
>> my
>>                       accumulo
>>          >>>>>>>> 1.7
>>          >>>>>>>> with kerberos turned on. Right now, the code is somewhat
>>                       tied to
>>          >>>>>>>> password auth:
>>          >>>>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>          >>>>>>>> My thought is that I should get a KerberosToken first,
>> and
>>                       then try
>>          >>>>>>>> generate a DelegationToken, which is passed back for
>> later
>>          >>>>>>>> interactions
>>          >>>>>>>> between geomesa and accumulo.
>>          >>>>>>>>
>>          >>>>>>>> Thanks.
>>          >>>>>>>> -Simon
>>
>>
>>
>>
>

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
I don't think the accumulo-maven-plugin is in our definition of public 
API (although it really should be since it's primarily there for users...)

So, probably, "it depends". This would be a good convo to have on dev@ 
to make sure the other devs see it. Would also be good to actually 
figure out what the scope of changes required would be (assuming you're 
already doing the leg-work to make it work w/ modifications on top of 
1.7.0).

tl;dr let's talk out specifics on dev@ :)

James Hughes wrote:
> Thanks.  In terms of knobs for Kerberos, etc. would that be a small
> enough change to make sense in the 1.7.x series?  If so, and if we end
> up needing it, I'd definitely be interested in contributing it back.
>
> On Sun, Jun 7, 2015 at 4:36 PM, Josh Elser <josh.elser@gmail.com
> <ma...@gmail.com>> wrote:
>
>     Check out the accumulo-maven-plugin:
>
>     https://accumulo.apache.org/release_notes/1.6.0.html#maven-plugin
>
>     This will make it pretty simple to start a single MAC, run your
>     tests and then stop it. A word of caution, I don't think it's
>     heavily used, nor do I think it contains support for configuration
>     all the knobs you could using MAC in your code (e.g. Kerberos, for one).
>
>     A full pom example can be seen at
>     https://github.com/apache/accumulo/blob/1.7.0/maven-plugin/src/it/plugin-test/pom.xml
>
>     James Hughes wrote:
>
>         Josh,
>
>         Thanks.  That's more or less what I expected.
>
>         As we work to transition from Mock to MiniAccumulo, I'd want to
>         change
>         from spinning up lots of MockInstances to one MiniAccumulo.  To
>         understand that pattern, do I basically just need to read
>         through test
>         sub-module and the test/pom.xml?  Are there any other resources
>         I should
>         be checking out?
>
>         Cheers,
>
>         Jim
>
>         On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <josh.elser@gmail.com
>         <ma...@gmail.com>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>>> wrote:
>
>              MiniAccumulo, yes. MockAccumulo, no. In general, we've near
>              completely moved away from MockAccumulo. I wouldn't be
>         surprised if
>              it gets deprecated and removed soon.
>
>         https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>
>              Apache Directory provides a MiniKdc that can be used easily w/
>              MiniAccumulo. Many of the integration tests have already been
>              altered to support running w/ or w/o kerberos.
>
>              James Hughes wrote:
>
>                  Hi all,
>
>                  For GeoMesa, stats writing is quite secondary and
>         optional, so
>                  we can
>                  sort that out as a follow-on to seeing GeoMesa work with
>                  Accumulo 1.7.
>
>                  I haven't had a chance to read in details yet, so
>         forgive me if
>                  this is
>                  covered in the docs.  Does either Mock or MiniAccumulo
>         provide
>                  enough
>                  hooks to test out Kerberos integration effectively?  I
>         suppose I'm
>                  really asking what kind of testing environment a
>         project like
>                  GeoMesa
>                  would need to use to test out Accumulo 1.7.
>
>                  Even though MockAccumulo has a number of limitations,
>         it is rather
>                  effective in unit tests which can be part of a quick
>         build.
>
>                  Thanks,
>
>                  Jim
>
>                  On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen
>         <xchenum@gmail.com <ma...@gmail.com>
>         <mailto:xchenum@gmail.com <ma...@gmail.com>>
>         <mailto:xchenum@gmail.com <ma...@gmail.com>
>         <mailto:xchenum@gmail.com <ma...@gmail.com>>>> wrote:
>
>                       Nope, I am running the example as what the readme file
>                  suggested:
>
>                       java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>                       org.geomesa.QuickStart -instanceId somecloud
>         -zookeepers
>         "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>                       -tableName sometable
>
>                       I'll raise this question with the geomesa folks,
>         but you're
>                  right that
>                       I can ignore it for now...
>
>                       Thanks!
>                       -Simon
>
>
>                       On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser
>         <josh.elser@gmail.com <ma...@gmail.com>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>>>> wrote:
>          > Are you running it via `mvn exec:java` by chance or netbeans?
>          >
>          >
>         http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>          >
>          > If that's just a background thread writing in Stats, it might
>                       just be a
>          > factor of how you're invoking the program and you can ignore it.
>                       I don't
>          > know enough about the inner-workings of GeoMesa to say one way or
>                       the other.
>          >
>          >
>          > Xu (Simon) Chen wrote:
>          >>
>          >> Josh,
>          >>
>          >> Everything works well, except for one thing :-)
>          >>
>          >> I am running geomesa-quickstart program that ingest some data
>                       and then
>          >> perform a simple query:
>          >> https://github.com/geomesa/geomesa-quickstart
>          >>
>          >> For some reason, the following error is emitted consistently
>          >> at the end of the execution, after outputting the correct result:
>          >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
>          >> java.lang.InterruptedException
>          >>          at java.lang.Object.wait(Native Method)
>          >>          at java.lang.Object.wait(Object.java:503)
>          >>          at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>          >>          at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>          >>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>          >>          at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>          >>          at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>          >>          at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>          >>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>          >>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>          >>          at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>          >>          at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>          >>          at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>          >>          at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>          >>          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>          >>          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>          >>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>          >>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>          >>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          >>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          >>          at java.lang.Thread.run(Thread.java:745)
>          >>
>          >> This is more annoying than a real problem. I am new to both accumulo
>          >> and geomesa, but I am curious what the problem might be.
>          >>
>          >> Thanks!
>          >> -Simon
>          >>
>          >>
>          >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser <josh.elser@gmail.com> wrote:
>         >  >>
>         >  >> Great! Glad to hear it. Please let us know how it works out!
>         >  >>
>         >  >>
>         >  >> Xu (Simon) Chen wrote:
>         >  >>>
>         >  >>> Josh,
>         >  >>>
>         >  >>> You're right again.. Thanks!
>         >  >>>
>         >  >>> My ansible play actually pushed client.conf to all the server config
>         >  >>> directories, but didn't do anything for the clients, and that's my
>         >  >>> problem. Now kerberos is working great for me.
>         >  >>>
>         >  >>> Thanks again!
>         >  >>> -Simon
>         >  >>>
>         >  >>> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser <josh.elser@gmail.com> wrote:
>          >>>>>
>          >>>>> Simon,
>          >>>>>
>          >>>>> Did you create a client configuration file (~/.accumulo/config or
>          >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo
>          >>>>> clients to actually use SASL when you're trying to use Kerberos
>          >>>>> authentication. Your server is expecting that, but I would venture
>          >>>>> a guess that your client isn't.
>          >>>>>
>          >>>>> See
>          >>>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>          >>>>>
>          >>>>>
>          >>>>> Xu (Simon) Chen wrote:
>          >>>>>>
>          >>>>>> Josh,
>          >>>>>>
>          >>>>>> Thanks. It makes sense...
>          >>>>>>
>          >>>>>> I used a KerberosToken, but my program got stuck when running the
>          >>>>>> following:
>          >>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>          >>>>>>
>          >>>>>> It looks like my client is stuck here:
>          >>>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>          >>>>>> failing in the receive part of
>          >>>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>          >>>>>>
>          >>>>>> On my tservers, I see the following:
>          >>>>>>
>          >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>          >>>>>> occurred during processing of message.
>          >>>>>> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>          >>>>>>          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>          >>>>>>          at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>          >>>>>>          at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>          >>>>>>          at java.security.AccessController.doPrivileged(Native Method)
>          >>>>>>          at javax.security.auth.Subject.doAs(Subject.java:356)
>          >>>>>>          at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>          >>>>>>          at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>          >>>>>>          at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>          >>>>>>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          >>>>>>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          >>>>>>          at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>          >>>>>>          at java.lang.Thread.run(Thread.java:745)
>          >>>>>> Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>          >>>>>>          at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>          >>>>>>          at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>          >>>>>>          at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>          >>>>>>          at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>          >>>>>>          at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>          >>>>>>          at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>          >>>>>>          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>          >>>>>>          ... 11 more
>          >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>          >>>>>>          at java.net.SocketInputStream.socketRead0(Native Method)
>          >>>>>>          at java.net.SocketInputStream.read(SocketInputStream.java:152)
>          >>>>>>          at java.net.SocketInputStream.read(SocketInputStream.java:122)
>          >>>>>>          at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>          >>>>>>          at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>          >>>>>>          at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>          >>>>>>          ... 17 more
>         > >>>>>
>         > >>>>> Any ideas why?
>         > >>>>>
>         > >>>>> Thanks.
>         > >>>>> -Simon
>         > >>>>>
>         > >>>>>
>         > >>>>>
>         > >>>>>
>          >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser <josh.elser@gmail.com> wrote:
>          >>>>>>>
>          >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>          >>>>>>>
>          >>>>>>> <snip>
>          >>>>>>> Obtain a delegation token by calling {@link
>          >>>>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>          >>>>>>> </snip>
>          >>>>>>>
>          >>>>>>> You cannot create a usable DelegationToken as the client itself.
>          >>>>>>>
>          >>>>>>> Anyways, DelegationTokens are only relevant in cases where the
>          >>>>>>> client Kerberos credentials are unavailable. The most common case
>          >>>>>>> is running MapReduce jobs. If you are just interacting with
>          >>>>>>> Accumulo through the Java API, the KerberosToken is all you need
>          >>>>>>> to use.
>          >>>>>>>
>          >>>>>>> The user-manual likely just needs to be updated. I believe the
>          >>>>>>> DelegationTokenConfig was added after I wrote the initial
>          >>>>>>> documentation.
>          >>>>>>>
>          >>>>>>>
>          >>>>>>> Xu (Simon) Chen wrote:
>          >>>>>>>>
>          >>>>>>>> Hi folks,
>          >>>>>>>>
>          >>>>>>>> The latest kerberos doc seems to indicate that getDelegationToken
>          >>>>>>>> can be called without any parameters:
>          >>>>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>          >>>>>>>>
>          >>>>>>>> Yet the source code indicates a DelegationTokenConfig object
>          >>>>>>>> must be passed in:
>          >>>>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>          >>>>>>>>
>          >>>>>>>> Any ideas on how I should construct the DelegationTokenConfig
>          >>>>>>>> object?
>          >>>>>>>>
>          >>>>>>>> For context, I've been trying to get geomesa to work on my
>          >>>>>>>> accumulo 1.7 with kerberos turned on. Right now, the code is
>          >>>>>>>> somewhat tied to password auth:
>          >>>>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>          >>>>>>>> My thought is that I should get a KerberosToken first, and then
>          >>>>>>>> try generate a DelegationToken, which is passed back for later
>          >>>>>>>> interactions between geomesa and accumulo.
>          >>>>>>>>
>          >>>>>>>> Thanks.
>          >>>>>>>> -Simon
>
>
>
>

Re: kerberos auth, getDelegationToken

Posted by James Hughes <jn...@virginia.edu>.
Thanks.  In terms of knobs for Kerberos, etc., would that be a small enough
change to make sense in the 1.7.x series?  If so, and if we end up needing
it, I'd definitely be interested in contributing it back.

On Sun, Jun 7, 2015 at 4:36 PM, Josh Elser <jo...@gmail.com> wrote:

> Check out the accumulo-maven-plugin:
>
> https://accumulo.apache.org/release_notes/1.6.0.html#maven-plugin
>
> This will make it pretty simple to start a single MAC, run your tests and
> then stop it. A word of caution, I don't think it's heavily used, nor do I
> think it supports configuring all the knobs you could when using MAC in
> your code (e.g. Kerberos, for one).
>
> A full pom example can be seen at
> https://github.com/apache/accumulo/blob/1.7.0/maven-plugin/src/it/plugin-test/pom.xml
>
> James Hughes wrote:
>
>> Josh,
>>
>> Thanks.  That's more or less what I expected.
>>
>> As we work to transition from Mock to MiniAccumulo, I'd want to change
>> from spinning up lots of MockInstances to one MiniAccumulo.  To
>> understand that pattern, do I basically just need to read through test
>> sub-module and the test/pom.xml?  Are there any other resources I should
>> be checking out?
>>
>> Cheers,
>>
>> Jim
>>
>> On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <josh.elser@gmail.com> wrote:
>>
>>     MiniAccumulo, yes. MockAccumulo, no. In general, we've near
>>     completely moved away from MockAccumulo. I wouldn't be surprised if
>>     it gets deprecated and removed soon.
>>
>>
>> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>>
>>     Apache Directory provides a MiniKdc that can be used easily w/
>>     MiniAccumulo. Many of the integration tests have already been
>>     altered to support running w/ or w/o kerberos.
>>
>>     James Hughes wrote:
>>
>>         Hi all,
>>
>>         For GeoMesa, stats writing is quite secondary and optional, so we
>>         can sort that out as a follow-on to seeing GeoMesa work with
>>         Accumulo 1.7.
>>
>>         I haven't had a chance to read in details yet, so forgive me if
>>         this is covered in the docs.  Does either Mock or MiniAccumulo
>>         provide enough hooks to test out Kerberos integration effectively?
>>         I suppose I'm really asking what kind of testing environment a
>>         project like GeoMesa would need to use to test out Accumulo 1.7.
>>
>>         Even though MockAccumulo has a number of limitations, it is rather
>>         effective in unit tests which can be part of a quick build.
>>
>>         Thanks,
>>
>>         Jim
>>
>>         On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com> wrote:
>>
>>              Nope, I am running the example as what the readme file
>>         suggested:
>>
>>              java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>>              org.geomesa.QuickStart -instanceId somecloud -zookeepers
>>         "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>>              -tableName sometable
>>
>>              I'll raise this question with the geomesa folks, but you're
>>              right that I can ignore it for now...
>>
>>              Thanks!
>>              -Simon
>>
>>
>>              On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com> wrote:
>>          > Are you running it via `mvn exec:java` by chance or netbeans?
>>          >
>>          >
>>
>> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>>          >
>>          > If that's just a background thread writing in Stats, it might
>>          > just be a factor of how you're invoking the program and you can
>>          > ignore it. I don't know enough about the inner-workings of
>>          > GeoMesa to say one way or the other.
>>          >
>>          >
>>          > Xu (Simon) Chen wrote:
>>          >>
>>          >> Josh,
>>          >>
>>          >> Everything works well, except for one thing :-)
>>          >>
>>          >> I am running geomesa-quickstart program that ingest some data
>>              and then
>>          >> perform a simple query:
>>          >> https://github.com/geomesa/geomesa-quickstart
>>          >>
>>          >> For some reason, the following error is emitted consistently at
>>          >> the end of the execution, after outputting the correct result:
>>          >> [stack trace and older quoted messages elided; they are quoted
>>          >> in full earlier in this thread]
>>
>>
>>
>>

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
Check out the accumulo-maven-plugin:

https://accumulo.apache.org/release_notes/1.6.0.html#maven-plugin

This will make it pretty simple to start a single MAC, run your tests 
and then stop it. A word of caution: I don't think it's heavily used, 
nor do I think it supports configuring all the knobs you could when 
using MAC directly in your code (e.g. Kerberos, for one).

A full pom example can be seen at 
https://github.com/apache/accumulo/blob/1.7.0/maven-plugin/src/it/plugin-test/pom.xml
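For reference, wiring the plugin into a build looks roughly like the fragment below (adapted from the 1.6.0 release notes; the version, instance name, and password are placeholders you'd adjust for your project):

```xml
<plugin>
  <groupId>org.apache.accumulo</groupId>
  <artifactId>accumulo-maven-plugin</artifactId>
  <version>1.6.0</version>
  <configuration>
    <instanceName>plugin-it-instance</instanceName>
    <rootPassword>ITSecret</rootPassword>
  </configuration>
  <executions>
    <execution>
      <id>run-plugin</id>
      <goals>
        <!-- start MAC before integration tests, stop it afterwards -->
        <goal>start</goal>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The `start` and `stop` goals bind to the integration-test lifecycle, so your ITs see a live instance at the configured name/password.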

James Hughes wrote:
> Josh,
>
> Thanks.  That's more or less what I expected.
>
> As we work to transition from Mock to MiniAccumulo, I'd want to change
> from spinning up lots of MockInstances to one MiniAccumulo.  To
> understand that pattern, do I basically just need to read through test
> sub-module and the test/pom.xml?  Are there any other resources I should
> be checking out?
>
> Cheers,
>
> Jim
>
> On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <josh.elser@gmail.com
> <ma...@gmail.com>> wrote:
>
>     MiniAccumulo, yes. MockAccumulo, no. In general, we've near
>     completely moved away from MockAccumulo. I wouldn't be surprised if
>     it gets deprecated and removed soon.
>
>     https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>
>     Apache Directory provides a MiniKdc that can be used easily w/
>     MiniAccumulo. Many of the integration tests have already been
>     altered to support running w/ or w/o kerberos.
>
>     James Hughes wrote:
>
>         Hi all,
>
>         For GeoMesa, stats writing is quite secondary and optional, so
>         we can
>         sort that out as a follow-on to seeing GeoMesa work with
>         Accumulo 1.7.
>
>         I haven't had a chance to read in details yet, so forgive me if
>         this is
>         covered in the docs.  Does either Mock or MiniAccumulo provide
>         enough
>         hooks to test out Kerberos integration effectively?  I suppose I'm
>         really asking what kind of testing environment a project like
>         GeoMesa
>         would need to use to test out Accumulo 1.7.
>
>         Even though MockAccumulo has a number of limitations, it is rather
>         effective in unit tests which can be part of a quick  build.
>
>         Thanks,
>
>         Jim
>
>         On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen
>         <xchenum@gmail.com <ma...@gmail.com>
>         <mailto:xchenum@gmail.com <ma...@gmail.com>>> wrote:
>
>              Nope, I am running the example as what the readme file
>         suggested:
>
>              java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>              org.geomesa.QuickStart -instanceId somecloud -zookeepers
>         "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>              -tableName sometable
>
>              I'll raise this question with the geomesa folks, but you're
>         right that
>              I can ignore it for now...
>
>              Thanks!
>              -Simon
>
>
>              On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser
>         <josh.elser@gmail.com <ma...@gmail.com>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>>> wrote:
>          > Are you running it via `mvn exec:java` by chance or netbeans?
>          >
>          >
>         http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>          >
>          > If that's just a background thread writing in Stats, it might
>              just be a
>          > factor of how you're invoking the program and you can ignore it.
>              I don't
>          > know enough about the inner-workings of GeoMesa to say one way or
>              the other.
>          >
>          >
>          > Xu (Simon) Chen wrote:
>          >>
>          >> Josh,
>          >>
>          >> Everything works well, except for one thing :-)
>          >>
>          >> I am running geomesa-quickstart program that ingest some data
>              and then
>          >> perform a simple query:
>          >> https://github.com/geomesa/geomesa-quickstart
>          >>
>          >> For some reason, the following error is emitted consistently
>         at the
>          >> end of the execution, after outputting the correct result:
>          >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will
>              retry
>          >> java.lang.InterruptedException
>          >>          at java.lang.Object.wait(Native Method)
>          >>          at java.lang.Object.wait(Object.java:503)
>          >>          at
>          >>
>         org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>          >>          at
>              org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>          >>          at
>          >>
>         org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>          >>          at
>          >>
>         org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>          >>          at
>          >>
>         org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>          >>          at
>          >>
>         org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>          >>          at
>          >>
>
>         org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>          >>          at
>          >>
>
>         org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>          >>          at
>          >>
>
>         org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>          >>          at
>          >>
>
>         org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>          >>          at
>          >>
>
>         org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>          >>          at
>              scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>          >>          at
>          >>
>
>         org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>          >>          at
>          >>
>
>         org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>          >>          at
>          >>
>
>         org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>          >>          at
>          >>
>
>         java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>          >>          at
>          >> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>          >>          at
>          >>
>
>         java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>          >>          at
>          >>
>
>         java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>          >>          at
>          >>
>
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          >>          at
>          >>
>
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          >>          at java.lang.Thread.run(Thread.java:745)
>          >>
>          >> This is more annoying than a real problem. I am new to both
>         accumulo
>          >> and geomesa, but I am curious what the problem might be.
>          >>
>          >> Thanks!
>          >> -Simon
>          >>
>          >>
>          >> On Sat, Jun 6, 2015 at 8:01 PM, Josh
>         Elser<josh.elser@gmail.com <ma...@gmail.com>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>>>  wrote:
>         > >>
>         > >> Great! Glad to hear it. Please let us know how it works out!
>         > >>
>         > >>
>         > >> Xu (Simon) Chen wrote:
>         > >>>
>         > >>> Josh,
>         > >>>
>         > >>> You're right again.. Thanks!
>         > >>>
>         > >>> My ansible play actually pushed client.conf to all the server
>              config
>         > >>> directories, but didn't do anything for the clients, and
>         that's my
>         > >>> problem. Now kerberos is working great for me.
>         > >>>
>         > >>> Thanks again!
>         > >>> -Simon
>         > >>>
>         > >>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
>              Elser<josh.elser@gmail.com <ma...@gmail.com>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>>>
>
>          >>>> wrote:
>          >>>>>
>          >>>>> Simon,
>          >>>>>
>          >>>>> Did you create a client configuration file
>         (~/.accumulo/config or
>          >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
>              Accumulo clients
>          >>>>> to
>          >>>>> actually use SASL when you're trying to use Kerberos
>              authentication.
>          >>>>> Your
>          >>>>> server is expecting that, but I would venture a guess that
>              your client
>          >>>>> isn't.
>          >>>>>
>          >>>>> See
>          >>>>>
>          >>>>>
>         http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>          >>>>>
>          >>>>>
>          >>>>> Xu (Simon) Chen wrote:
>          >>>>>>
>          >>>>>> Josh,
>          >>>>>>
>          >>>>>> Thanks. It makes sense...
>          >>>>>>
>          >>>>>> I used a KerberosToken, but my program got stuck when
>              running the
>          >>>>>> following:
>          >>>>>> new ZooKeeperInstance(instance,
>         zookeepers).getConnector(user,
>          >>>>>> krbToken)
>          >>>>>>
>          >>>>>> It looks like my client is stuck here:
>          >>>>>>
>          >>>>>>
>          >>>>>>
>          >>>>>>
>         https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>          >>>>>> failing in the receive part of
>          >>>>>>
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>          >>>>>>
>          >>>>>> On my tservers, I see the following:
>          >>>>>>
>          >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer]
>         ERROR: Error
>          >>>>>> occurred during processing of message.
>          >>>>>> java.lang.RuntimeException:
>          >>>>>> org.apache.thrift.transport.TTransportException:
>          >>>>>> java.net <http://java.net>.SocketTimeoutException: Read
>         timed out
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>          >>>>>>            at
>         java.security.AccessController.doPrivileged(Native
>          >>>>>> Method)
>          >>>>>>            at
>         javax.security.auth.Subject.doAs(Subject.java:356)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>          >>>>>>            at java.lang.Thread.run(Thread.java:745)
>          >>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>          >>>>>> java.net <http://java.net>.SocketTimeoutException: Read
>         timed out
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>          >>>>>>            at
>          >>>>>>
>
>         org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>          >>>>>>            ... 11 more
>          >>>>>> Caused by: java.net
>         <http://java.net>.SocketTimeoutException: Read timed out
>          >>>>>>            at java.net.SocketInputStream.socketRead0(Native
>              Method)
>          >>>>>>            at
>          >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
>          >>>>>>            at
>          >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
>          >>>>>>            at
>          >>>>>>
>         java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>          >>>>>>            at
>          >>>>>>
>         java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>          >>>>>>            at
>          >>>>>>
>          >>>>>>
>          >>>>>>
>
>         org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>          >>>>>>            ... 17 more
>          >>>>>>
>          >>>>>> Any ideas why?
>          >>>>>>
>          >>>>>> Thanks.
>          >>>>>> -Simon
>          >>>>>>
>          >>>>>>
>          >>>>>>
>          >>>>>>
>          >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
>              Elser<josh.elser@gmail.com <ma...@gmail.com>
>         <mailto:josh.elser@gmail.com <ma...@gmail.com>>>
>
>          >>>>>> wrote:
>          >>>>>>>
>          >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>          >>>>>>>
>          >>>>>>> <snip>
>          >>>>>>> Obtain a delegation token by calling {@link
>          >>>>>>>
>          >>>>>>>
>          >>>>>>>
>          >>>>>>>
>
>         SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>          >>>>>>> </snip>
>          >>>>>>>
>          >>>>>>> You cannot create a usable DelegationToken as the client
>              itself.
>          >>>>>>>
>          >>>>>>> Anyways, DelegationTokens are only relevant in cases where
>              the client
>          >>>>>>> Kerberos credentials are unavailable. The most common case
>              is running
>          >>>>>>> MapReduce jobs. If you are just interacting with Accumulo
>              through the
>          >>>>>>> Java
>          >>>>>>> API, the KerberosToken is all you need to use.
>          >>>>>>>
>          >>>>>>> The user-manual likely just needs to be updated. I
>         believe the
>          >>>>>>> DelegationTokenConfig was added after I wrote the initial
>          >>>>>>> documentation.
>          >>>>>>>
>          >>>>>>>
>          >>>>>>> Xu (Simon) Chen wrote:
>          >>>>>>>>
>          >>>>>>>> Hi folks,
>          >>>>>>>>
>          >>>>>>>> The latest kerberos doc seems to indicate that
>              getDelegationToken
>          >>>>>>>> can
>          >>>>>>>> be
>          >>>>>>>> called without any parameters:
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>         https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>          >>>>>>>>
>          >>>>>>>> Yet the source code indicates a DelegationTokenConfig
>              object must be
>          >>>>>>>> passed in:
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>         https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>          >>>>>>>>
>          >>>>>>>> Any ideas on how I should construct the
>         DelegationTokenConfig
>          >>>>>>>> object?
>          >>>>>>>>
>          >>>>>>>> For context, I've been trying to get geomesa to work on my
>              accumulo
>          >>>>>>>> 1.7
>          >>>>>>>> with kerberos turned on. Right now, the code is somewhat
>              tied to
>          >>>>>>>> password auth:
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>          >>>>>>>>
>         https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>          >>>>>>>> My thought is that I should get a KerberosToken first, and
>              then try
>          >>>>>>>> generate a DelegationToken, which is passed back for later
>          >>>>>>>> interactions
>          >>>>>>>> between geomesa and accumulo.
>          >>>>>>>>
>          >>>>>>>> Thanks.
>          >>>>>>>> -Simon
>
>
>

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
Sorry, I forgot to respond to this. I haven't tried Spark with this, and 
given that this code in Accumulo is less than a month old, I doubt 
anyone else has yet :)

If hadoopToken was null, that implies it was never added to the Job 
instance during setConnectorInfo(...) (this error condition would be 
good to catch and report with a clearer message, btw). If you have INFO 
logging set up for this, you should be able to see that the 
KerberosToken is received and a DelegationToken is fetched in 
setConnectorInfo(..).

I have no clue if there's another way that Spark adds these credentials 
to the Hadoop Job.
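For the plain MapReduce path, the intended flow is roughly the sketch below: a KerberosToken authenticates the client, a DelegationToken is fetched from SecurityOperations, and setConnectorInfo(...) serializes it into the Job's credentials. This is only a sketch against the 1.7 client API; the instance, zookeepers, and principal values are placeholders:

```java
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;
import org.apache.hadoop.mapreduce.Job;

public class DelegationTokenSetup {
  public static Job configureJob(String instance, String zookeepers,
      String principal) throws Exception {
    // Authenticate with the current (kinit'd) Kerberos credentials.
    KerberosToken krbToken = new KerberosToken();
    Connector conn = new ZooKeeperInstance(instance, zookeepers)
        .getConnector(principal, krbToken);

    // Ask Accumulo for a delegation token; an empty DelegationTokenConfig
    // accepts the server-side default lifetime.
    AuthenticationToken delToken =
        conn.securityOperations().getDelegationToken(new DelegationTokenConfig());

    // setConnectorInfo() places the token in the Job's credentials -- the
    // part that is lost when Spark is handed only a Configuration.
    Job job = Job.getInstance();
    AccumuloInputFormat.setConnectorInfo(job, principal, delToken);
    return job;
  }
}
```

The Spark NPE above is consistent with that last step never happening: NewAPIHadoopRDD takes only a Configuration, so the credentials attached to the Job object don't travel with it.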

Xu (Simon) Chen wrote:
> I've been trying to find a way around the problem I mentioned.
>
> Is this documentation out-dated?
> https://github.com/apache/accumulo/blob/branch-1.7.0/docs/src/main/asciidoc/chapters/analytics.txt#L72
>
> I don't think setInputInfo is available in 1.7 anymore.
>
>
> On Sun, Jun 7, 2015 at 6:20 PM, Xu (Simon) Chen <xchenum@gmail.com
> <ma...@gmail.com>> wrote:
>
>     I ran into another problem when trying geomesa+spark:
>     http://www.geomesa.org/spark/
>
>     For some reason, running the geomesa+spark example resulted in the
>     following error:
>     scala> queryRdd.count
>     java.lang.NullPointerException
>              at
>     org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.unwrapAuthenticationToken(ConfiguratorBase.java:493)
>              at
>     org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:390)
>              at
>     org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
>              at
>     org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
>              at
>     org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>              at
>     org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>              at scala.Option.getOrElse(Option.scala:120)
>              at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
>              at
>     org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
>              at
>     org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>              at
>     org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>              at scala.Option.getOrElse(Option.scala:120)
>              at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
>              at
>     org.apache.spark.SparkContext.runJob(SparkContext.scala:1512)
>              at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
>     …
>
>     hadoopToken is probably null here:
>     https://github.com/apache/accumulo/blob/branch-1.7.0/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java#L493
>
>     I suspect it's related to geomesa's way of creating RDD. The following
>     doesn't seem to be sufficient:
>     https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-compute/src/main/scala/org/locationtech/geomesa/compute/spark/GeoMesaSpark.scala#L69
>
>     Because, accumulo is adding hadoop token here in hadoop.mapreduce.Job,
>     in addition to the Configuration:
>     https://github.com/apache/accumulo/blob/branch-1.7.0/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java#L130
>
>     Then in spark, NewAPIHadoopRDD of Spark is only taking in a
>     hadoop.conf.Configuration object, rather than a job:
>     https://github.com/apache/spark/blob/v1.3.1/core/src/main/scala/org/apache/spark/SparkContext.scala#L870
>
>
>     This seems to be a more general issue between accumulo and spark. So,
>     has anyone tried using spark to access data from a kerberized accumulo
>     cluster?
>
>     Thanks.
>     -Simon
>
>     Thanks.
>     -Simon
>
>
>
>     On Sun, Jun 7, 2015 at 3:54 PM, James Hughes <jnh5y@virginia.edu
>     <ma...@virginia.edu>> wrote:
>      > Josh,
>      >
>      > Thanks.  That's more or less what I expected.
>      >
>      > As we work to transition from Mock to MiniAccumulo, I'd want to
>     change from
>      > spinning up lots of MockInstances to one MiniAccumulo.  To
>     understand that
>      > pattern, do I basically just need to read through test sub-module
>     and the
>      > test/pom.xml?  Are there any other resources I should be checking
>     out?
>      >
>      > Cheers,
>      >
>      > Jim
>      >
>      > On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <josh.elser@gmail.com
>     <ma...@gmail.com>> wrote:
>      >>
>      >> MiniAccumulo, yes. MockAccumulo, no. In general, we've near
>     completely
>      >> moved away from MockAccumulo. I wouldn't be surprised if it gets
>     deprecated
>      >> and removed soon.
>      >>
>      >>
>      >>
>     https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>      >>
>      >> Apache Directory provides a MiniKdc that can be used easily w/
>      >> MiniAccumulo. Many of the integration tests have already been
>     altered to
>      >> support running w/ or w/o kerberos.
>      >>
>      >> James Hughes wrote:
>      >>>
>      >>> Hi all,
>      >>>
>      >>> For GeoMesa, stats writing is quite secondary and optional, so
>     we can
>      >>> sort that out as a follow-on to seeing GeoMesa work with
>     Accumulo 1.7.
>      >>>
>      >>> I haven't had a chance to read in details yet, so forgive me if
>     this is
>      >>> covered in the docs.  Does either Mock or MiniAccumulo provide
>     enough
>      >>> hooks to test out Kerberos integration effectively?  I suppose I'm
>      >>> really asking what kind of testing environment a project like
>     GeoMesa
>      >>> would need to use to test out Accumulo 1.7.
>      >>>
>      >>> Even though MockAccumulo has a number of limitations, it is rather
>      >>> effective in unit tests which can be part of a quick  build.
>      >>>
>      >>> Thanks,
>      >>>
>      >>> Jim
>      >>>
>      >>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen
>     <xchenum@gmail.com <ma...@gmail.com>
>      >>> <mailto:xchenum@gmail.com <ma...@gmail.com>>> wrote:
>      >>>
>      >>>     Nope, I am running the example as what the readme file
>     suggested:
>      >>>
>      >>>     java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>      >>>     org.geomesa.QuickStart -instanceId somecloud -zookeepers
>      >>> "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>      >>>     -tableName sometable
>      >>>
>      >>>     I'll raise this question with the geomesa folks, but you're
>     right
>      >>> that
>      >>>     I can ignore it for now...
>      >>>
>      >>>     Thanks!
>      >>>     -Simon
>      >>>
>      >>>
>      >>>     On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser
>     <josh.elser@gmail.com <ma...@gmail.com>
>      >>> <mailto:josh.elser@gmail.com <ma...@gmail.com>>> wrote:
>      >>> > Are you running it via `mvn exec:java` by chance or netbeans?
>      >>> >
>      >>> >
>      >>>
>      >>>
>     http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>      >>> >
>      >>> > If that's just a background thread writing in Stats, it might
>      >>>     just be a
>      >>> > factor of how you're invoking the program and you can ignore it.
>      >>>     I don't
>      >>> > know enough about the inner-workings of GeoMesa to say one way or
>      >>>     the other.
>      >>> >
>      >>> >
>      >>> > Xu (Simon) Chen wrote:
>      >>> >>
>      >>> >> Josh,
>      >>> >>
>      >>> >> Everything works well, except for one thing :-)
>      >>> >>
>      >>> >> I am running geomesa-quickstart program that ingest some data
>      >>>     and then
>      >>> >> perform a simple query:
>      >>> >> https://github.com/geomesa/geomesa-quickstart
>      >>> >>
>      >>> >> For some reason, the following error is emitted consistently at
>      >>> the
>      >>> >> end of the execution, after outputting the correct result:
>      >>> >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will
>      >>>     retry
>      >>> >> java.lang.InterruptedException
>      >>> >>          at java.lang.Object.wait(Native Method)
>      >>> >>          at java.lang.Object.wait(Object.java:503)
>      >>> >>          at
>      >>> >>
>      >>> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>      >>> >>          at
>      >>>     org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>      >>> >>          at
>      >>> >>
>      >>>
>     org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>      >>> >>          at
>      >>> >>
>      >>>
>     org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>      >>> >>          at
>      >>> >>
>      >>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>      >>> >>          at
>      >>> >>
>      >>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>      >>> >>          at
>      >>>     scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>      >>> >>          at
>      >>> >> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>      >>> >>          at
>      >>> >>
>      >>>
>      >>>
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>      >>> >>          at java.lang.Thread.run(Thread.java:745)
>      >>> >>
>      >>> >> This is more annoying than a real problem. I am new to both
>      >>> accumulo
>      >>> >> and geomesa, but I am curious what the problem might be.
>      >>> >>
>      >>> >> Thanks!
>      >>> >> -Simon
>      >>> >>
>      >>> >>
>      >>> >> On Sat, Jun 6, 2015 at 8:01 PM, Josh
>     Elser<josh.elser@gmail.com <ma...@gmail.com>
>      >>> <mailto:josh.elser@gmail.com <ma...@gmail.com>>>
>     wrote:
>      >>> >>>
>      >>> >>> Great! Glad to hear it. Please let us know how it works out!
>      >>> >>>
>      >>> >>>
>      >>> >>> Xu (Simon) Chen wrote:
>      >>> >>>>
>      >>> >>>> Josh,
>      >>> >>>>
>      >>> >>>> You're right again.. Thanks!
>      >>> >>>>
>      >>> >>>> My ansible play actually pushed client.conf to all the server
>      >>>     config
>      >>> >>>> directories, but didn't do anything for the clients, and
>     that's
>      >>> my
>      >>> >>>> problem. Now kerberos is working great for me.
>      >>> >>>>
>      >>> >>>> Thanks again!
>      >>> >>>> -Simon
>      >>> >>>>
>      >>> >>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
>      >>>     Elser<josh.elser@gmail.com <ma...@gmail.com>
>     <mailto:josh.elser@gmail.com <ma...@gmail.com>>>
>      >>>
>      >>> >>>> wrote:
>      >>> >>>>>
>      >>> >>>>> Simon,
>      >>> >>>>>
>      >>> >>>>> Did you create a client configuration file
>     (~/.accumulo/config
>      >>> or
>      >>> >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
>      >>>     Accumulo clients
>      >>> >>>>> to
>      >>> >>>>> actually use SASL when you're trying to use Kerberos
>      >>>     authentication.
>      >>> >>>>> Your
>      >>> >>>>> server is expecting that, but I would venture a guess that
>      >>>     your client
>      >>> >>>>> isn't.
>      >>> >>>>>
>      >>> >>>>> See
>      >>> >>>>>
>      >>> >>>>>
>      >>>
>      >>>
>     http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>      >>> >>>>>
>      >>> >>>>>
>      >>> >>>>> Xu (Simon) Chen wrote:
>      >>> >>>>>>
>      >>> >>>>>> Josh,
>      >>> >>>>>>
>      >>> >>>>>> Thanks. It makes sense...
>      >>> >>>>>>
>      >>> >>>>>> I used a KerberosToken, but my program got stuck when
>      >>>     running the
>      >>> >>>>>> following:
>      >>> >>>>>> new ZooKeeperInstance(instance,
>      >>> zookeepers).getConnector(user,
>      >>> >>>>>> krbToken)
>      >>> >>>>>>
>      >>> >>>>>> It looks like my client is stuck here:
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>      >>> >>>>>> failing in the receive part of
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>      >>> >>>>>>
>      >>> >>>>>> On my tservers, I see the following:
>      >>> >>>>>>
>      >>> >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR:
>      >>> Error
>      >>> >>>>>> occurred during processing of message.
>      >>> >>>>>> java.lang.RuntimeException:
>      >>> >>>>>> org.apache.thrift.transport.TTransportException:
>      >>> >>>>>> java.net.SocketTimeoutException: Read timed out
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>      >>> >>>>>>            at
>      >>> java.security.AccessController.doPrivileged(Native
>      >>> >>>>>> Method)
>      >>> >>>>>>            at
>      >>> javax.security.auth.Subject.doAs(Subject.java:356)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>      >>> >>>>>>            at java.lang.Thread.run(Thread.java:745)
>      >>> >>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>      >>> >>>>>> java.net.SocketTimeoutException: Read timed out
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>>
>       org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>      >>> >>>>>>            ... 11 more
>      >>> >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>      >>> >>>>>>            at java.net.SocketInputStream.socketRead0(Native
>      >>>     Method)
>      >>> >>>>>>            at
>      >>> >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
>      >>> >>>>>>            at
>      >>> >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>      >>> >>>>>>            at
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>>
>      >>>
>     org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>      >>> >>>>>>            ... 17 more
>      >>> >>>>>>
>      >>> >>>>>> Any ideas why?
>      >>> >>>>>>
>      >>> >>>>>> Thanks.
>      >>> >>>>>> -Simon
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>>
>      >>> >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
>      >>>     Elser<josh.elser@gmail.com <ma...@gmail.com>
>     <mailto:josh.elser@gmail.com <ma...@gmail.com>>>
>      >>>
>      >>> >>>>>> wrote:
>      >>> >>>>>>>
>      >>> >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>      >>> >>>>>>>
>      >>> >>>>>>> <snip>
>      >>> >>>>>>> Obtain a delegation token by calling {@link
>      >>> >>>>>>>
>      >>> >>>>>>>
>      >>> >>>>>>>
>      >>> >>>>>>>
>      >>>
>      >>>
>     SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>      >>> >>>>>>> </snip>
>      >>> >>>>>>>
>      >>> >>>>>>> You cannot create a usable DelegationToken as the client
>      >>>     itself.
>      >>> >>>>>>>
>      >>> >>>>>>> Anyways, DelegationTokens are only relevant in cases where
>      >>>     the client
>      >>> >>>>>>> Kerberos credentials are unavailable. The most common case
>      >>>     is running
>      >>> >>>>>>> MapReduce jobs. If you are just interacting with Accumulo
>      >>>     through the
>      >>> >>>>>>> Java
>      >>> >>>>>>> API, the KerberosToken is all you need to use.
>      >>> >>>>>>>
>      >>> >>>>>>> The user-manual likely just needs to be updated. I believe
>      >>> the
>      >>> >>>>>>> DelegationTokenConfig was added after I wrote the initial
>      >>> >>>>>>> documentation.
>      >>> >>>>>>>
>      >>> >>>>>>>
>      >>> >>>>>>> Xu (Simon) Chen wrote:
>      >>> >>>>>>>>
>      >>> >>>>>>>> Hi folks,
>      >>> >>>>>>>>
>      >>> >>>>>>>> The latest kerberos doc seems to indicate that
>      >>>     getDelegationToken
>      >>> >>>>>>>> can
>      >>> >>>>>>>> be
>      >>> >>>>>>>> called without any parameters:
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>>
>      >>>
>     https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>      >>> >>>>>>>>
>      >>> >>>>>>>> Yet the source code indicates a DelegationTokenConfig
>      >>>     object must be
>      >>> >>>>>>>> passed in:
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>>
>      >>>
>     https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>      >>> >>>>>>>>
>      >>> >>>>>>>> Any ideas on how I should construct the
>      >>> DelegationTokenConfig
>      >>> >>>>>>>> object?
>      >>> >>>>>>>>
>      >>> >>>>>>>> For context, I've been trying to get geomesa to work on my
>      >>>     accumulo
>      >>> >>>>>>>> 1.7
>      >>> >>>>>>>> with kerberos turned on. Right now, the code is somewhat
>      >>>     tied to
>      >>> >>>>>>>> password auth:
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>> >>>>>>>>
>      >>>
>      >>>
>     https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>      >>> >>>>>>>> My thought is that I should get a KerberosToken first, and
>      >>>     then try
>      >>> >>>>>>>> generate a DelegationToken, which is passed back for later
>      >>> >>>>>>>> interactions
>      >>> >>>>>>>> between geomesa and accumulo.
>      >>> >>>>>>>>
>      >>> >>>>>>>> Thanks.
>      >>> >>>>>>>> -Simon
>      >>>
>      >>>
>      >
>
>

Re: kerberos auth, getDelegationToken

Posted by "Xu (Simon) Chen" <xc...@gmail.com>.
I've been trying to find a way around the problem I mentioned.

Is this documentation outdated?
https://github.com/apache/accumulo/blob/branch-1.7.0/docs/src/main/asciidoc/chapters/analytics.txt#L72

I don't think setInputInfo is available in 1.7 anymore.
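If I'm reading the 1.7 API right, the old single setInputInfo call was split into several static configurator methods on AccumuloInputFormat. A rough sketch of the 1.7 equivalent (not compiled; instance/zookeeper/table names taken from the quickstart command earlier in this thread, and KerberosToken() assumes a current Kerberos login):

```java
import java.util.Collections;

import org.apache.accumulo.core.client.ClientConfiguration;
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.mapreduce.Job;

public class InputFormatSetup {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance();

    // Replaces the old setInputInfo(...): connection info, instance info,
    // table name, auths, and ranges are now configured separately.
    AccumuloInputFormat.setConnectorInfo(job, "someuser@EXAMPLE.COM",
        new KerberosToken());
    AccumuloInputFormat.setZooKeeperInstance(job,
        ClientConfiguration.loadDefault()
            .withInstance("somecloud")
            .withZkHosts("zoo1:2181,zoo2:2181,zoo3:2181"));
    AccumuloInputFormat.setInputTableName(job, "sometable");
    AccumuloInputFormat.setScanAuthorizations(job, Authorizations.EMPTY);
    AccumuloInputFormat.setRanges(job, Collections.singleton(new Range()));
  }
}
```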


On Sun, Jun 7, 2015 at 6:20 PM, Xu (Simon) Chen <xc...@gmail.com> wrote:

> I ran into another problem when trying geomesa+spark:
> http://www.geomesa.org/spark/
>
> For some reason, running the geomesa+spark example resulted in the
> following error:
> scala> queryRdd.count
> java.lang.NullPointerException
>         at
> org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.unwrapAuthenticationToken(ConfiguratorBase.java:493)
>         at
> org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:390)
>         at
> org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
>         at
> org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
>         at
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1512)
>         at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
> …
>
> hadoopToken is probably null here:
>
> https://github.com/apache/accumulo/blob/branch-1.7.0/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java#L493
>
> I suspect it's related to geomesa's way of creating RDD. The following
> doesn't seem to be sufficient:
>
> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-compute/src/main/scala/org/locationtech/geomesa/compute/spark/GeoMesaSpark.scala#L69
>
> Because, accumulo is adding hadoop token here in hadoop.mapreduce.Job,
> in addition to the Configuration:
>
> https://github.com/apache/accumulo/blob/branch-1.7.0/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java#L130
>
> Then in spark, NewAPIHadoopRDD of Spark is only taking in a
> hadoop.conf.Configuration object, rather than a job:
>
> https://github.com/apache/spark/blob/v1.3.1/core/src/main/scala/org/apache/spark/SparkContext.scala#L870
>
>
> This seems to be a more general issue between accumulo and spark. So,
> has anyone tried using spark to access data from a kerberized accumulo
> cluster?
>
> Thanks.
> -Simon
>
>
>
> On Sun, Jun 7, 2015 at 3:54 PM, James Hughes <jn...@virginia.edu> wrote:
> > Josh,
> >
> > Thanks.  That's more or less what I expected.
> >
> > As we work to transition from Mock to MiniAccumulo, I'd want to change
> from
> > spinning up lots of MockInstances to one MiniAccumulo.  To understand
> that
> > pattern, do I basically just need to read through test sub-module and the
> > test/pom.xml?  Are there any other resources I should be checking out?
> >
> > Cheers,
> >
> > Jim
> >
> > On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <jo...@gmail.com> wrote:
> >>
> >> MiniAccumulo, yes. MockAccumulo, no. In general, we've near completely
> >> moved away from MockAccumulo. I wouldn't be surprised if it gets
> deprecated
> >> and removed soon.
> >>
> >>
> >>
> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
> >>
> >> Apache Directory provides a MiniKdc that can be used easily w/
> >> MiniAccumulo. Many of the integration tests have already been altered to
> >> support running w/ or w/o kerberos.
> >>
> >> James Hughes wrote:
> >>>
> >>> Hi all,
> >>>
> >>> For GeoMesa, stats writing is quite secondary and optional, so we can
> >>> sort that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.
> >>>
> >>> I haven't had a chance to read in details yet, so forgive me if this is
> >>> covered in the docs.  Does either Mock or MiniAccumulo provide enough
> >>> hooks to test out Kerberos integration effectively?  I suppose I'm
> >>> really asking what kind of testing environment a project like GeoMesa
> >>> would need to use to test out Accumulo 1.7.
> >>>
> >>> Even though MockAccumulo has a number of limitations, it is rather
> >>> effective in unit tests which can be part of a quick  build.
> >>>
> >>> Thanks,
> >>>
> >>> Jim
> >>>
> >>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
> >>> <ma...@gmail.com>> wrote:
> >>>
> >>>     Nope, I am running the example as what the readme file suggested:
> >>>
> >>>     java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
> >>>     org.geomesa.QuickStart -instanceId somecloud -zookeepers
> >>>     "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
> >>>     -tableName sometable
> >>>
> >>>     I'll raise this question with the geomesa folks, but you're right
> >>> that
> >>>     I can ignore it for now...
> >>>
> >>>     Thanks!
> >>>     -Simon
> >>>
> >>>
> >>>     On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
> >>>     <ma...@gmail.com>> wrote:
> >>>      > Are you running it via `mvn exec:java` by chance or netbeans?
> >>>      >
> >>>      >
> >>>
> >>>
> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
> >>>      >
> >>>      > If that's just a background thread writing in Stats, it might
> >>>     just be a
> >>>      > factor of how you're invoking the program and you can ignore it.
> >>>     I don't
> >>>      > know enough about the inner-workings of GeoMesa to say one way
> or
> >>>     the other.
> >>>      >
> >>>      >
> >>>      > Xu (Simon) Chen wrote:
> >>>      >>
> >>>      >> Josh,
> >>>      >>
> >>>      >> Everything works well, except for one thing :-)
> >>>      >>
> >>>      >> I am running geomesa-quickstart program that ingest some data
> >>>     and then
> >>>      >> perform a simple query:
> >>>      >> https://github.com/geomesa/geomesa-quickstart
> >>>      >>
> >>>      >> For some reason, the following error is emitted consistently at
> >>> the
> >>>      >> end of the execution, after outputting the correct result:
> >>>      >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error,
> will
> >>>     retry
> >>>      >> java.lang.InterruptedException
> >>>      >>          at java.lang.Object.wait(Native Method)
> >>>      >>          at java.lang.Object.wait(Object.java:503)
> >>>      >>          at
> >>>      >>
> >>> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
> >>>      >>          at
> >>>     org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
> >>>      >>          at
> >>>      >>
> >>> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
> >>>      >>          at
> >>>      >>
> >>> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
> >>>      >>          at
> >>>      >>
> >>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
> >>>      >>          at
> >>>      >>
> >>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
> >>>      >>          at
> >>>     scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
> >>>      >>          at
> >>>      >>
> >>>
> >>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >>>      >>          at
> >>>      >>
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>>      >>          at
> >>>      >>
> >>>
> >>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>>      >>          at java.lang.Thread.run(Thread.java:745)
> >>>      >>
> >>>      >> This is more annoying than a real problem. I am new to both
> >>> accumulo
> >>>      >> and geomesa, but I am curious what the problem might be.
> >>>      >>
> >>>      >> Thanks!
> >>>      >> -Simon
> >>>      >>
> >>>      >>
> >>>      >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<
> josh.elser@gmail.com
> >>>     <ma...@gmail.com>>  wrote:
> >>>      >>>
> >>>      >>> Great! Glad to hear it. Please let us know how it works out!
> >>>      >>>
> >>>      >>>
> >>>      >>> Xu (Simon) Chen wrote:
> >>>      >>>>
> >>>      >>>> Josh,
> >>>      >>>>
> >>>      >>>> You're right again.. Thanks!
> >>>      >>>>
> >>>      >>>> My ansible play actually pushed client.conf to all the server
> >>>     config
> >>>      >>>> directories, but didn't do anything for the clients, and
> that's
> >>> my
> >>>      >>>> problem. Now kerberos is working great for me.
> >>>      >>>>
> >>>      >>>> Thanks again!
> >>>      >>>> -Simon
> >>>      >>>>
> >>>      >>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
> >>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
> >>>
> >>>      >>>> wrote:
> >>>      >>>>>
> >>>      >>>>> Simon,
> >>>      >>>>>
> >>>      >>>>> Did you create a client configuration file
> (~/.accumulo/config
> >>> or
> >>>      >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
> >>>     Accumulo clients
> >>>      >>>>> to
> >>>      >>>>> actually use SASL when you're trying to use Kerberos
> >>>     authentication.
> >>>      >>>>> Your
> >>>      >>>>> server is expecting that, but I would venture a guess that
> >>>     your client
> >>>      >>>>> isn't.
> >>>      >>>>>
> >>>      >>>>> See
> >>>      >>>>>
> >>>      >>>>>
> >>>
> >>>
> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
> >>>      >>>>>
> >>>      >>>>>
> >>>      >>>>> Xu (Simon) Chen wrote:
> >>>      >>>>>>
> >>>      >>>>>> Josh,
> >>>      >>>>>>
> >>>      >>>>>> Thanks. It makes sense...
> >>>      >>>>>>
> >>>      >>>>>> I used a KerberosToken, but my program got stuck when
> >>>     running the
> >>>      >>>>>> following:
> >>>      >>>>>> new ZooKeeperInstance(instance,
> >>> zookeepers).getConnector(user,
> >>>      >>>>>> krbToken)
> >>>      >>>>>>
> >>>      >>>>>> It looks like my client is stuck here:
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
> >>>      >>>>>> failing in the receive part of
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
> >>>      >>>>>>
> >>>      >>>>>> On my tservers, I see the following:
> >>>      >>>>>>
> >>>      >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR:
> >>> Error
> >>>      >>>>>> occurred during processing of message.
> >>>      >>>>>> java.lang.RuntimeException:
> >>>      >>>>>> org.apache.thrift.transport.TTransportException:
> >>>      >>>>>> java.net.SocketTimeoutException: Read timed out
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
> >>>      >>>>>>            at
> >>> java.security.AccessController.doPrivileged(Native
> >>>      >>>>>> Method)
> >>>      >>>>>>            at
> >>> javax.security.auth.Subject.doAs(Subject.java:356)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
> >>>      >>>>>>            at java.lang.Thread.run(Thread.java:745)
> >>>      >>>>>> Caused by: org.apache.thrift.transport.TTransportException:
> >>>      >>>>>> java.net.SocketTimeoutException: Read timed out
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>     org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> >>>      >>>>>>            ... 11 more
> >>>      >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
> >>>      >>>>>>            at java.net.SocketInputStream.socketRead0(Native
> >>>     Method)
> >>>      >>>>>>            at
> >>>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
> >>>      >>>>>>            at
> >>>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> >>>      >>>>>>            at
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>
> >>>
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> >>>      >>>>>>            ... 17 more
> >>>      >>>>>>
> >>>      >>>>>> Any ideas why?
> >>>      >>>>>>
> >>>      >>>>>> Thanks.
> >>>      >>>>>> -Simon
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>>
> >>>      >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
> >>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
> >>>
> >>>      >>>>>> wrote:
> >>>      >>>>>>>
> >>>      >>>>>>> Make sure you read the JavaDoc on DelegationToken:
> >>>      >>>>>>>
> >>>      >>>>>>> <snip>
> >>>      >>>>>>> Obtain a delegation token by calling {@link
> >>>      >>>>>>>
> >>>      >>>>>>>
> >>>      >>>>>>>
> >>>      >>>>>>>
> >>>
> >>>
> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
> >>>      >>>>>>> </snip>
> >>>      >>>>>>>
> >>>      >>>>>>> You cannot create a usable DelegationToken as the client
> >>>     itself.
> >>>      >>>>>>>
> >>>      >>>>>>> Anyways, DelegationTokens are only relevant in cases where
> >>>     the client
> >>>      >>>>>>> Kerberos credentials are unavailable. The most common case
> >>>     is running
> >>>      >>>>>>> MapReduce jobs. If you are just interacting with Accumulo
> >>>     through the
> >>>      >>>>>>> Java
> >>>      >>>>>>> API, the KerberosToken is all you need to use.
> >>>      >>>>>>>
> >>>      >>>>>>> The user-manual likely just needs to be updated. I believe
> >>> the
> >>>      >>>>>>> DelegationTokenConfig was added after I wrote the initial
> >>>      >>>>>>> documentation.
> >>>      >>>>>>>
> >>>      >>>>>>>
> >>>      >>>>>>> Xu (Simon) Chen wrote:
> >>>      >>>>>>>>
> >>>      >>>>>>>> Hi folks,
> >>>      >>>>>>>>
> >>>      >>>>>>>> The latest kerberos doc seems to indicate that
> >>>     getDelegationToken
> >>>      >>>>>>>> can
> >>>      >>>>>>>> be
> >>>      >>>>>>>> called without any parameters:
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>
> >>>
> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
> >>>      >>>>>>>>
> >>>      >>>>>>>> Yet the source code indicates a DelegationTokenConfig
> >>>     object must be
> >>>      >>>>>>>> passed in:
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>
> >>>
> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
> >>>      >>>>>>>>
> >>>      >>>>>>>> Any ideas on how I should construct the
> >>> DelegationTokenConfig
> >>>      >>>>>>>> object?
> >>>      >>>>>>>>
> >>>      >>>>>>>> For context, I've been trying to get geomesa to work on
> my
> >>>     accumulo
> >>>      >>>>>>>> 1.7
> >>>      >>>>>>>> with kerberos turned on. Right now, the code is somewhat
> >>>     tied to
> >>>      >>>>>>>> password auth:
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>      >>>>>>>>
> >>>
> >>>
> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
> >>>      >>>>>>>> My thought is that I should get a KerberosToken first,
> and
> >>>     then try
> >>>      >>>>>>>> generate a DelegationToken, which is passed back for
> later
> >>>      >>>>>>>> interactions
> >>>      >>>>>>>> between geomesa and accumulo.
> >>>      >>>>>>>>
> >>>      >>>>>>>> Thanks.
> >>>      >>>>>>>> -Simon
> >>>
> >>>
> >
>

Re: kerberos auth, getDelegationToken

Posted by "Xu (Simon) Chen" <xc...@gmail.com>.
I ran into another problem when trying geomesa+spark:
http://www.geomesa.org/spark/

For some reason, running the geomesa+spark example resulted in the
following error:
scala> queryRdd.count
java.lang.NullPointerException
        at org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.unwrapAuthenticationToken(ConfiguratorBase.java:493)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:390)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
        at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1512)
        at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
…

hadoopToken is probably null here:
https://github.com/apache/accumulo/blob/branch-1.7.0/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java#L493

I suspect it's related to the way GeoMesa creates the RDD. The
following doesn't seem to be sufficient:
https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-compute/src/main/scala/org/locationtech/geomesa/compute/spark/GeoMesaSpark.scala#L69

That's because Accumulo adds the Hadoop token to the
hadoop.mapreduce.Job itself here, in addition to the Configuration:
https://github.com/apache/accumulo/blob/branch-1.7.0/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java#L130

Spark's newAPIHadoopRDD, however, only takes a
hadoop.conf.Configuration object rather than a Job:
https://github.com/apache/spark/blob/v1.3.1/core/src/main/scala/org/apache/spark/SparkContext.scala#L870
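
If I understand it right, the mismatch can be sketched roughly as
follows. This is a hypothetical illustration, not code from GeoMesa;
the class and method names (AccumuloInputFormat.setConnectorInfo,
KerberosToken, newAPIHadoopRDD) are taken from the linked 1.7/1.3
sources but have not been verified against these exact versions:

```scala
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
import org.apache.accumulo.core.client.security.tokens.KerberosToken
import org.apache.accumulo.core.data.{Key, Value}
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.SparkContext

// Sketch of the Job-vs-Configuration mismatch (names are assumptions).
def buildRdd(sc: SparkContext, user: String) = {
  val job = Job.getInstance()

  // Given a KerberosToken, Accumulo obtains a DelegationToken and stores
  // it in job.getCredentials() -- i.e. on the Job, not in the Configuration.
  AccumuloInputFormat.setConnectorInfo(job, user, new KerberosToken())

  // Spark only receives the Configuration, so the credentials attached to
  // the Job object appear to be dropped, and unwrapAuthenticationToken
  // later finds a null token.
  sc.newAPIHadoopRDD(job.getConfiguration,
    classOf[AccumuloInputFormat], classOf[Key], classOf[Value])
}
```

If that reading is correct, a fix would somehow need to carry the Job's
credentials over to the executors, not just the Configuration.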


This seems to be a more general issue between Accumulo and Spark. Has
anyone tried using Spark to access data from a Kerberized Accumulo
cluster?

Thanks.
-Simon



On Sun, Jun 7, 2015 at 3:54 PM, James Hughes <jn...@virginia.edu> wrote:
> Josh,
>
> Thanks.  That's more or less what I expected.
>
> As we work to transition from Mock to MiniAccumulo, I'd want to change from
> spinning up lots of MockInstances to one MiniAccumulo.  To understand that
> pattern, do I basically just need to read through test sub-module and the
> test/pom.xml?  Are there any other resources I should be checking out?
>
> Cheers,
>
> Jim
>
> On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <jo...@gmail.com> wrote:
>>
>> MiniAccumulo, yes. MockAccumulo, no. In general, we've near completely
>> moved away from MockAccumulo. I wouldn't be surprised if it gets deprecated
>> and removed soon.
>>
>>
>> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>>
>> Apache Directory provides a MiniKdc that can be used easily w/
>> MiniAccumulo. Many of the integration tests have already been altered to
>> support running w/ or w/o kerberos.
>>
>> James Hughes wrote:
>>>
>>> Hi all,
>>>
>>> For GeoMesa, stats writing is quite secondary and optional, so we can
>>> sort that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.
>>>
>>> I haven't had a chance to read in details yet, so forgive me if this is
>>> covered in the docs.  Does either Mock or MiniAccumulo provide enough
>>> hooks to test out Kerberos integration effectively?  I suppose I'm
>>> really asking what kind of testing environment a project like GeoMesa
>>> would need to use to test out Accumulo 1.7.
>>>
>>> Even though MockAccumulo has a number of limitations, it is rather
>>> effective in unit tests which can be part of a quick  build.
>>>
>>> Thanks,
>>>
>>> Jim
>>>
>>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
>>> <ma...@gmail.com>> wrote:
>>>
>>>     Nope, I am running the example as what the readme file suggested:
>>>
>>>     java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>>>     org.geomesa.QuickStart -instanceId somecloud -zookeepers
>>>     "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>>>     -tableName sometable
>>>
>>>     I'll raise this question with the geomesa folks, but you're right
>>> that
>>>     I can ignore it for now...
>>>
>>>     Thanks!
>>>     -Simon
>>>
>>>
>>>     On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
>>>     <ma...@gmail.com>> wrote:
>>>      > Are you running it via `mvn exec:java` by chance or netbeans?
>>>      >
>>>      >
>>>
>>> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>>>      >
>>>      > If that's just a background thread writing in Stats, it might
>>>     just be a
>>>      > factor of how you're invoking the program and you can ignore it.
>>>     I don't
>>>      > know enough about the inner-workings of GeoMesa to say one way or
>>>     the other.
>>>      >
>>>      >
>>>      > Xu (Simon) Chen wrote:
>>>      >>
>>>      >> Josh,
>>>      >>
>>>      >> Everything works well, except for one thing :-)
>>>      >>
>>>      >> I am running geomesa-quickstart program that ingest some data
>>>     and then
>>>      >> perform a simple query:
>>>      >> https://github.com/geomesa/geomesa-quickstart
>>>      >>
>>>      >> For some reason, the following error is emitted consistently at
>>> the
>>>      >> end of the execution, after outputting the correct result:
>>>      >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will
>>>     retry
>>>      >> java.lang.InterruptedException
>>>      >>          at java.lang.Object.wait(Native Method)
>>>      >>          at java.lang.Object.wait(Object.java:503)
>>>      >>          at
>>>      >>
>>> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>>>      >>          at
>>>     org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>>>      >>          at
>>>      >>
>>> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>>>      >>          at
>>>      >>
>>> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>>>      >>          at
>>>      >>
>>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>>>      >>          at
>>>      >>
>>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>>>      >>          at
>>>      >>
>>>
>>> org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>>>      >>          at
>>>      >>
>>>
>>> org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>>>      >>          at
>>>      >>
>>>
>>> org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>>>      >>          at
>>>      >>
>>>
>>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>>>      >>          at
>>>      >>
>>>
>>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>>>      >>          at
>>>     scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>>>      >>          at
>>>      >>
>>>
>>> org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>>>      >>          at
>>>      >>
>>>
>>> org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>>>      >>          at
>>>      >>
>>>
>>> org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>>>      >>          at
>>>      >>
>>>
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>      >>          at
>>>      >> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>>>      >>          at
>>>      >>
>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>>      >>          at
>>>      >>
>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>>      >>          at
>>>      >>
>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>      >>          at
>>>      >>
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>      >>          at java.lang.Thread.run(Thread.java:745)
>>>      >>
>>>      >> This is more annoying than a real problem. I am new to both
>>> accumulo
>>>      >> and geomesa, but I am curious what the problem might be.
>>>      >>
>>>      >> Thanks!
>>>      >> -Simon
>>>      >>
>>>      >>
>>>      >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
>>>     <ma...@gmail.com>>  wrote:
>>>      >>>
>>>      >>> Great! Glad to hear it. Please let us know how it works out!
>>>      >>>
>>>      >>>
>>>      >>> Xu (Simon) Chen wrote:
>>>      >>>>
>>>      >>>> Josh,
>>>      >>>>
>>>      >>>> You're right again.. Thanks!
>>>      >>>>
>>>      >>>> My ansible play actually pushed client.conf to all the server
>>>     config
>>>      >>>> directories, but didn't do anything for the clients, and that's
>>> my
>>>      >>>> problem. Now kerberos is working great for me.
>>>      >>>>
>>>      >>>> Thanks again!
>>>      >>>> -Simon
>>>      >>>>
>>>      >>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
>>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>>
>>>      >>>> wrote:
>>>      >>>>>
>>>      >>>>> Simon,
>>>      >>>>>
>>>      >>>>> Did you create a client configuration file (~/.accumulo/config
>>> or
>>>      >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
>>>     Accumulo clients
>>>      >>>>> to
>>>      >>>>> actually use SASL when you're trying to use Kerberos
>>>     authentication.
>>>      >>>>> Your
>>>      >>>>> server is expecting that, but I would venture a guess that
>>>     your client
>>>      >>>>> isn't.
>>>      >>>>>
>>>      >>>>> See
>>>      >>>>>
>>>      >>>>>
>>>
>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>>>      >>>>>
>>>      >>>>>
>>>      >>>>> Xu (Simon) Chen wrote:
>>>      >>>>>>
>>>      >>>>>> Josh,
>>>      >>>>>>
>>>      >>>>>> Thanks. It makes sense...
>>>      >>>>>>
>>>      >>>>>> I used a KerberosToken, but my program got stuck when
>>>     running the
>>>      >>>>>> following:
>>>      >>>>>> new ZooKeeperInstance(instance,
>>> zookeepers).getConnector(user,
>>>      >>>>>> krbToken)
>>>      >>>>>>
>>>      >>>>>> It looks like my client is stuck here:
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>>      >>>>>> failing in the receive part of
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>>      >>>>>>
>>>      >>>>>> On my tservers, I see the following:
>>>      >>>>>>
>>>      >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR:
>>> Error
>>>      >>>>>> occurred during processing of message.
>>>      >>>>>> java.lang.RuntimeException:
>>>      >>>>>> org.apache.thrift.transport.TTransportException:
>>>      >>>>>> java.net.SocketTimeoutException: Read timed out
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>>      >>>>>>            at
>>> java.security.AccessController.doPrivileged(Native
>>>      >>>>>> Method)
>>>      >>>>>>            at
>>> javax.security.auth.Subject.doAs(Subject.java:356)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>>      >>>>>>            at java.lang.Thread.run(Thread.java:745)
>>>      >>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>>>      >>>>>> java.net.SocketTimeoutException: Read timed out
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>>      >>>>>>            at
>>>      >>>>>>
>>>     org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>>      >>>>>>            ... 11 more
>>>      >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>      >>>>>>            at java.net.SocketInputStream.socketRead0(Native
>>>     Method)
>>>      >>>>>>            at
>>>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
>>>      >>>>>>            at
>>>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
>>>      >>>>>>            at
>>>      >>>>>>
>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>>      >>>>>>            at
>>>      >>>>>>
>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>>      >>>>>>            at
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>      >>>>>>            ... 17 more
>>>      >>>>>>
>>>      >>>>>> Any ideas why?
>>>      >>>>>>
>>>      >>>>>> Thanks.
>>>      >>>>>> -Simon
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>>
>>>      >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
>>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>>
>>>      >>>>>> wrote:
>>>      >>>>>>>
>>>      >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>>>      >>>>>>>
>>>      >>>>>>> <snip>
>>>      >>>>>>> Obtain a delegation token by calling {@link
>>>      >>>>>>>
>>>      >>>>>>>
>>>      >>>>>>>
>>>      >>>>>>>
>>>
>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>>      >>>>>>> </snip>
>>>      >>>>>>>
>>>      >>>>>>> You cannot create a usable DelegationToken as the client
>>>     itself.
>>>      >>>>>>>
>>>      >>>>>>> Anyways, DelegationTokens are only relevant in cases where
>>>     the client
>>>      >>>>>>> Kerberos credentials are unavailable. The most common case
>>>     is running
>>>      >>>>>>> MapReduce jobs. If you are just interacting with Accumulo
>>>     through the
>>>      >>>>>>> Java
>>>      >>>>>>> API, the KerberosToken is all you need to use.
>>>      >>>>>>>
>>>      >>>>>>> The user-manual likely just needs to be updated. I believe
>>> the
>>>      >>>>>>> DelegationTokenConfig was added after I wrote the initial
>>>      >>>>>>> documentation.
>>>      >>>>>>>
>>>      >>>>>>>
>>>      >>>>>>> Xu (Simon) Chen wrote:
>>>      >>>>>>>>
>>>      >>>>>>>> Hi folks,
>>>      >>>>>>>>
>>>      >>>>>>>> The latest kerberos doc seems to indicate that
>>>     getDelegationToken
>>>      >>>>>>>> can
>>>      >>>>>>>> be
>>>      >>>>>>>> called without any parameters:
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>
>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>      >>>>>>>>
>>>      >>>>>>>> Yet the source code indicates a DelegationTokenConfig
>>>     object must be
>>>      >>>>>>>> passed in:
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>
>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>      >>>>>>>>
>>>      >>>>>>>> Any ideas on how I should construct the
>>> DelegationTokenConfig
>>>      >>>>>>>> object?
>>>      >>>>>>>>
>>>      >>>>>>>> For context, I've been trying to get geomesa to work on my
>>>     accumulo
>>>      >>>>>>>> 1.7
>>>      >>>>>>>> with kerberos turned on. Right now, the code is somewhat
>>>     tied to
>>>      >>>>>>>> password auth:
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>      >>>>>>>>
>>>
>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>      >>>>>>>> My thought is that I should get a KerberosToken first, and
>>>     then try
>>>      >>>>>>>> generate a DelegationToken, which is passed back for later
>>>      >>>>>>>> interactions
>>>      >>>>>>>> between geomesa and accumulo.
>>>      >>>>>>>>
>>>      >>>>>>>> Thanks.
>>>      >>>>>>>> -Simon
>>>
>>>
>

Re: kerberos auth, getDelegationToken

Posted by James Hughes <jn...@virginia.edu>.
Josh,

Thanks.  That's more or less what I expected.

As we work to transition from Mock to MiniAccumulo, I'd want to change from
spinning up lots of MockInstances to one MiniAccumulo.  To understand that
pattern, do I basically just need to read through the test sub-module and the
test/pom.xml?  Are there any other resources I should be checking out?

Cheers,

Jim

On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <jo...@gmail.com> wrote:

> MiniAccumulo, yes. MockAccumulo, no. In general, we've near completely
> moved away from MockAccumulo. I wouldn't be surprised if it gets deprecated
> and removed soon.
>
>
> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>
> Apache Directory provides a MiniKdc that can be used easily w/
> MiniAccumulo. Many of the integration tests have already been altered to
> support running w/ or w/o kerberos.
>
> James Hughes wrote:
>
>> Hi all,
>>
>> For GeoMesa, stats writing is quite secondary and optional, so we can
>> sort that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.
>>
>> I haven't had a chance to read in details yet, so forgive me if this is
>> covered in the docs.  Does either Mock or MiniAccumulo provide enough
>> hooks to test out Kerberos integration effectively?  I suppose I'm
>> really asking what kind of testing environment a project like GeoMesa
>> would need to use to test out Accumulo 1.7.
>>
>> Even though MockAccumulo has a number of limitations, it is rather
>> effective in unit tests which can be part of a quick  build.
>>
>> Thanks,
>>
>> Jim
>>
>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
>> <ma...@gmail.com>> wrote:
>>
>>     Nope, I am running the example as what the readme file suggested:
>>
>>     java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>>     org.geomesa.QuickStart -instanceId somecloud -zookeepers
>>     "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>>     -tableName sometable
>>
>>     I'll raise this question with the geomesa folks, but you're right that
>>     I can ignore it for now...
>>
>>     Thanks!
>>     -Simon
>>
>>
>>     On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
>>     <ma...@gmail.com>> wrote:
>>      > Are you running it via `mvn exec:java` by chance or netbeans?
>>      >
>>      >
>>
>> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>>      >
>>      > If that's just a background thread writing in Stats, it might
>>     just be a
>>      > factor of how you're invoking the program and you can ignore it.
>>     I don't
>>      > know enough about the inner-workings of GeoMesa to say one way or
>>     the other.
>>      >
>>      >
>>      > Xu (Simon) Chen wrote:
>>      >>
>>      >> Josh,
>>      >>
>>      >> Everything works well, except for one thing :-)
>>      >>
>>      >> I am running geomesa-quickstart program that ingest some data
>>     and then
>>      >> perform a simple query:
>>      >> https://github.com/geomesa/geomesa-quickstart
>>      >>
>>      >> For some reason, the following error is emitted consistently at
>> the
>>      >> end of the execution, after outputting the correct result:
>>      >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will
>>     retry
>>      >> java.lang.InterruptedException
>>      >>          at java.lang.Object.wait(Native Method)
>>      >>          at java.lang.Object.wait(Object.java:503)
>>      >>          at
>>      >>
>> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>>      >>          at
>>     org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>>      >>          at
>>      >>
>> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>>      >>          at
>>      >>
>> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>>      >>          at
>>      >> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>>      >>          at
>>      >> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>>      >>          at
>>      >>
>>
>> org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>>      >>          at
>>      >>
>>
>> org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>>      >>          at
>>      >>
>>
>> org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>>      >>          at
>>      >>
>>
>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>>      >>          at
>>      >>
>>
>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>>      >>          at
>>     scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>>      >>          at
>>      >>
>>
>> org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>>      >>          at
>>      >>
>>
>> org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>>      >>          at
>>      >>
>>
>> org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>>      >>          at
>>      >>
>>
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>      >>          at
>>      >> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>>      >>          at
>>      >>
>>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>      >>          at
>>      >>
>>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>      >>          at
>>      >>
>>
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>      >>          at
>>      >>
>>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>      >>          at java.lang.Thread.run(Thread.java:745)
>>      >>
>>      >> This is more annoying than a real problem. I am new to both
>> accumulo
>>      >> and geomesa, but I am curious what the problem might be.
>>      >>
>>      >> Thanks!
>>      >> -Simon
>>      >>
>>      >>
>>      >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
>>     <ma...@gmail.com>>  wrote:
>>      >>>
>>      >>> Great! Glad to hear it. Please let us know how it works out!
>>      >>>
>>      >>>
>>      >>> Xu (Simon) Chen wrote:
>>      >>>>
>>      >>>> Josh,
>>      >>>>
>>      >>>> You're right again.. Thanks!
>>      >>>>
>>      >>>> My ansible play actually pushed client.conf to all the server
>>     config
>>      >>>> directories, but didn't do anything for the clients, and that's
>> my
>>      >>>> problem. Now kerberos is working great for me.
>>      >>>>
>>      >>>> Thanks again!
>>      >>>> -Simon
>>      >>>>
>>      >>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>
>>      >>>> wrote:
>>      >>>>>
>>      >>>>> Simon,
>>      >>>>>
>>      >>>>> Did you create a client configuration file (~/.accumulo/config
>> or
>>      >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
>>     Accumulo clients
>>      >>>>> to
>>      >>>>> actually use SASL when you're trying to use Kerberos
>>     authentication.
>>      >>>>> Your
>>      >>>>> server is expecting that, but I would venture a guess that
>>     your client
>>      >>>>> isn't.
>>      >>>>>
>>      >>>>> See
>>      >>>>>
>>      >>>>>
>>
>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>>      >>>>>
>>      >>>>>
>>      >>>>> Xu (Simon) Chen wrote:
>>      >>>>>>
>>      >>>>>> Josh,
>>      >>>>>>
>>      >>>>>> Thanks. It makes sense...
>>      >>>>>>
>>      >>>>>> I used a KerberosToken, but my program got stuck when
>>     running the
>>      >>>>>> following:
>>      >>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user,
>>      >>>>>> krbToken)
>>      >>>>>>
>>      >>>>>> It looks like my client is stuck here:
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>      >>>>>> failing in the receive part of
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>      >>>>>>
>>      >>>>>> On my tservers, I see the following:
>>      >>>>>>
>>      >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR:
>> Error
>>      >>>>>> occurred during processing of message.
>>      >>>>>> java.lang.RuntimeException:
>>      >>>>>> org.apache.thrift.transport.TTransportException:
>>      >>>>>> java.net.SocketTimeoutException: Read timed out
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>      >>>>>>            at
>> java.security.AccessController.doPrivileged(Native
>>      >>>>>> Method)
>>      >>>>>>            at
>> javax.security.auth.Subject.doAs(Subject.java:356)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>      >>>>>>            at java.lang.Thread.run(Thread.java:745)
>>      >>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>>      >>>>>> java.net.SocketTimeoutException: Read timed out
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>      >>>>>>            at
>>      >>>>>>
>>     org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>      >>>>>>            ... 11 more
>>      >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>      >>>>>>            at java.net.SocketInputStream.socketRead0(Native
>>     Method)
>>      >>>>>>            at
>>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
>>      >>>>>>            at
>>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
>>      >>>>>>            at
>>      >>>>>>
>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>      >>>>>>            at
>>      >>>>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>      >>>>>>            at
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>
>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>      >>>>>>            ... 17 more
>>      >>>>>>
>>      >>>>>> Any ideas why?
>>      >>>>>>
>>      >>>>>> Thanks.
>>      >>>>>> -Simon
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>      >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>
>>      >>>>>> wrote:
>>      >>>>>>>
>>      >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>>      >>>>>>>
>>      >>>>>>> <snip>
>>      >>>>>>> Obtain a delegation token by calling {@link
>>      >>>>>>>
>>      >>>>>>>
>>      >>>>>>>
>>      >>>>>>>
>>
>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>      >>>>>>> </snip>
>>      >>>>>>>
>>      >>>>>>> You cannot create a usable DelegationToken as the client
>>     itself.
>>      >>>>>>>
>>      >>>>>>> Anyways, DelegationTokens are only relevant in cases where
>>     the client
>>      >>>>>>> Kerberos credentials are unavailable. The most common case
>>     is running
>>      >>>>>>> MapReduce jobs. If you are just interacting with Accumulo
>>     through the
>>      >>>>>>> Java
>>      >>>>>>> API, the KerberosToken is all you need to use.
>>      >>>>>>>
>>      >>>>>>> The user-manual likely just needs to be updated. I believe
>> the
>>      >>>>>>> DelegationTokenConfig was added after I wrote the initial
>>      >>>>>>> documentation.
>>      >>>>>>>
>>      >>>>>>>
>>      >>>>>>> Xu (Simon) Chen wrote:
>>      >>>>>>>>
>>      >>>>>>>> Hi folks,
>>      >>>>>>>>
>>      >>>>>>>> The latest kerberos doc seems to indicate that
>>     getDelegationToken
>>      >>>>>>>> can
>>      >>>>>>>> be
>>      >>>>>>>> called without any parameters:
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>
>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>      >>>>>>>>
>>      >>>>>>>> Yet the source code indicates a DelegationTokenConfig
>>     object must be
>>      >>>>>>>> passed in:
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>
>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>      >>>>>>>>
>>      >>>>>>>> Any ideas on how I should construct the
>> DelegationTokenConfig
>>      >>>>>>>> object?
>>      >>>>>>>>
>>      >>>>>>>> For context, I've been trying to get geomesa to work on my
>>     accumulo
>>      >>>>>>>> 1.7
>>      >>>>>>>> with kerberos turned on. Right now, the code is somewhat
>>     tied to
>>      >>>>>>>> password auth:
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>      >>>>>>>>
>>
>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>      >>>>>>>> My thought is that I should get a KerberosToken first, and
>>     then try
>>      >>>>>>>> generate a DelegationToken, which is passed back for later
>>      >>>>>>>> interactions
>>      >>>>>>>> between geomesa and accumulo.
>>      >>>>>>>>
>>      >>>>>>>> Thanks.
>>      >>>>>>>> -Simon
>>
>>
>>
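
[Editorial note: the following is a rough, untested sketch of the approach Josh describes above, for readers landing on this thread. It assumes Accumulo 1.7 client classes (`KerberosToken`, `DelegationTokenConfig`); the principal, keytab path, instance name, and token lifetime are illustrative placeholders, not values from the thread.]

```java
import java.io.File;
import java.util.concurrent.TimeUnit;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;

public class KerberosAuthSketch {
  public static void main(String[] args) throws Exception {
    // Log in from a keytab and build a KerberosToken. Per Josh's reply,
    // this token is all you need for direct Java API use.
    KerberosToken krbToken = new KerberosToken(
        "client@EXAMPLE.COM", new File("/path/to/client.keytab"), true);

    Connector conn = new ZooKeeperInstance("somecloud", "zoo1:2181")
        .getConnector("client@EXAMPLE.COM", krbToken);

    // A DelegationToken is only needed when the Kerberos credentials will
    // be unavailable later (e.g. MapReduce). This answers the original
    // question: the required argument is just a DelegationTokenConfig,
    // optionally with a requested lifetime.
    DelegationTokenConfig cfg =
        new DelegationTokenConfig().setTokenLifetime(8, TimeUnit.HOURS);
    AuthenticationToken delegationToken =
        conn.securityOperations().getDelegationToken(cfg);
  }
}
```

This sketch requires a running, SASL-enabled Accumulo 1.7 instance and a properly configured client.conf, so it is not runnable standalone.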

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
MiniAccumulo, yes. MockAccumulo, no. In general, we've nearly completely 
moved away from MockAccumulo. I wouldn't be surprised if it gets 
deprecated and removed soon.

https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java

Apache Directory provides a MiniKdc that can be used easily w/ 
MiniAccumulo. Many of the integration tests have already been altered to 
support running w/ or w/o kerberos.
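
[Editorial note: a minimal, untested sketch of the MiniKdc setup Josh refers to. It assumes the `MiniKdc` class from Hadoop's hadoop-minikdc artifact (which wraps Apache Directory); the work directory and principal names are made up for illustration. See the linked KerberosIT for the real wiring into MiniAccumuloCluster.]

```java
import java.io.File;
import java.util.Properties;

import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcSketch {
  public static void main(String[] args) throws Exception {
    File workDir = new File("target/mini-kdc");
    workDir.mkdirs();

    // Start an embedded KDC backed by Apache Directory.
    Properties kdcConf = MiniKdc.createConf();
    MiniKdc kdc = new MiniKdc(kdcConf, workDir);
    kdc.start();

    // Create a keytab holding client and server principals; a
    // MiniAccumuloCluster configured for SASL would log in against this KDC.
    File keytab = new File(workDir, "test.keytab");
    kdc.createPrincipal(keytab, "client", "accumulo/localhost");

    // ... run Kerberos-enabled integration tests here ...

    kdc.stop();
  }
}
```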

James Hughes wrote:
> Hi all,
>
> For GeoMesa, stats writing is quite secondary and optional, so we can
> sort that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.
>
> I haven't had a chance to read in details yet, so forgive me if this is
> covered in the docs.  Does either Mock or MiniAccumulo provide enough
> hooks to test out Kerberos integration effectively?  I suppose I'm
> really asking what kind of testing environment a project like GeoMesa
> would need to use to test out Accumulo 1.7.
>
> Even though MockAccumulo has a number of limitations, it is rather
> effective in unit tests which can be part of a quick  build.
>
> Thanks,
>
> Jim
>
> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
> <ma...@gmail.com>> wrote:
>
>     Nope, I am running the example as what the readme file suggested:
>
>     java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>     org.geomesa.QuickStart -instanceId somecloud -zookeepers
>     "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>     -tableName sometable
>
>     I'll raise this question with the geomesa folks, but you're right that
>     I can ignore it for now...
>
>     Thanks!
>     -Simon
>
>
>     On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
>     <ma...@gmail.com>> wrote:
>      > Are you running it via `mvn exec:java` by chance or netbeans?
>      >
>      >
>     http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>      >
>      > If that's just a background thread writing in Stats, it might
>     just be a
>      > factor of how you're invoking the program and you can ignore it.
>     I don't
>      > know enough about the inner-workings of GeoMesa to say one way or
>     the other.
>      >
>      >
>      > Xu (Simon) Chen wrote:
>      >>
>      >> Josh,
>      >>
>      >> Everything works well, except for one thing :-)
>      >>
>      >> I am running geomesa-quickstart program that ingest some data
>     and then
>      >> perform a simple query:
>      >> https://github.com/geomesa/geomesa-quickstart
>      >>
>      >> For some reason, the following error is emitted consistently at the
>      >> end of the execution, after outputting the correct result:
>      >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will
>     retry
>      >> java.lang.InterruptedException
>      >>          at java.lang.Object.wait(Native Method)
>      >>          at java.lang.Object.wait(Object.java:503)
>      >>          at
>      >> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>      >>          at
>     org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>      >>          at
>      >> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>      >>          at
>      >> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>      >>          at
>      >> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>      >>          at
>      >> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>      >>          at
>      >>
>     org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>      >>          at
>      >>
>     org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>      >>          at
>      >>
>     org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>      >>          at
>      >>
>     org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>      >>          at
>      >>
>     org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>      >>          at
>     scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>      >>          at
>      >>
>     org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>      >>          at
>      >>
>     org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>      >>          at
>      >>
>     org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>      >>          at
>      >>
>     java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>      >>          at
>      >> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>      >>          at
>      >>
>     java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>      >>          at
>      >>
>     java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>      >>          at
>      >>
>     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>      >>          at
>      >>
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>      >>          at java.lang.Thread.run(Thread.java:745)
>      >>
>      >> This is more annoying than a real problem. I am new to both accumulo
>      >> and geomesa, but I am curious what the problem might be.
>      >>
>      >> Thanks!
>      >> -Simon
>      >>
>      >>
>      >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
>     <ma...@gmail.com>>  wrote:
>      >>>
>      >>> Great! Glad to hear it. Please let us know how it works out!
>      >>>
>      >>>
>      >>> Xu (Simon) Chen wrote:
>      >>>>
>      >>>> Josh,
>      >>>>
>      >>>> You're right again.. Thanks!
>      >>>>
>      >>>> My ansible play actually pushed client.conf to all the server
>     config
>      >>>> directories, but didn't do anything for the clients, and that's my
>      >>>> problem. Now kerberos is working great for me.
>      >>>>
>      >>>> Thanks again!
>      >>>> -Simon
>      >>>>
>      >>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>      >>>> wrote:
>      >>>>>
>      >>>>> Simon,
>      >>>>>
>      >>>>> Did you create a client configuration file (~/.accumulo/config or
>      >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
>     Accumulo clients
>      >>>>> to
>      >>>>> actually use SASL when you're trying to use Kerberos
>     authentication.
>      >>>>> Your
>      >>>>> server is expecting that, but I would venture a guess that
>     your client
>      >>>>> isn't.
>      >>>>>
>      >>>>> See
>      >>>>>
>      >>>>>
>     http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>      >>>>>
>      >>>>>
>      >>>>> Xu (Simon) Chen wrote:
>      >>>>>>
>      >>>>>> Josh,
>      >>>>>>
>      >>>>>> Thanks. It makes sense...
>      >>>>>>
>      >>>>>> I used a KerberosToken, but my program got stuck when
>     running the
>      >>>>>> following:
>      >>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user,
>      >>>>>> krbToken)
>      >>>>>>
>      >>>>>> It looks like my client is stuck here:
>      >>>>>>
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>      >>>>>> failing in the receive part of
>      >>>>>>
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>      >>>>>>
>      >>>>>> On my tservers, I see the following:
>      >>>>>>
>      >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>      >>>>>> occurred during processing of message.
>      >>>>>> java.lang.RuntimeException:
>      >>>>>> org.apache.thrift.transport.TTransportException:
>      >>>>>> java.net.SocketTimeoutException: Read timed out
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>      >>>>>>            at java.security.AccessController.doPrivileged(Native
>      >>>>>> Method)
>      >>>>>>            at javax.security.auth.Subject.doAs(Subject.java:356)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>      >>>>>>            at java.lang.Thread.run(Thread.java:745)
>      >>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>      >>>>>> java.net.SocketTimeoutException: Read timed out
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>      >>>>>>            at
>      >>>>>>
>     org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>      >>>>>>            ... 11 more
>      >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>      >>>>>>            at java.net.SocketInputStream.socketRead0(Native
>     Method)
>      >>>>>>            at
>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
>      >>>>>>            at
>      >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
>      >>>>>>            at
>      >>>>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>      >>>>>>            at
>      >>>>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>      >>>>>>            at
>      >>>>>>
>      >>>>>>
>      >>>>>>
>     org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>      >>>>>>            ... 17 more
>      >>>>>>
>      >>>>>> Any ideas why?
>      >>>>>>
>      >>>>>> Thanks.
>      >>>>>> -Simon
>      >>>>>>
>      >>>>>>
>      >>>>>>
>      >>>>>>
>      >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>      >>>>>> wrote:
>      >>>>>>>
>      >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>      >>>>>>>
>      >>>>>>> <snip>
>      >>>>>>> Obtain a delegation token by calling {@link
>      >>>>>>>
>      >>>>>>>
>      >>>>>>>
>      >>>>>>>
>     SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>      >>>>>>> </snip>
>      >>>>>>>
>      >>>>>>> You cannot create a usable DelegationToken as the client
>     itself.
>      >>>>>>>
>      >>>>>>> Anyways, DelegationTokens are only relevant in cases where
>     the client
>      >>>>>>> Kerberos credentials are unavailable. The most common case
>     is running
>      >>>>>>> MapReduce jobs. If you are just interacting with Accumulo
>     through the
>      >>>>>>> Java
>      >>>>>>> API, the KerberosToken is all you need to use.
>      >>>>>>>
>      >>>>>>> The user-manual likely just needs to be updated. I believe the
>      >>>>>>> DelegationTokenConfig was added after I wrote the initial
>      >>>>>>> documentation.
>      >>>>>>>
>      >>>>>>>
>      >>>>>>> Xu (Simon) Chen wrote:
>      >>>>>>>>
>      >>>>>>>> Hi folks,
>      >>>>>>>>
>      >>>>>>>> The latest kerberos doc seems to indicate that
>     getDelegationToken
>      >>>>>>>> can
>      >>>>>>>> be
>      >>>>>>>> called without any parameters:
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>     https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>      >>>>>>>>
>      >>>>>>>> Yet the source code indicates a DelegationTokenConfig
>     object must be
>      >>>>>>>> passed in:
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>     https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>      >>>>>>>>
>      >>>>>>>> Any ideas on how I should construct the DelegationTokenConfig
>      >>>>>>>> object?
>      >>>>>>>>
>      >>>>>>>> For context, I've been trying to get geomesa to work on my
>     accumulo
>      >>>>>>>> 1.7
>      >>>>>>>> with kerberos turned on. Right now, the code is somewhat
>     tied to
>      >>>>>>>> password auth:
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>      >>>>>>>>
>     https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>      >>>>>>>> My thought is that I should get a KerberosToken first, and
>     then try
>      >>>>>>>> generate a DelegationToken, which is passed back for later
>      >>>>>>>> interactions
>      >>>>>>>> between geomesa and accumulo.
>      >>>>>>>>
>      >>>>>>>> Thanks.
>      >>>>>>>> -Simon
>
>

Re: kerberos auth, getDelegationToken

Posted by James Hughes <jn...@virginia.edu>.
Hi all,

For GeoMesa, stats writing is quite secondary and optional, so we can sort
that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.

I haven't had a chance to read the details yet, so forgive me if this is
covered in the docs.  Does either Mock or MiniAccumulo provide enough hooks
to test out Kerberos integration effectively?  I suppose I'm really asking
what kind of testing environment a project like GeoMesa would need to use
to test out Accumulo 1.7.

Even though MockAccumulo has a number of limitations, it is rather
effective in unit tests, which can be part of a quick build.

Thanks,

Jim

On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xc...@gmail.com> wrote:

> Nope, I am running the example as what the readme file suggested:
>
> java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
> org.geomesa.QuickStart -instanceId somecloud -zookeepers
> "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
> -tableName sometable
>
> I'll raise this question with the geomesa folks, but you're right that
> I can ignore it for now...
>
> Thanks!
> -Simon
>
>
> On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <jo...@gmail.com> wrote:
> > Are you running it via `mvn exec:java` by chance or netbeans?
> >
> >
> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
> >
> > If that's just a background thread writing in Stats, it might just be a
> > factor of how you're invoking the program and you can ignore it. I don't
> > know enough about the inner-workings of GeoMesa to say one way or the
> other.
> >
> >
> > Xu (Simon) Chen wrote:
> >>
> >> Josh,
> >>
> >> Everything works well, except for one thing :-)
> >>
> >> I am running geomesa-quickstart program that ingest some data and then
> >> perform a simple query:
> >> https://github.com/geomesa/geomesa-quickstart
> >>
> >> For some reason, the following error is emitted consistently at the
> >> end of the execution, after outputting the correct result:
> >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
> >> java.lang.InterruptedException
> >>          at java.lang.Object.wait(Native Method)
> >>          at java.lang.Object.wait(Object.java:503)
> >>          at
> >> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
> >>          at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
> >>          at
> >> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
> >>          at
> >> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
> >>          at
> >> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
> >>          at
> >> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
> >>          at
> >>
> org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
> >>          at
> >>
> org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
> >>          at
> >>
> org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
> >>          at
> >>
> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
> >>          at
> >>
> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
> >>          at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> >>          at
> >>
> org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
> >>          at
> >>
> org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
> >>          at
> >>
> org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
> >>          at
> >> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >>          at
> >> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> >>          at
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> >>          at
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> >>          at
> >>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>          at
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>          at java.lang.Thread.run(Thread.java:745)
> >>
> >> This is more annoying than a real problem. I am new to both accumulo
> >> and geomesa, but I am curious what the problem might be.
> >>
> >> Thanks!
> >> -Simon
> >>
> >>
> >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<jo...@gmail.com>
> wrote:
> >>>
> >>> Great! Glad to hear it. Please let us know how it works out!
> >>>
> >>>
> >>> Xu (Simon) Chen wrote:
> >>>>
> >>>> Josh,
> >>>>
> >>>> You're right again.. Thanks!
> >>>>
> >>>> My ansible play actually pushed client.conf to all the server config
> >>>> directories, but didn't do anything for the clients, and that's my
> >>>> problem. Now kerberos is working great for me.
> >>>>
> >>>> Thanks again!
> >>>> -Simon
> >>>>
> >>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser<jo...@gmail.com>
> >>>> wrote:
> >>>>>
> >>>>> Simon,
> >>>>>
> >>>>> Did you create a client configuration file (~/.accumulo/config or
> >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo
> clients
> >>>>> to
> >>>>> actually use SASL when you're trying to use Kerberos authentication.
> >>>>> Your
> >>>>> server is expecting that, but I would venture a guess that your
> client
> >>>>> isn't.
> >>>>>
> >>>>> See
> >>>>>
> >>>>>
> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
> >>>>>
> >>>>>
> >>>>> Xu (Simon) Chen wrote:
> >>>>>>
> >>>>>> Josh,
> >>>>>>
> >>>>>> Thanks. It makes sense...
> >>>>>>
> >>>>>> I used a KerberosToken, but my program got stuck when running the
> >>>>>> following:
> >>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user,
> >>>>>> krbToken)
> >>>>>>
> >>>>>> It looks like my client is stuck here:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
> >>>>>> failing in the receive part of
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
> >>>>>>
> >>>>>> On my tservers, I see the following:
> >>>>>>
> >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
> >>>>>> occurred during processing of message.
> >>>>>> java.lang.RuntimeException:
> >>>>>> org.apache.thrift.transport.TTransportException:
> >>>>>> java.net.SocketTimeoutException: Read timed out
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
> >>>>>>            at java.security.AccessController.doPrivileged(Native
> >>>>>> Method)
> >>>>>>            at javax.security.auth.Subject.doAs(Subject.java:356)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
> >>>>>>            at java.lang.Thread.run(Thread.java:745)
> >>>>>> Caused by: org.apache.thrift.transport.TTransportException:
> >>>>>> java.net.SocketTimeoutException: Read timed out
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
> >>>>>>            at
> >>>>>> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
> >>>>>>            at
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> >>>>>>            ... 11 more
> >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
> >>>>>>            at java.net.SocketInputStream.socketRead0(Native Method)
> >>>>>>            at
> >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
> >>>>>>            at
> >>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
> >>>>>>            at
> >>>>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> >>>>>>            at
> >>>>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> >>>>>>            at
> >>>>>>
> >>>>>>
> >>>>>>
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> >>>>>>            ... 17 more
> >>>>>>
> >>>>>> Any ideas why?
> >>>>>>
> >>>>>> Thanks.
> >>>>>> -Simon
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser<jo...@gmail.com>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>> Make sure you read the JavaDoc on DelegationToken:
> >>>>>>>
> >>>>>>> <snip>
> >>>>>>> Obtain a delegation token by calling {@link
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
> >>>>>>> </snip>
> >>>>>>>
> >>>>>>> You cannot create a usable DelegationToken as the client itself.
> >>>>>>>
> >>>>>>> Anyways, DelegationTokens are only relevant in cases where the
> client
> >>>>>>> Kerberos credentials are unavailable. The most common case is
> running
> >>>>>>> MapReduce jobs. If you are just interacting with Accumulo through
> the
> >>>>>>> Java
> >>>>>>> API, the KerberosToken is all you need to use.
> >>>>>>>
> >>>>>>> The user-manual likely just needs to be updated. I believe the
> >>>>>>> DelegationTokenConfig was added after I wrote the initial
> >>>>>>> documentation.
> >>>>>>>
> >>>>>>>
> >>>>>>> Xu (Simon) Chen wrote:
> >>>>>>>>
> >>>>>>>> Hi folks,
> >>>>>>>>
> >>>>>>>> The latest kerberos doc seems to indicate that getDelegationToken
> >>>>>>>> can
> >>>>>>>> be
> >>>>>>>> called without any parameters:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
> >>>>>>>>
> >>>>>>>> Yet the source code indicates a DelegationTokenConfig object must
> be
> >>>>>>>> passed in:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
> >>>>>>>>
> >>>>>>>> Any ideas on how I should construct the DelegationTokenConfig
> >>>>>>>> object?
> >>>>>>>>
> >>>>>>>> For context, I've been trying to get geomesa to work on my
> accumulo
> >>>>>>>> 1.7
> >>>>>>>> with kerberos turned on. Right now, the code is somewhat tied to
> >>>>>>>> password auth:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
> >>>>>>>> My thought is that I should get a KerberosToken first, and then
> try
> >>>>>>>> generate a DelegationToken, which is passed back for later
> >>>>>>>> interactions
> >>>>>>>> between geomesa and accumulo.
> >>>>>>>>
> >>>>>>>> Thanks.
> >>>>>>>> -Simon
>

Re: kerberos auth, getDelegationToken

Posted by "Xu (Simon) Chen" <xc...@gmail.com>.
Nope, I am running the example as the readme file suggested:

java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
org.geomesa.QuickStart -instanceId somecloud -zookeepers
"zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
-tableName sometable

I'll raise this question with the geomesa folks, but you're right that
I can ignore it for now...

Thanks!
-Simon


On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <jo...@gmail.com> wrote:
> Are you running it via `mvn exec:java` by chance or netbeans?
>
> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>
> If that's just a background thread writing in Stats, it might just be a
> factor of how you're invoking the program and you can ignore it. I don't
> know enough about the inner-workings of GeoMesa to say one way or the other.
>
>
> Xu (Simon) Chen wrote:
>>
>> Josh,
>>
>> Everything works well, except for one thing :-)
>>
>> I am running geomesa-quickstart program that ingest some data and then
>> perform a simple query:
>> https://github.com/geomesa/geomesa-quickstart
>>
>> For some reason, the following error is emitted consistently at the
>> end of the execution, after outputting the correct result:
>> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
>> java.lang.InterruptedException
>>          at java.lang.Object.wait(Native Method)
>>          at java.lang.Object.wait(Object.java:503)
>>          at
>> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>>          at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>>          at
>> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>>          at
>> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>>          at
>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>>          at
>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>>          at
>> org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>>          at
>> org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>>          at
>> org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>>          at
>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>>          at
>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>>          at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>>          at
>> org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>>          at
>> org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>>          at
>> org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>>          at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>          at
>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>>          at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>          at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>          at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>          at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>          at java.lang.Thread.run(Thread.java:745)
>>
>> This is more annoying than a real problem. I am new to both accumulo
>> and geomesa, but I am curious what the problem might be.
>>
>> Thanks!
>> -Simon
>>
>>
>> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<jo...@gmail.com>  wrote:
>>>
>>> Great! Glad to hear it. Please let us know how it works out!
>>>
>>>
>>> Xu (Simon) Chen wrote:
>>>>
>>>> Josh,
>>>>
>>>> You're right again.. Thanks!
>>>>
>>>> My ansible play actually pushed client.conf to all the server config
>>>> directories, but didn't do anything for the clients, and that's my
>>>> problem. Now kerberos is working great for me.
>>>>
>>>> Thanks again!
>>>> -Simon
>>>>
>>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser<jo...@gmail.com>
>>>> wrote:
>>>>>
>>>>> Simon,
>>>>>
>>>>> Did you create a client configuration file (~/.accumulo/config or
>>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo clients
>>>>> to
>>>>> actually use SASL when you're trying to use Kerberos authentication.
>>>>> Your
>>>>> server is expecting that, but I would venture a guess that your client
>>>>> isn't.
>>>>>
>>>>> See
>>>>>
>>>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>>>>>
>>>>>
>>>>> Xu (Simon) Chen wrote:
>>>>>>
>>>>>> Josh,
>>>>>>
>>>>>> Thanks. It makes sense...
>>>>>>
>>>>>> I used a KerberosToken, but my program got stuck when running the
>>>>>> following:
>>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user,
>>>>>> krbToken)
>>>>>>
>>>>>> It looks like my client is stuck here:
>>>>>>
>>>>>>
>>>>>>
>>>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>>>>> failing in the receive part of
>>>>>>
>>>>>>
>>>>>>
>>>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>>>>>
>>>>>> On my tservers, I see the following:
>>>>>>
>>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>>>>>> occurred during processing of message.
>>>>>> java.lang.RuntimeException:
>>>>>> org.apache.thrift.transport.TTransportException:
>>>>>> java.net.SocketTimeoutException: Read timed out
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>>>>>            at java.security.AccessController.doPrivileged(Native
>>>>>> Method)
>>>>>>            at javax.security.auth.Subject.doAs(Subject.java:356)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>>>>>            at java.lang.Thread.run(Thread.java:745)
>>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>>>>>> java.net.SocketTimeoutException: Read timed out
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>>>>>            at
>>>>>> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>>>>>            at
>>>>>>
>>>>>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>>>>>            ... 11 more
>>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>>>>            at java.net.SocketInputStream.socketRead0(Native Method)
>>>>>>            at
>>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
>>>>>>            at
>>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
>>>>>>            at
>>>>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>>>>>            at
>>>>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>>>>>            at
>>>>>>
>>>>>>
>>>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>>>>            ... 17 more
>>>>>>
>>>>>> Any ideas why?
>>>>>>
>>>>>> Thanks.
>>>>>> -Simon
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser<jo...@gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> Make sure you read the JavaDoc on DelegationToken:
>>>>>>>
>>>>>>> <snip>
>>>>>>> Obtain a delegation token by calling {@link
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>>>>>> </snip>
>>>>>>>
>>>>>>> You cannot create a usable DelegationToken as the client itself.
>>>>>>>
>>>>>>> Anyways, DelegationTokens are only relevant in cases where the client
>>>>>>> Kerberos credentials are unavailable. The most common case is running
>>>>>>> MapReduce jobs. If you are just interacting with Accumulo through the
>>>>>>> Java
>>>>>>> API, the KerberosToken is all you need to use.
>>>>>>>
>>>>>>> The user-manual likely just needs to be updated. I believe the
>>>>>>> DelegationTokenConfig was added after I wrote the initial
>>>>>>> documentation.
>>>>>>>
>>>>>>>
>>>>>>> Xu (Simon) Chen wrote:
>>>>>>>>
>>>>>>>> Hi folks,
>>>>>>>>
>>>>>>>> The latest kerberos doc seems to indicate that getDelegationToken
>>>>>>>> can
>>>>>>>> be
>>>>>>>> called without any parameters:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>>>>>>
>>>>>>>> Yet the source code indicates a DelegationTokenConfig object must be
>>>>>>>> passed in:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>>>>>>
>>>>>>>> Any ideas on how I should construct the DelegationTokenConfig
>>>>>>>> object?
>>>>>>>>
>>>>>>>> For context, I've been trying to get geomesa to work on my accumulo
>>>>>>>> 1.7
>>>>>>>> with kerberos turned on. Right now, the code is somewhat tied to
>>>>>>>> password auth:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>>>>>> My thought is that I should get a KerberosToken first, and then try
>>>>>>>> generate a DelegationToken, which is passed back for later
>>>>>>>> interactions
>>>>>>>> between geomesa and accumulo.
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>> -Simon
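
To answer the original question about constructing the DelegationTokenConfig: the following is a hedged sketch against the Accumulo 1.7 public API. The instance name and ZooKeeper quorum are placeholders, and it assumes you have already obtained Kerberos credentials (e.g. via kinit) as a principal with the DELEGATION_TOKEN_PERMISSION.

```java
import java.util.concurrent.TimeUnit;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;

public class DelegationTokenSketch {
  public static void main(String[] args) throws Exception {
    // Placeholders: substitute your instance name and ZooKeeper quorum.
    String instance = "accumulo";
    String zookeepers = "zk1:2181,zk2:2181,zk3:2181";

    // Uses the currently logged-in Kerberos credentials (kinit / UGI).
    KerberosToken krbToken = new KerberosToken();
    Connector conn = new ZooKeeperInstance(instance, zookeepers)
        .getConnector(krbToken.getPrincipal(), krbToken);

    // An empty DelegationTokenConfig requests the server's default
    // lifetime; setTokenLifetime can only shorten that, not extend it.
    DelegationTokenConfig cfg = new DelegationTokenConfig()
        .setTokenLifetime(12, TimeUnit.HOURS);
    AuthenticationToken delegationToken =
        conn.securityOperations().getDelegationToken(cfg);
    // delegationToken can then be handed to MapReduce configuration
    // (e.g. AccumuloInputFormat.setConnectorInfo) where Kerberos
    // credentials are unavailable.
  }
}
```

As noted later in the thread, this is only needed for MapReduce-style use; for direct Java API interaction the KerberosToken alone suffices.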

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
Are you running it via `mvn exec:java` or in NetBeans, by chance?

http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E

If that's just a background thread writing in Stats, it might just be a 
factor of how you're invoking the program and you can ignore it. I don't 
know enough about the inner-workings of GeoMesa to say one way or the other.
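
If it is the `mvn exec:java` case, a workaround is to run the program in its own JVM rather than inside Maven's, since `exec:java` interrupts leftover non-daemon threads (such as a background stat-writer) at shutdown. The jar name and main class below are placeholders; substitute your own.

```shell
# Build a runnable artifact, then launch it in a standalone JVM
# instead of via `mvn exec:java`, so Maven's shutdown does not
# interrupt background threads.
mvn clean package
java -cp target/quickstart-with-dependencies.jar com.example.QuickStart
```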

Xu (Simon) Chen wrote:
> Josh,
>
> Everything works well, except for one thing :-)
>
> I am running geomesa-quickstart program that ingest some data and then
> perform a simple query:
> https://github.com/geomesa/geomesa-quickstart
>
> For some reason, the following error is emitted consistently at the
> end of the execution, after outputting the correct result:
> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
> java.lang.InterruptedException
>          at java.lang.Object.wait(Native Method)
>          at java.lang.Object.wait(Object.java:503)
>          at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>          at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>          at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>          at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>          at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>          at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>          at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>          at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>          at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>          at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>          at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:745)
>
> This is more annoying than a real problem. I am new to both accumulo
> and geomesa, but I am curious what the problem might be.
>
> Thanks!
> -Simon
>
>
> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<jo...@gmail.com>  wrote:
>> Great! Glad to hear it. Please let us know how it works out!
>>
>>
>> Xu (Simon) Chen wrote:
>>> Josh,
>>>
>>> You're right again.. Thanks!
>>>
>>> My ansible play actually pushed client.conf to all the server config
>>> directories, but didn't do anything for the clients, and that's my
>>> problem. Now kerberos is working great for me.
>>>
>>> Thanks again!
>>> -Simon
>>>
>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser<jo...@gmail.com>   wrote:
>>>> Simon,
>>>>
>>>> Did you create a client configuration file (~/.accumulo/config or
>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo clients
>>>> to
>>>> actually use SASL when you're trying to use Kerberos authentication. Your
>>>> server is expecting that, but I would venture a guess that your client
>>>> isn't.
>>>>
>>>> See
>>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>>>>
>>>>
>>>> Xu (Simon) Chen wrote:
>>>>> Josh,
>>>>>
>>>>> Thanks. It makes sense...
>>>>>
>>>>> I used a KerberosToken, but my program got stuck when running the
>>>>> following:
>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>>>>>
>>>>> It looks like my client is stuck here:
>>>>>
>>>>>
>>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>>>> failing in the receive part of
>>>>>
>>>>>
>>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>>>>
>>>>> On my tservers, I see the following:
>>>>>
>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>>>>> occurred during processing of message.
>>>>> java.lang.RuntimeException:
>>>>> org.apache.thrift.transport.TTransportException:
>>>>> java.net.SocketTimeoutException: Read timed out
>>>>>            at
>>>>>
>>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>>>>            at
>>>>>
>>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>>>>            at
>>>>>
>>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>>>>            at java.security.AccessController.doPrivileged(Native Method)
>>>>>            at javax.security.auth.Subject.doAs(Subject.java:356)
>>>>>            at
>>>>>
>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>>>>            at
>>>>>
>>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>>>>            at
>>>>>
>>>>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>>>>            at
>>>>>
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>            at
>>>>>
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>            at
>>>>>
>>>>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>>>>            at java.lang.Thread.run(Thread.java:745)
>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>>>>> java.net.SocketTimeoutException: Read timed out
>>>>>            at
>>>>>
>>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>>>>            at
>>>>> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>>>            at
>>>>>
>>>>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>>>>            at
>>>>>
>>>>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>>>>            at
>>>>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>>>>            at
>>>>>
>>>>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>>>>            at
>>>>>
>>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>>>>            ... 11 more
>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>>>            at java.net.SocketInputStream.socketRead0(Native Method)
>>>>>            at java.net.SocketInputStream.read(SocketInputStream.java:152)
>>>>>            at java.net.SocketInputStream.read(SocketInputStream.java:122)
>>>>>            at
>>>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>>>>            at
>>>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>>>>            at
>>>>>
>>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>>>            ... 17 more
>>>>>
>>>>> Any ideas why?
>>>>>
>>>>> Thanks.
>>>>> -Simon
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser<jo...@gmail.com>
>>>>> wrote:
>>>>>> Make sure you read the JavaDoc on DelegationToken:
>>>>>>
>>>>>> <snip>
>>>>>> Obtain a delegation token by calling {@link
>>>>>>
>>>>>>
>>>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>>>>> </snip>
>>>>>>
>>>>>> You cannot create a usable DelegationToken as the client itself.
>>>>>>
>>>>>> Anyways, DelegationTokens are only relevant in cases where the client
>>>>>> Kerberos credentials are unavailable. The most common case is running
>>>>>> MapReduce jobs. If you are just interacting with Accumulo through the
>>>>>> Java
>>>>>> API, the KerberosToken is all you need to use.
>>>>>>
>>>>>> The user-manual likely just needs to be updated. I believe the
>>>>>> DelegationTokenConfig was added after I wrote the initial
>>>>>> documentation.
>>>>>>
>>>>>>
>>>>>> Xu (Simon) Chen wrote:
>>>>>>> Hi folks,
>>>>>>>
>>>>>>> The latest kerberos doc seems to indicate that getDelegationToken can
>>>>>>> be
>>>>>>> called without any parameters:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>>>>>
>>>>>>> Yet the source code indicates a DelegationTokenConfig object must be
>>>>>>> passed in:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>>>>>
>>>>>>> Any ideas on how I should construct the DelegationTokenConfig object?
>>>>>>>
>>>>>>> For context, I've been trying to get geomesa to work on my accumulo
>>>>>>> 1.7
>>>>>>> with kerberos turned on. Right now, the code is somewhat tied to
>>>>>>> password auth:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>>>>> My thought is that I should get a KerberosToken first, and then try
>>>>>>> generate a DelegationToken, which is passed back for later
>>>>>>> interactions
>>>>>>> between geomesa and accumulo.
>>>>>>>
>>>>>>> Thanks.
>>>>>>> -Simon
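
Josh's client-configuration advice can be sketched as a minimal client configuration file. This is a hedged example: the two properties shown are what a Kerberos-enabled 1.7 client typically needs, and the value `accumulo` for the server primary is an assumption that must match the leading component of your server principal (e.g. `accumulo/_HOST@REALM`).

```
# ~/.accumulo/config or $ACCUMULO_CONF_DIR/client.conf
instance.rpc.sasl.enabled=true
kerberos.server.primary=accumulo
```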

Re: kerberos auth, getDelegationToken

Posted by "Xu (Simon) Chen" <xc...@gmail.com>.
Josh,

Everything works well, except for one thing :-)

I am running the geomesa-quickstart program, which ingests some data and then
performs a simple query:
https://github.com/geomesa/geomesa-quickstart

For some reason, the following error is emitted consistently at the
end of the execution, after outputting the correct result:
15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
        at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
        at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
        at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
        at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
        at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
        at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
        at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
        at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
        at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
        at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
        at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
        at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

This is more of an annoyance than a real problem. I am new to both Accumulo
and GeoMesa, but I am curious what the cause might be.

Thanks!
-Simon


On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser <jo...@gmail.com> wrote:
> Great! Glad to hear it. Please let us know how it works out!
>
>
> Xu (Simon) Chen wrote:
>>
>> Josh,
>>
>> You're right again.. Thanks!
>>
>> My ansible play actually pushed client.conf to all the server config
>> directories, but didn't do anything for the clients, and that's my
>> problem. Now kerberos is working great for me.
>>
>> Thanks again!
>> -Simon
>>
>> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser<jo...@gmail.com>  wrote:
>>>
>>> Simon,
>>>
>>> Did you create a client configuration file (~/.accumulo/config or
>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo clients
>>> to
>>> actually use SASL when you're trying to use Kerberos authentication. Your
>>> server is expecting that, but I would venture a guess that your client
>>> isn't.
>>>
>>> See
>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>>>
>>>
>>> Xu (Simon) Chen wrote:
>>>>
>>>> Josh,
>>>>
>>>> Thanks. It makes sense...
>>>>
>>>> I used a KerberosToken, but my program got stuck when running the
>>>> following:
>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>>>>
>>>> It looks like my client is stuck here:
>>>>
>>>>
>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>>> failing in the receive part of
>>>>
>>>>
>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>>>
>>>> On my tservers, I see the following:
>>>>
>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>>>> occurred during processing of message.
>>>> java.lang.RuntimeException:
>>>> org.apache.thrift.transport.TTransportException:
>>>> java.net.SocketTimeoutException: Read timed out
>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>>>           at
>>>>
>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>>>           at
>>>>
>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>>>           at java.security.AccessController.doPrivileged(Native Method)
>>>>           at javax.security.auth.Subject.doAs(Subject.java:356)
>>>>           at
>>>>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>>>           at
>>>>
>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>>>           at
>>>>
>>>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>>>           at
>>>>
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>           at
>>>>
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>           at
>>>>
>>>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>>>           at java.lang.Thread.run(Thread.java:745)
>>>> Caused by: org.apache.thrift.transport.TTransportException:
>>>> java.net.SocketTimeoutException: Read timed out
>>>>           at
>>>>
>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>>>           at
>>>> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>>>           at
>>>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>>>           ... 11 more
>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>>           at java.net.SocketInputStream.socketRead0(Native Method)
>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:152)
>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:122)
>>>>           at
>>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>>>           at
>>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>>>           at
>>>>
>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>>           ... 17 more
>>>>
>>>> Any ideas why?
>>>>
>>>> Thanks.
>>>> -Simon
>>>>
>>>>
>>>>
>>>>
>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser<jo...@gmail.com>
>>>> wrote:
>>>>>
>>>>> Make sure you read the JavaDoc on DelegationToken:
>>>>>
>>>>> <snip>
>>>>> Obtain a delegation token by calling {@link
>>>>>
>>>>>
>>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>>>> </snip>
>>>>>
>>>>> You cannot create a usable DelegationToken as the client itself.
>>>>>
>>>>> Anyways, DelegationTokens are only relevant in cases where the client
>>>>> Kerberos credentials are unavailable. The most common case is running
>>>>> MapReduce jobs. If you are just interacting with Accumulo through the
>>>>> Java
>>>>> API, the KerberosToken is all you need to use.
>>>>>
>>>>> The user-manual likely just needs to be updated. I believe the
>>>>> DelegationTokenConfig was added after I wrote the initial
>>>>> documentation.
>>>>>
>>>>>
>>>>> Xu (Simon) Chen wrote:
>>>>>>
>>>>>> Hi folks,
>>>>>>
>>>>>> The latest kerberos doc seems to indicate that getDelegationToken can
>>>>>> be
>>>>>> called without any parameters:
>>>>>>
>>>>>>
>>>>>>
>>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>>>>
>>>>>> Yet the source code indicates a DelegationTokenConfig object must be
>>>>>> passed in:
>>>>>>
>>>>>>
>>>>>>
>>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>>>>
>>>>>> Any ideas on how I should construct the DelegationTokenConfig object?
>>>>>>
>>>>>> For context, I've been trying to get geomesa to work on my accumulo
>>>>>> 1.7
>>>>>> with kerberos turned on. Right now, the code is somewhat tied to
>>>>>> password auth:
>>>>>>
>>>>>>
>>>>>>
>>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>>>> My thought is that I should get a KerberosToken first, and then try
>>>>>> generate a DelegationToken, which is passed back for later
>>>>>> interactions
>>>>>> between geomesa and accumulo.
>>>>>>
>>>>>> Thanks.
>>>>>> -Simon

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
Great! Glad to hear it. Please let us know how it works out!

Xu (Simon) Chen wrote:
> Josh,
>
> You're right again.. Thanks!
>
> My ansible play actually pushed client.conf to all the server config
> directories, but didn't do anything for the clients, and that's my
> problem. Now kerberos is working great for me.
>
> Thanks again!
> -Simon
>
> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser<jo...@gmail.com>  wrote:
>> Simon,
>>
>> Did you create a client configuration file (~/.accumulo/config or
>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo clients to
>> actually use SASL when you're trying to use Kerberos authentication. Your
>> server is expecting that, but I would venture a guess that your client
>> isn't.
>>
>> See
>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>>
>>
>> Xu (Simon) Chen wrote:
>>> Josh,
>>>
>>> Thanks. It makes sense...
>>>
>>> I used a KerberosToken, but my program got stuck when running the
>>> following:
>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>>>
>>> It looks like my client is stuck here:
>>>
>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>> failing in the receive part of
>>>
>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>>
>>> On my tservers, I see the following:
>>>
>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>>> occurred during processing of message.
>>> java.lang.RuntimeException:
>>> org.apache.thrift.transport.TTransportException:
>>> java.net.SocketTimeoutException: Read timed out
>>>           at
>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>>           at
>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>>           at
>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>>           at java.security.AccessController.doPrivileged(Native Method)
>>>           at javax.security.auth.Subject.doAs(Subject.java:356)
>>>           at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>>           at
>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>>           at
>>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>>           at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>           at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>           at
>>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>>           at java.lang.Thread.run(Thread.java:745)
>>> Caused by: org.apache.thrift.transport.TTransportException:
>>> java.net.SocketTimeoutException: Read timed out
>>>           at
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>>           at
>>> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>           at
>>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>>           at
>>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>>           at
>>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>>           at
>>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>>           at
>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>>           ... 11 more
>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>           at java.net.SocketInputStream.socketRead0(Native Method)
>>>           at java.net.SocketInputStream.read(SocketInputStream.java:152)
>>>           at java.net.SocketInputStream.read(SocketInputStream.java:122)
>>>           at
>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>>           at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>>           at
>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>           ... 17 more
>>>
>>> Any ideas why?
>>>
>>> Thanks.
>>> -Simon
>>>
>>>
>>>
>>>
>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser<jo...@gmail.com>   wrote:
>>>> Make sure you read the JavaDoc on DelegationToken:
>>>>
>>>> <snip>
>>>> Obtain a delegation token by calling {@link
>>>>
>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>>> </snip>
>>>>
>>>> You cannot create a usable DelegationToken as the client itself.
>>>>
>>>> Anyways, DelegationTokens are only relevant in cases where the client
>>>> Kerberos credentials are unavailable. The most common case is running
>>>> MapReduce jobs. If you are just interacting with Accumulo through the
>>>> Java
>>>> API, the KerberosToken is all you need to use.
>>>>
>>>> The user-manual likely just needs to be updated. I believe the
>>>> DelegationTokenConfig was added after I wrote the initial documentation.
>>>>
>>>>
>>>> Xu (Simon) Chen wrote:
>>>>> Hi folks,
>>>>>
>>>>> The latest kerberos doc seems to indicate that getDelegationToken can be
>>>>> called without any parameters:
>>>>>
>>>>>
>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>>>
>>>>> Yet the source code indicates a DelegationTokenConfig object must be
>>>>> passed in:
>>>>>
>>>>>
>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>>>
>>>>> Any ideas on how I should construct the DelegationTokenConfig object?
>>>>>
>>>>> For context, I've been trying to get geomesa to work on my accumulo 1.7
>>>>> with kerberos turned on. Right now, the code is somewhat tied to
>>>>> password auth:
>>>>>
>>>>>
>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>>> My thought is that I should get a KerberosToken first, and then try
>>>>> generate a DelegationToken, which is passed back for later interactions
>>>>> between geomesa and accumulo.
>>>>>
>>>>> Thanks.
>>>>> -Simon

Re: kerberos auth, getDelegationToken

Posted by "Xu (Simon) Chen" <xc...@gmail.com>.
Josh,

You're right again.. Thanks!

My Ansible play actually pushed client.conf to all the server config
directories but didn't do anything for the clients, and that was my
problem. Now Kerberos is working great for me.

Thanks again!
-Simon

On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser <jo...@gmail.com> wrote:
> Simon,
>
> Did you create a client configuration file (~/.accumulo/config or
> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo clients to
> actually use SASL when you're trying to use Kerberos authentication. Your
> server is expecting that, but I would venture a guess that your client
> isn't.
>
> See
> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
>
>
> Xu (Simon) Chen wrote:
>>
>> Josh,
>>
>> Thanks. It makes sense...
>>
>> I used a KerberosToken, but my program got stuck when running the
>> following:
>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>>
>> It looks like my client is stuck here:
>>
>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>> failing in the receive part of
>>
>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>
>> On my tservers, I see the following:
>>
>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>> occurred during processing of message.
>> java.lang.RuntimeException:
>> org.apache.thrift.transport.TTransportException:
>> java.net.SocketTimeoutException: Read timed out
>>          at
>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>          at
>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>          at
>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>          at java.security.AccessController.doPrivileged(Native Method)
>>          at javax.security.auth.Subject.doAs(Subject.java:356)
>>          at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>          at
>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>          at
>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>          at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>          at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>          at
>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>          at java.lang.Thread.run(Thread.java:745)
>> Caused by: org.apache.thrift.transport.TTransportException:
>> java.net.SocketTimeoutException: Read timed out
>>          at
>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>          at
>> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>          at
>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>          at
>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>          at
>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>          at
>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>          at
>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>          ... 11 more
>> Caused by: java.net.SocketTimeoutException: Read timed out
>>          at java.net.SocketInputStream.socketRead0(Native Method)
>>          at java.net.SocketInputStream.read(SocketInputStream.java:152)
>>          at java.net.SocketInputStream.read(SocketInputStream.java:122)
>>          at
>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>          at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>          at
>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>          ... 17 more
>>
>> Any ideas why?
>>
>> Thanks.
>> -Simon
>>
>>
>>
>>
>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser<jo...@gmail.com>  wrote:
>>>
>>> Make sure you read the JavaDoc on DelegationToken:
>>>
>>> <snip>
>>> Obtain a delegation token by calling {@link
>>>
>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>> </snip>
>>>
>>> You cannot create a usable DelegationToken as the client itself.
>>>
>>> Anyways, DelegationTokens are only relevant in cases where the client
>>> Kerberos credentials are unavailable. The most common case is running
>>> MapReduce jobs. If you are just interacting with Accumulo through the
>>> Java
>>> API, the KerberosToken is all you need to use.
>>>
>>> The user-manual likely just needs to be updated. I believe the
>>> DelegationTokenConfig was added after I wrote the initial documentation.
>>>
>>>
>>> Xu (Simon) Chen wrote:
>>>>
>>>> Hi folks,
>>>>
>>>> The latest kerberos doc seems to indicate that getDelegationToken can be
>>>> called without any parameters:
>>>>
>>>>
>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>>
>>>> Yet the source code indicates a DelegationTokenConfig object must be
>>>> passed in:
>>>>
>>>>
>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>>
>>>> Any ideas on how I should construct the DelegationTokenConfig object?
>>>>
>>>> For context, I've been trying to get geomesa to work on my accumulo 1.7
>>>> with kerberos turned on. Right now, the code is somewhat tied to
>>>> password auth:
>>>>
>>>>
>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>> My thought is that I should get a KerberosToken first, and then try
>>>> generate a DelegationToken, which is passed back for later interactions
>>>> between geomesa and accumulo.
>>>>
>>>> Thanks.
>>>> -Simon

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
Simon,

Did you create a client configuration file (~/.accumulo/config or 
$ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo clients 
to actually use SASL when you're trying to use Kerberos authentication. 
Your server is expecting that, but I would venture a guess that your 
client isn't.

See 
http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
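For reference, a minimal client.conf enabling SASL looks roughly like the
following (a sketch; the qop and primary values shown are the defaults and
must match what the servers are configured with, e.g. a server principal of
accumulo/_HOST@REALM implies a primary of "accumulo"):

```properties
# Tell Accumulo clients to use SASL (Kerberos) for RPCs
instance.rpc.sasl.enabled=true
# Quality of protection; must agree with the servers' rpc.sasl.qop
rpc.sasl.qop=auth
# First component of the servers' Kerberos principal
kerberos.server.primary=accumulo
```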

Xu (Simon) Chen wrote:
> Josh,
>
> Thanks. It makes sense...
>
> I used a KerberosToken, but my program got stuck when running the following:
> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>
> It looks like my client is stuck here:
> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
> failing in the receive part of
> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>
> On my tservers, I see the following:
>
> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
> occurred during processing of message.
> java.lang.RuntimeException:
> org.apache.thrift.transport.TTransportException:
> java.net.SocketTimeoutException: Read timed out
>          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>          at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>          at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>          at java.security.AccessController.doPrivileged(Native Method)
>          at javax.security.auth.Subject.doAs(Subject.java:356)
>          at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>          at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>          at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>          at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException:
> java.net.SocketTimeoutException: Read timed out
>          at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>          at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>          at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>          at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>          at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>          at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>          ... 11 more
> Caused by: java.net.SocketTimeoutException: Read timed out
>          at java.net.SocketInputStream.socketRead0(Native Method)
>          at java.net.SocketInputStream.read(SocketInputStream.java:152)
>          at java.net.SocketInputStream.read(SocketInputStream.java:122)
>          at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>          at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>          at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>          ... 17 more
>
> Any ideas why?
>
> Thanks.
> -Simon
>
>
>
>
> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser<jo...@gmail.com>  wrote:
>> Make sure you read the JavaDoc on DelegationToken:
>>
>> <snip>
>> Obtain a delegation token by calling {@link
>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>> </snip>
>>
>> You cannot create a usable DelegationToken as the client itself.
>>
>> Anyways, DelegationTokens are only relevant in cases where the client
>> Kerberos credentials are unavailable. The most common case is running
>> MapReduce jobs. If you are just interacting with Accumulo through the Java
>> API, the KerberosToken is all you need to use.
>>
>> The user-manual likely just needs to be updated. I believe the
>> DelegationTokenConfig was added after I wrote the initial documentation.
>>
>>
>> Xu (Simon) Chen wrote:
>>> Hi folks,
>>>
>>> The latest kerberos doc seems to indicate that getDelegationToken can be
>>> called without any parameters:
>>>
>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>
>>> Yet the source code indicates a DelegationTokenConfig object must be
>>> passed in:
>>>
>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>
>>> Any ideas on how I should construct the DelegationTokenConfig object?
>>>
>>> For context, I've been trying to get geomesa to work on my accumulo 1.7
>>> with kerberos turned on. Right now, the code is somewhat tied to
>>> password auth:
>>>
>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>> My thought is that I should get a KerberosToken first, and then try
>>> generate a DelegationToken, which is passed back for later interactions
>>> between geomesa and accumulo.
>>>
>>> Thanks.
>>> -Simon

Re: kerberos auth, getDelegationToken

Posted by "Xu (Simon) Chen" <xc...@gmail.com>.
Josh,

Thanks. It makes sense...

I used a KerberosToken, but my program got stuck when running the following:
new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)

It looks like my client is stuck here:
https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
failing in the receive part of
org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().

On my tservers, I see the following:

2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
occurred during processing of message.
java.lang.RuntimeException:
org.apache.thrift.transport.TTransportException:
java.net.SocketTimeoutException: Read timed out
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
        at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
        at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException:
java.net.SocketTimeoutException: Read timed out
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 11 more
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:152)
        at java.net.SocketInputStream.read(SocketInputStream.java:122)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
        ... 17 more

Any ideas why?

Thanks.
-Simon




On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser <jo...@gmail.com> wrote:
> Make sure you read the JavaDoc on DelegationToken:
>
> <snip>
> Obtain a delegation token by calling {@link
> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
> </snip>
>
> You cannot create a usable DelegationToken as the client itself.
>
> Anyways, DelegationTokens are only relevant in cases where the client
> Kerberos credentials are unavailable. The most common case is running
> MapReduce jobs. If you are just interacting with Accumulo through the Java
> API, the KerberosToken is all you need to use.
>
> The user-manual likely just needs to be updated. I believe the
> DelegationTokenConfig was added after I wrote the initial documentation.
>
>
> Xu (Simon) Chen wrote:
>>
>> Hi folks,
>>
>> The latest kerberos doc seems to indicate that getDelegationToken can be
>> called without any parameters:
>>
>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>
>> Yet the source code indicates a DelegationTokenConfig object must be
>> passed in:
>>
>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>
>> Any ideas on how I should construct the DelegationTokenConfig object?
>>
>> For context, I've been trying to get geomesa to work on my accumulo 1.7
>> with kerberos turned on. Right now, the code is somewhat tied to
>> password auth:
>>
>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>> My thought is that I should get a KerberosToken first, and then try
>> generate a DelegationToken, which is passed back for later interactions
>> between geomesa and accumulo.
>>
>> Thanks.
>> -Simon

Re: kerberos auth, getDelegationToken

Posted by Josh Elser <jo...@gmail.com>.
Make sure you read the JavaDoc on DelegationToken:

<snip>
Obtain a delegation token by calling {@link 
SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
</snip>

You cannot construct a usable DelegationToken on the client side by 
yourself; it must be obtained from the server through that call.

Anyway, DelegationTokens are only relevant in cases where the client's 
Kerberos credentials are unavailable. The most common case is running 
MapReduce jobs. If you are just interacting with Accumulo through the 
Java API, the KerberosToken is all you need to use.

The user manual likely just needs to be updated. I believe the 
DelegationTokenConfig was added after I wrote the initial documentation.
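To make that concrete, the flow described above might look roughly like
this against the Accumulo 1.7 client API (a sketch only; the instance
name, ZooKeeper quorum, and principal are placeholders, and a valid
Kerberos ticket, e.g. from kinit, is assumed):

```java
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.DelegationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;

public class KerberosLoginSketch {
  public static void main(String[] args) throws Exception {
    String principal = "user@EXAMPLE.COM"; // placeholder

    // Uses the current Kerberos credentials (ticket cache / UGI login)
    KerberosToken krbToken = new KerberosToken(principal);

    // For direct Java API use, the KerberosToken alone is sufficient
    Connector conn = new ZooKeeperInstance("myInstance", "zk1:2181,zk2:2181")
        .getConnector(principal, krbToken);

    // Only when credentials must be shipped elsewhere (e.g. MapReduce tasks)
    // do you ask the *server* for a delegation token; a default-constructed
    // DelegationTokenConfig is acceptable.
    DelegationToken delToken =
        conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
  }
}
```

Note the config object carries optional knobs (such as a requested token
lifetime) but can simply be empty for the common case.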

Xu (Simon) Chen wrote:
> Hi folks,
>
> The latest kerberos doc seems to indicate that getDelegationToken can be
> called without any parameters:
> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>
> Yet the source code indicates a DelegationTokenConfig object must be
> passed in:
> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>
> Any ideas on how I should construct the DelegationTokenConfig object?
>
> For context, I've been trying to get geomesa to work on my accumulo 1.7
> with kerberos turned on. Right now, the code is somewhat tied to
> password auth:
> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
> My thought is that I should get a KerberosToken first, and then try
> generate a DelegationToken, which is passed back for later interactions
> between geomesa and accumulo.
>
> Thanks.
> -Simon