Posted to user@accumulo.apache.org by Sean Busbey <bu...@cloudera.com> on 2015/06/09 04:10:17 UTC

Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Josh's comment below made me realize we still haven't formally deprecated
MockAccumulo.

What do folks think about doing it soon-ish with an aim of removing it in
Accumulo 3.0? (that's version three, so that it can remain deprecated for
all of version 2).


-Sean

On Sun, Jun 7, 2015 at 12:37 PM, Josh Elser <jo...@gmail.com> wrote:

> MiniAccumulo, yes. MockAccumulo, no. In general, we've nearly completely
> moved away from MockAccumulo. I wouldn't be surprised if it gets deprecated
> and removed soon.
>
>
> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>
> Apache Directory provides a MiniKdc that can be used easily w/
> MiniAccumulo. Many of the integration tests have already been altered to
> support running w/ or w/o kerberos.
>
> James Hughes wrote:
>
>> Hi all,
>>
>> For GeoMesa, stats writing is quite secondary and optional, so we can
>> sort that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.
>>
>> I haven't had a chance to read in detail yet, so forgive me if this is
>> covered in the docs.  Does either Mock or MiniAccumulo provide enough
>> hooks to test out Kerberos integration effectively?  I suppose I'm
>> really asking what kind of testing environment a project like GeoMesa
>> would need to use to test out Accumulo 1.7.
>>
>> Even though MockAccumulo has a number of limitations, it is rather
>> effective in unit tests, which can be part of a quick build.
>>
>> Thanks,
>>
>> Jim
>>
>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
>> <ma...@gmail.com>> wrote:
>>
>>     Nope, I am running the example as the readme file suggested:
>>
>>     java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>>     org.geomesa.QuickStart -instanceId somecloud -zookeepers
>>     "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>>     -tableName sometable
>>
>>     I'll raise this question with the geomesa folks, but you're right that
>>     I can ignore it for now...
>>
>>     Thanks!
>>     -Simon
>>
>>
>>     On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
>>     <ma...@gmail.com>> wrote:
>>      > Are you running it via `mvn exec:java` by chance or netbeans?
>>      >
>>      >
>>
>> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>>      >
>>      > If that's just a background thread writing in Stats, it might just
>>      > be a factor of how you're invoking the program and you can ignore
>>      > it. I don't know enough about the inner-workings of GeoMesa to say
>>      > one way or the other.
>>      >
>>      >
>>      > Xu (Simon) Chen wrote:
>>      >>
>>      >> Josh,
>>      >>
>>      >> Everything works well, except for one thing :-)
>>      >>
>>      >> I am running the geomesa-quickstart program, which ingests some
>>      >> data and then performs a simple query:
>>      >> https://github.com/geomesa/geomesa-quickstart
>>      >>
>>      >> For some reason, the following error is emitted consistently at the
>>      >> end of the execution, after outputting the correct result:
>>      >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
>>      >> java.lang.InterruptedException
>>      >>          at java.lang.Object.wait(Native Method)
>>      >>          at java.lang.Object.wait(Object.java:503)
>>      >>          at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>>      >>          at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>>      >>          at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>>      >>          at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>>      >>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>>      >>          at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>>      >>          at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>>      >>          at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>>      >>          at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>>      >>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>>      >>          at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>>      >>          at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>>      >>          at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>>      >>          at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>>      >>          at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>>      >>          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>      >>          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>>      >>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>      >>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>      >>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>      >>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>      >>          at java.lang.Thread.run(Thread.java:745)
>>      >>
>>      >> This is more annoying than a real problem. I am new to both
>>      >> accumulo and geomesa, but I am curious what the problem might be.
>>      >>
>>      >> Thanks!
>>      >> -Simon
>>      >>
>>      >>
>>      >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
>>     <ma...@gmail.com>>  wrote:
>>      >>>
>>      >>> Great! Glad to hear it. Please let us know how it works out!
>>      >>>
>>      >>>
>>      >>> Xu (Simon) Chen wrote:
>>      >>>>
>>      >>>> Josh,
>>      >>>>
>>      >>>> You're right again.. Thanks!
>>      >>>>
>>      >>>> My Ansible play actually pushed client.conf to all the server
>>      >>>> config directories, but didn't do anything for the clients, and
>>      >>>> that was my problem. Now kerberos is working great for me.
>>      >>>>
>>      >>>> Thanks again!
>>      >>>> -Simon
>>      >>>>
>>      >>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>
>>      >>>> wrote:
>>      >>>>>
>>      >>>>> Simon,
>>      >>>>>
>>      >>>>> Did you create a client configuration file (~/.accumulo/config or
>>      >>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure Accumulo
>>      >>>>> clients to actually use SASL when you're trying to use Kerberos
>>      >>>>> authentication. Your server is expecting that, but I would venture
>>      >>>>> a guess that your client isn't.
>>      >>>>>
>>      >>>>> See
>>      >>>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
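The client configuration Josh describes boils down to a few SASL properties in client.conf. A minimal sketch, assuming the servers run as accumulo/_HOST@REALM (property names per the 1.7 manual linked above; adjust the primary to match your own server principal):

```properties
# Enable SASL (Kerberos) for client RPCs; must match the servers' setting.
instance.rpc.sasl.enabled=true
# Quality of protection; must agree with what the servers are configured for.
rpc.sasl.qop=auth
# First component of the Accumulo server principal, e.g. "accumulo"
# for accumulo/_HOST@EXAMPLE.COM.
kerberos.server.primary=accumulo
```

Without the first property, a client will attempt an unauthenticated connection against a SASL-only server and hang or time out, which matches the symptoms reported later in this thread.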
>>      >>>>>
>>      >>>>>
>>      >>>>> Xu (Simon) Chen wrote:
>>      >>>>>>
>>      >>>>>> Josh,
>>      >>>>>>
>>      >>>>>> Thanks. It makes sense...
>>      >>>>>>
>>      >>>>>> I used a KerberosToken, but my program got stuck when running
>>      >>>>>> the following:
>>      >>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
>>      >>>>>>
>>      >>>>>> It looks like my client is stuck here:
>>      >>>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>      >>>>>> failing in the receive part of
>>      >>>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>      >>>>>>
>>      >>>>>> On my tservers, I see the following:
>>      >>>>>>
>>      >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>>      >>>>>> occurred during processing of message.
>>      >>>>>> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>>      >>>>>>            at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>      >>>>>>            at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>      >>>>>>            at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>      >>>>>>            at java.security.AccessController.doPrivileged(Native Method)
>>      >>>>>>            at javax.security.auth.Subject.doAs(Subject.java:356)
>>      >>>>>>            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>      >>>>>>            at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>      >>>>>>            at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>      >>>>>>            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>      >>>>>>            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>      >>>>>>            at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>      >>>>>>            at java.lang.Thread.run(Thread.java:745)
>>      >>>>>> Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>>      >>>>>>            at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>      >>>>>>            at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>      >>>>>>            at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>      >>>>>>            at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>      >>>>>>            at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>      >>>>>>            at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>      >>>>>>            at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>      >>>>>>            ... 11 more
>>      >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>      >>>>>>            at java.net.SocketInputStream.socketRead0(Native Method)
>>      >>>>>>            at java.net.SocketInputStream.read(SocketInputStream.java:152)
>>      >>>>>>            at java.net.SocketInputStream.read(SocketInputStream.java:122)
>>      >>>>>>            at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>      >>>>>>            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>      >>>>>>            at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>      >>>>>>            ... 17 more
>>      >>>>>>
>>      >>>>>> Any ideas why?
>>      >>>>>>
>>      >>>>>> Thanks.
>>      >>>>>> -Simon
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>      >>>>>>
>>      >>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
>>     Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>
>>      >>>>>> wrote:
>>      >>>>>>>
>>      >>>>>>> Make sure you read the JavaDoc on DelegationToken:
>>      >>>>>>>
>>      >>>>>>> <snip>
>>      >>>>>>> Obtain a delegation token by calling {@link
>>      >>>>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>      >>>>>>> </snip>
>>      >>>>>>>
>>      >>>>>>> You cannot create a usable DelegationToken as the client itself.
>>      >>>>>>>
>>      >>>>>>> Anyway, DelegationTokens are only relevant in cases where the
>>      >>>>>>> client's Kerberos credentials are unavailable. The most common
>>      >>>>>>> case is running MapReduce jobs. If you are just interacting with
>>      >>>>>>> Accumulo through the Java API, the KerberosToken is all you need
>>      >>>>>>> to use.
>>      >>>>>>>
>>      >>>>>>> The user manual likely just needs to be updated. I believe the
>>      >>>>>>> DelegationTokenConfig was added after I wrote the initial
>>      >>>>>>> documentation.
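A minimal sketch of the flow described above, assuming a reachable Kerberos-enabled Accumulo 1.7 instance and an existing ticket or keytab; the instance name, ZooKeeper quorum, and principal are placeholders, and the delegation token step is only needed when handing credentials off to something like a MapReduce job:

```java
// Sketch only: requires a live Kerberos-enabled Accumulo 1.7 cluster.
// Class and method names are from the Accumulo 1.7 client API;
// "myuser@EXAMPLE.COM" and the connection strings are placeholders.
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;

public class KerberosConnectSketch {
  public static void main(String[] args) throws Exception {
    String principal = "myuser@EXAMPLE.COM"; // placeholder

    ZooKeeperInstance instance =
        new ZooKeeperInstance("somecloud", "zoo1:2181,zoo2:2181,zoo3:2181");

    // Authenticate with the current Kerberos credentials (ticket cache).
    KerberosToken krbToken = new KerberosToken(principal);
    Connector conn = instance.getConnector(principal, krbToken);

    // Only for cases where the Kerberos credentials won't travel with the
    // code (e.g. MapReduce tasks): ask the server for a delegation token.
    AuthenticationToken delegationToken =
        conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
  }
}
```

For a plain Java client, stopping after `getConnector` is enough; the delegation token only exists so that distributed tasks without keytabs can still authenticate.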
>>      >>>>>>>
>>      >>>>>>>
>>      >>>>>>> Xu (Simon) Chen wrote:
>>      >>>>>>>>
>>      >>>>>>>> Hi folks,
>>      >>>>>>>>
>>      >>>>>>>> The latest kerberos doc seems to indicate that
>>      >>>>>>>> getDelegationToken can be called without any parameters:
>>      >>>>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>      >>>>>>>>
>>      >>>>>>>> Yet the source code indicates a DelegationTokenConfig object
>>      >>>>>>>> must be passed in:
>>      >>>>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>      >>>>>>>>
>>      >>>>>>>> Any ideas on how I should construct the DelegationTokenConfig
>>      >>>>>>>> object?
>>      >>>>>>>>
>>      >>>>>>>> For context, I've been trying to get geomesa to work on my
>>      >>>>>>>> accumulo 1.7 with kerberos turned on. Right now, the code is
>>      >>>>>>>> somewhat tied to password auth:
>>      >>>>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>      >>>>>>>>
>>      >>>>>>>> My thought is that I should get a KerberosToken first, and then
>>      >>>>>>>> try to generate a DelegationToken, which is passed back for
>>      >>>>>>>> later interactions between geomesa and accumulo.
>>      >>>>>>>>
>>      >>>>>>>> Thanks.
>>      >>>>>>>> -Simon
>>
>>
>>


-- 
Sean

Re: Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Posted by Christopher <ct...@apache.org>.
The lack of activity in this thread suggests a general disinterest in the
subject from anyone willing to do the work to continue to
preserve/fix/stabilize/maintain mock. As such, I think we should proceed to
deprecate it for 1.8, as that seems to be the general desire of the people
who have expressed a willingness to do the work. I've created ACCUMULO-3920
to get it done.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii



Re: Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Posted by Josh Elser <jo...@gmail.com>.
Keith Turner wrote:
>>> >  >  +1
>>> >  >
>>> >  >  It should be deprecated ASAP in order to clearly communicate its status as
>>> >  >  unmaintained. If it's deprecated in 1.8, it's eligible to be dropped in
>> >  2.0
>>> >  >  but does not have to be dropped. When it's dropped can be a separate
>>> >  >  decision.
>>> >  >
>> >
>> >  Doesn't semver require a major version of deprecation? Still def deprecate
>> >  immediately, remove eventually.
>> >
>
> No.  Christopher had proposed that more strict requirement, but we
> eventually just went w/ unmodified semver.  The following is from the
> semver page.
>
>
>    Before you completely remove the functionality in a new major release
> there should be at least one minor release that contains the deprecation so
> that users can smoothly transition to the new API.
>

Ok, cool. Thanks for the clarification. Was on my mobile and didn't have 
the official docs handy to double check :)

Re: Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Posted by Keith Turner <ke...@deenlo.com>.
On Tue, Jun 9, 2015 at 5:29 PM, Josh Elser <jo...@gmail.com> wrote:

> On Jun 9, 2015 2:46 PM, "Keith Turner" <ke...@deenlo.com> wrote:
> >
> > On Mon, Jun 8, 2015 at 11:42 PM, Ryan Leary <ry...@bbn.com> wrote:
> >
> > > Just an anecdote from someone who has been bitten by mock more than a
> > > couple times. I would try to deprecate it in 1.8 and remove in 2.0 if
> > > at all possible. People really shouldn't write tests against it.
> > >
> >
> > +1
> >
> > It should be deprecated ASAP in order to clearly communicate its status
> > as unmaintained. If it's deprecated in 1.8, it's eligible to be dropped
> > in 2.0 but does not have to be dropped. When it's dropped can be a
> > separate decision.
> >
>
> Doesn't semver require a major version of deprecation? Still def deprecate
> immediately, remove eventually.
>

No.  Christopher had proposed that more strict requirement, but we
eventually just went w/ unmodified semver.  The following is from the
semver page.


  Before you completely remove the functionality in a new major release
there should be at least one minor release that contains the deprecation so
that users can smoothly transition to the new API.



>
> >
> > >
> > > > On Jun 8, 2015, at 10:10 PM, Sean Busbey <bu...@cloudera.com>
> wrote:
> > > >
> > > > Josh's comment below made me realize we still haven't formally
> deprecated
> > > > MockAccumulo.
> > > >
> > > > What do folks think about doing it soon-ish with an aim of removing
> it in
> > > > Accumulo 3.0? (that's version three, so that it can remain deprecated
> for
> > > > all of version 2).
> > > >
> > > >
> > > > -Sean
> > > >
> > > >> On Sun, Jun 7, 2015 at 12:37 PM, Josh Elser <jo...@gmail.com>
> > > wrote:
> > > >>
> > > >> MiniAccumulo, yes. MockAccumulo, no. In general, we've near
> completely
> > > >> moved away from MockAccumulo. I wouldn't be surprised if it gets
> > > deprecated
> > > >> and removed soon.
> > > >>
> > > >>
> > > >>
> > >
>
> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
> > > >>
> > > >> Apache Directory provides a MiniKdc that can be used easily w/
> > > >> MiniAccumulo. Many of the integration tests have already been
> altered to
> > > >> support running w/ or w/o kerberos.
> > > >>
> > > >> James Hughes wrote:
> > > >>
> > > >>> Hi all,
> > > >>>
> > > >>> For GeoMesa, stats writing is quite secondary and optional, so we
> can
> > > >>> sort that out as a follow-on to seeing GeoMesa work with Accumulo
> 1.7.
> > > >>>
> > > >>> I haven't had a chance to read in details yet, so forgive me if
> this is
> > > >>> covered in the docs.  Does either Mock or MiniAccumulo provide
> enough
> > > >>> hooks to test out Kerberos integration effectively?  I suppose I'm
> > > >>> really asking what kind of testing environment a project like
> GeoMesa
> > > >>> would need to use to test out Accumulo 1.7.
> > > >>>
> > > >>> Even though MockAccumulo has a number of limitations, it is rather
> > > >>> effective in unit tests which can be part of a quick  build.
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> Jim
> > > >>>
> > > >>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <
> xchenum@gmail.com
> > > >>> <ma...@gmail.com>> wrote:
> > > >>>
> > > >>>    Nope, I am running the example as what the readme file
> suggested:
> > > >>>
> > > >>>    java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
> > > >>>    org.geomesa.QuickStart -instanceId somecloud -zookeepers
> > > >>>    "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
> > > >>>    -tableName sometable
> > > >>>
> > > >>>    I'll raise this question with the geomesa folks, but you're
> right
> > > that
> > > >>>    I can ignore it for now...
> > > >>>
> > > >>>    Thanks!
> > > >>>    -Simon
> > > >>>
> > > >>>
> > > >>>    On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <
> josh.elser@gmail.com
> > > >>>    <ma...@gmail.com>> wrote:
> > > >>>> Are you running it via `mvn exec:java` by chance or netbeans?
> > > >>>
> > > >>>
> > >
>
> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
> > > >>>>
> > > >>>> If that's just a background thread writing in Stats, it might
> > > >>>    just be a
> > > >>>> factor of how you're invoking the program and you can ignore it.
> > > >>>    I don't
> > > >>>> know enough about the inner-workings of GeoMesa to say one way or
> > > >>>    the other.
> > > >>>>
> > > >>>>
> > > >>>> Xu (Simon) Chen wrote:
> > > >>>>>
> > > >>>>> Josh,
> > > >>>>>
> > > >>>>> Everything works well, except for one thing :-)
> > > >>>>>
> > > >>>>> I am running geomesa-quickstart program that ingest some data
> > > >>>    and then
> > > >>>>> perform a simple query:
> > > >>>>> https://github.com/geomesa/geomesa-quickstart
> > > >>>>>
> > > >>>>> For some reason, the following error is emitted consistently at
> > > >>> the
> > > >>>>> end of the execution, after outputting the correct result:
> > > >>>>> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will
> > > >>>    retry
> > > >>>>> java.lang.InterruptedException
> > > >>>>>         at java.lang.Object.wait(Native Method)
> > > >>>>>         at java.lang.Object.wait(Object.java:503)
> > > >>>>>         at
> > > >>> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
> > > >>>>>         at
> > > >>>    org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
> > > >>>>>         at
> > > >>>
> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
> > > >>>>>         at
> > > >>>
> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
> > > >>>>>         at
> > > >>>>>
> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
> > > >>>>>         at
> > > >>>>>
> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
> org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
> > > >>>>>         at
> > > >>>    scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
> org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
> org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
> > > >>>>>         at
> > > >>>
> > > >>>
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > >>>>>         at
> > > >>>>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >>>>>         at
> > > >>>
> > > >>>
> > >
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >>>>>         at java.lang.Thread.run(Thread.java:745)
> > > >>>>>
> > > >>>>> This is more annoying than a real problem. I am new to both
> > > >>> accumulo
> > > >>>>> and geomesa, but I am curious what the problem might be.
> > > >>>>>
> > > >>>>> Thanks!
> > > >>>>> -Simon
> > > >>>>>
> > > >>>>>
> > > >>>>> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
> > > >>>    <ma...@gmail.com>>  wrote:
> > > >>>>>>
> > > >>>>>> Great! Glad to hear it. Please let us know how it works out!
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> Xu (Simon) Chen wrote:
> > > >>>>>>>
> > > >>>>>>> Josh,
> > > >>>>>>>
> > > >>>>>>> You're right again.. Thanks!
> > > >>>>>>>
> > > >>>>>>> My ansible play actually pushed client.conf to all the server
> > > >>>    config
> > > >>>>>>> directories, but didn't do anything for the clients, and that's
> > > >>> my
> > > >>>>>>> problem. Now kerberos is working great for me.
> > > >>>>>>>
> > > >>>>>>> Thanks again!
> > > >>>>>>> -Simon
> > > >>>>>>>
> > > >>>>>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
> > > >>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
> > > >>>
> > > >>>>>>> wrote:
> > > >>>>>>>>
> > > >>>>>>>> Simon,
> > > >>>>>>>>
> > > >>>>>>>> Did you create a client configuration file (~/.accumulo/config
> > > >>> or
> > > >>>>>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
> > > >>>    Accumulo clients
> > > >>>>>>>> to
> > > >>>>>>>> actually use SASL when you're trying to use Kerberos
> > > >>>    authentication.
> > > >>>>>>>> Your
> > > >>>>>>>> server is expecting that, but I would venture a guess that
> > > >>>    your client
> > > >>>>>>>> isn't.
> > > >>>>>>>>
> > > >>>>>>>> See
> > > >>>
> > > >>>
> > >
> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
> > > >>>>>>>>
> > > >>>>>>>>
> > > >>>>>>>> Xu (Simon) Chen wrote:
> > > >>>>>>>>>
> > > >>>>>>>>> Josh,
> > > >>>>>>>>>
> > > >>>>>>>>> Thanks. It makes sense...
> > > >>>>>>>>>
> > > >>>>>>>>> I used a KerberosToken, but my program got stuck when
> > > >>>    running the
> > > >>>>>>>>> following:
> > > >>>>>>>>> new ZooKeeperInstance(instance,
> zookeepers).getConnector(user,
> > > >>>>>>>>> krbToken)
> > > >>>>>>>>>
> > > >>>>>>>>> It looks like my client is stuck here:
> > > >>>
> > > >>>
> > >
>
> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
> > > >>>>>>>>> failing in the receive part of
> > > >>>
> > > >>>
> > >
>
> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
> > > >>>>>>>>>
> > > >>>>>>>>> On my tservers, I see the following:
> > > >>>>>>>>>
> > > >>>>>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR:
> > > >>> Error
> > > >>>>>>>>> occurred during processing of message.
> > > >>>>>>>>> java.lang.RuntimeException:
> > > >>>>>>>>> org.apache.thrift.transport.TTransportException:
> > > >>>>>>>>> java.net.SocketTimeoutException: Read timed out
> > > >>>>>>>>>           at
> > > >>>
> > > >>>
> > >
>
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> > > >>>>>>>>>           at
> > > >>>
> > > >>>
> > >
>
> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
> > > >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
> > > >>>>>>>>>           at java.security.AccessController.doPrivileged(Native Method)
> > > >>>>>>>>>           at javax.security.auth.Subject.doAs(Subject.java:356)
> > > >>>>>>>>>           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
> > > >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
> > > >>>>>>>>>           at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
> > > >>>>>>>>>           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >>>>>>>>>           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >>>>>>>>>           at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
> > > >>>>>>>>>           at java.lang.Thread.run(Thread.java:745)
> > > >>>>>>>>> Caused by: org.apache.thrift.transport.TTransportException:
> > > >>>>>>>>> java.net.SocketTimeoutException: Read timed out
> > > >>>>>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
> > > >>>>>>>>>           at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> > > >>>>>>>>>           at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
> > > >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
> > > >>>>>>>>>           at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
> > > >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> > > >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> > > >>>>>>>>>           ... 11 more
> > > >>>>>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
> > > >>>>>>>>>           at java.net.SocketInputStream.socketRead0(Native Method)
> > > >>>>>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:152)
> > > >>>>>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:122)
> > > >>>>>>>>>           at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> > > >>>>>>>>>           at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> > > >>>>>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> > > >>>>>>>>>           ... 17 more
> > > >>>>>>>>>
> > > >>>>>>>>> Any ideas why?
> > > >>>>>>>>>
> > > >>>>>>>>> Thanks.
> > > >>>>>>>>> -Simon
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
> > > >>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
> > > >>>
> > > >>>>>>>>> wrote:
> > > >>>>>>>>>>
> > > >>>>>>>>>> Make sure you read the JavaDoc on DelegationToken:
> > > >>>>>>>>>>
> > > >>>>>>>>>> <snip>
> > > >>>>>>>>>> Obtain a delegation token by calling {@link
> > > >>>
> > > >>>
> > >
>
> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
> > > >>>>>>>>>> </snip>
> > > >>>>>>>>>>
> > > >>>>>>>>>> You cannot create a usable DelegationToken as the client
> > > >>>    itself.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Anyways, DelegationTokens are only relevant in cases where
> > > >>>    the client
> > > >>>>>>>>>> Kerberos credentials are unavailable. The most common case
> > > >>>    is running
> > > >>>>>>>>>> MapReduce jobs. If you are just interacting with Accumulo
> > > >>>    through the
> > > >>>>>>>>>> Java
> > > >>>>>>>>>> API, the KerberosToken is all you need to use.
> > > >>>>>>>>>>
> > > >>>>>>>>>> The user-manual likely just needs to be updated. I believe
> > > >>> the
> > > >>>>>>>>>> DelegationTokenConfig was added after I wrote the initial
> > > >>>>>>>>>> documentation.
> > > >>>>>>>>>>
> > > >>>>>>>>>>
> > > >>>>>>>>>> Xu (Simon) Chen wrote:
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> Hi folks,
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> The latest kerberos doc seems to indicate that
> > > >>>    getDelegationToken
> > > >>>>>>>>>>> can
> > > >>>>>>>>>>> be
> > > >>>>>>>>>>> called without any parameters:
> > > >>>
> > > >>>
> > >
>
> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> Yet the source code indicates a DelegationTokenConfig
> > > >>>    object must be
> > > >>>>>>>>>>> passed in:
> > > >>>
> > > >>>
> > >
>
> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> Any ideas on how I should construct the
> > > >>> DelegationTokenConfig
> > > >>>>>>>>>>> object?
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> For context, I've been trying to get geomesa to work on my
> > > >>>    accumulo
> > > >>>>>>>>>>> 1.7
> > > >>>>>>>>>>> with kerberos turned on. Right now, the code is somewhat
> > > >>>    tied to
> > > >>>>>>>>>>> password auth:
> > > >>>
> > > >>>
> > >
>
> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
> > > >>>>>>>>>>> My thought is that I should get a KerberosToken first, and
> > > >>>    then try
> > > >>>>>>>>>>> generate a DelegationToken, which is passed back for later
> > > >>>>>>>>>>> interactions
> > > >>>>>>>>>>> between geomesa and accumulo.
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> Thanks.
> > > >>>>>>>>>>> -Simon
> > > >
> > > >
> > > > --
> > > > Sean
> > >
>

Re: Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Posted by Josh Elser <jo...@gmail.com>.
On Jun 9, 2015 2:46 PM, "Keith Turner" <ke...@deenlo.com> wrote:
>
> On Mon, Jun 8, 2015 at 11:42 PM, Ryan Leary <ry...@bbn.com> wrote:
>
> > Just an anecdote from someone who has been bitten by mock more than a
> > couple times. I would try to deprecate it in 1.8 and remove in 2.0 if at
> > all possible. People really shouldn't write tests against it.
> >
>
> +1
>
> It should be deprecated ASAP in order to clearly communicate its status as
> unmaintained. If it's deprecated in 1.8, it's eligible to be dropped in 2.0
> but does not have to be dropped. When it's dropped can be a separate
> decision.
>

Doesn't semver require deprecation for a full major version? Still, definitely
deprecate immediately and remove eventually.

>
> >
> > > On Jun 8, 2015, at 10:10 PM, Sean Busbey <bu...@cloudera.com> wrote:
> > >
> > > Josh's comment below made me realize we still haven't formally
deprecated
> > > MockAccumulo.
> > >
> > > What do folks think about doing it soon-ish with an aim of removing
it in
> > > Accumulo 3.0? (that's version three, so that it can remain deprecated
for
> > > all of version 2).
> > >
> > >
> > > -Sean
> > >
> > >> On Sun, Jun 7, 2015 at 12:37 PM, Josh Elser <jo...@gmail.com>
> > wrote:
> > >>
> > >> MiniAccumulo, yes. MockAccumulo, no. In general, we've near
completely
> > >> moved away from MockAccumulo. I wouldn't be surprised if it gets
> > deprecated
> > >> and removed soon.
> > >>
> > >>
> > >>
> >
https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
> > >>
> > >> Apache Directory provides a MiniKdc that can be used easily w/
> > >> MiniAccumulo. Many of the integration tests have already been
altered to
> > >> support running w/ or w/o kerberos.
> > >>
> > >> James Hughes wrote:
> > >>
> > >>> Hi all,
> > >>>
> > >>> For GeoMesa, stats writing is quite secondary and optional, so we
can
> > >>> sort that out as a follow-on to seeing GeoMesa work with Accumulo
1.7.
> > >>>
> > >>> I haven't had a chance to read in detail yet, so forgive me if
this is
> > >>> covered in the docs.  Does either Mock or MiniAccumulo provide
enough
> > >>> hooks to test out Kerberos integration effectively?  I suppose I'm
> > >>> really asking what kind of testing environment a project like
GeoMesa
> > >>> would need to use to test out Accumulo 1.7.
> > >>>
> > >>> Even though MockAccumulo has a number of limitations, it is rather
> > >>> effective in unit tests which can be part of a quick build.
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Jim
> > >>>
> > >>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
> > >>> <ma...@gmail.com>> wrote:
> > >>>
> > >>>    Nope, I am running the example as the readme file suggested:
> > >>>
> > >>>    java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
> > >>>    org.geomesa.QuickStart -instanceId somecloud -zookeepers
> > >>>    "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
> > >>>    -tableName sometable
> > >>>
> > >>>    I'll raise this question with the geomesa folks, but you're right
> > that
> > >>>    I can ignore it for now...
> > >>>
> > >>>    Thanks!
> > >>>    -Simon
> > >>>
> > >>>
> > >>>    On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
> > >>>    <ma...@gmail.com>> wrote:
> > >>>> Are you running it via `mvn exec:java` by chance or netbeans?
> > >>>
> > >>>
> >
http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
> > >>>>
> > >>>> If that's just a background thread writing in Stats, it might
> > >>>    just be a
> > >>>> factor of how you're invoking the program and you can ignore it.
> > >>>    I don't
> > >>>> know enough about the inner-workings of GeoMesa to say one way or
> > >>>    the other.
> > >>>>
> > >>>>
> > >>>> Xu (Simon) Chen wrote:
> > >>>>>
> > >>>>> Josh,
> > >>>>>
> > >>>>> Everything works well, except for one thing :-)
> > >>>>>
> > >>>>> I am running the geomesa-quickstart program that ingests some data
> > >>>    and then
> > >>>>> perform a simple query:
> > >>>>> https://github.com/geomesa/geomesa-quickstart
> > >>>>>
> > >>>>> For some reason, the following error is emitted consistently at
> > >>> the
> > >>>>> end of the execution, after outputting the correct result:
> > >>>>> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
> > >>>>> java.lang.InterruptedException
> > >>>>>         at java.lang.Object.wait(Native Method)
> > >>>>>         at java.lang.Object.wait(Object.java:503)
> > >>>>>         at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
> > >>>>>         at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
> > >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
> > >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
> > >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
> > >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
> > >>>>>         at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
> > >>>>>         at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
> > >>>>>         at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
> > >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
> > >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
> > >>>>>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> > >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
> > >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
> > >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
> > >>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > >>>>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> > >>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> > >>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> > >>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >>>>>         at java.lang.Thread.run(Thread.java:745)
> > >>>>>
> > >>>>> This is more annoying than a real problem. I am new to both
> > >>> accumulo
> > >>>>> and geomesa, but I am curious what the problem might be.
> > >>>>>
> > >>>>> Thanks!
> > >>>>> -Simon
> > >>>>>
> > >>>>>
> > >>>>> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
> > >>>    <ma...@gmail.com>>  wrote:
> > >>>>>>
> > >>>>>> Great! Glad to hear it. Please let us know how it works out!
> > >>>>>>
> > >>>>>>
> > >>>>>> Xu (Simon) Chen wrote:
> > >>>>>>>
> > >>>>>>> Josh,
> > >>>>>>>
> > >>>>>>> You're right again.. Thanks!
> > >>>>>>>
> > >>>>>>> My ansible play actually pushed client.conf to all the server
> > >>>    config
> > >>>>>>> directories, but didn't do anything for the clients, and that's
> > >>> my
> > >>>>>>> problem. Now kerberos is working great for me.
> > >>>>>>>
> > >>>>>>> Thanks again!
> > >>>>>>> -Simon
> > >>>>>>>
> > >>>>>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
> > >>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
> > >>>
> > >>>>>>> wrote:
> > >>>>>>>>
> > >>>>>>>> Simon,
> > >>>>>>>>
> > >>>>>>>> Did you create a client configuration file (~/.accumulo/config
> > >>> or
> > >>>>>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
> > >>>    Accumulo clients
> > >>>>>>>> to
> > >>>>>>>> actually use SASL when you're trying to use Kerberos
> > >>>    authentication.
> > >>>>>>>> Your
> > >>>>>>>> server is expecting that, but I would venture a guess that
> > >>>    your client
> > >>>>>>>> isn't.
> > >>>>>>>>
> > >>>>>>>> See
> > >>>
> > >>>
> >
http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
> > >>>>>>>>
> > >>>>>>>>
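The client configuration Josh describes amounts to a couple of properties. A minimal sketch, assuming the property names from the 1.7 Kerberos documentation and a server principal primary of `accumulo` (adjust to match your servers' principals):

```
# ~/.accumulo/config or $ACCUMULO_CONF_DIR/client.conf
instance.rpc.sasl.enabled=true
kerberos.server.primary=accumulo
```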
> > >>>>>>>> Xu (Simon) Chen wrote:
> > >>>>>>>>>
> > >>>>>>>>> Josh,
> > >>>>>>>>>
> > >>>>>>>>> Thanks. It makes sense...
> > >>>>>>>>>
> > >>>>>>>>> I used a KerberosToken, but my program got stuck when
> > >>>    running the
> > >>>>>>>>> following:
> > >>>>>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user,
> > >>>>>>>>> krbToken)
> > >>>>>>>>>
> > >>>>>>>>> It looks like my client is stuck here:
> > >>>
> > >>>
> >
https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
> > >>>>>>>>> failing in the receive part of
> > >>>
> > >>>
> >
org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
> > >>>>>>>>>
> > >>>>>>>>> On my tservers, I see the following:
> > >>>>>>>>>
> > >>>>>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error occurred during processing of message.
> > >>>>>>>>> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
> > >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> > >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
> > >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
> > >>>>>>>>>           at java.security.AccessController.doPrivileged(Native Method)
> > >>>>>>>>>           at javax.security.auth.Subject.doAs(Subject.java:356)
> > >>>>>>>>>           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
> > >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
> > >>>>>>>>>           at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
> > >>>>>>>>>           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >>>>>>>>>           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >>>>>>>>>           at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
> > >>>>>>>>>           at java.lang.Thread.run(Thread.java:745)
> > >>>>>>>>> Caused by: org.apache.thrift.transport.TTransportException:
> > >>>>>>>>> java.net.SocketTimeoutException: Read timed out
> > >>>>>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
> > >>>>>>>>>           at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> > >>>>>>>>>           at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
> > >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
> > >>>>>>>>>           at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
> > >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> > >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> > >>>>>>>>>           ... 11 more
> > >>>>>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
> > >>>>>>>>>           at java.net.SocketInputStream.socketRead0(Native Method)
> > >>>>>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:152)
> > >>>>>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:122)
> > >>>>>>>>>           at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> > >>>>>>>>>           at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> > >>>>>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> > >>>>>>>>>           ... 17 more
> > >>>>>>>>>
> > >>>>>>>>> Any ideas why?
> > >>>>>>>>>
> > >>>>>>>>> Thanks.
> > >>>>>>>>> -Simon
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
> > >>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
> > >>>
> > >>>>>>>>> wrote:
> > >>>>>>>>>>
> > >>>>>>>>>> Make sure you read the JavaDoc on DelegationToken:
> > >>>>>>>>>>
> > >>>>>>>>>> <snip>
> > >>>>>>>>>> Obtain a delegation token by calling {@link
> > >>>
> > >>>
> >
SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
> > >>>>>>>>>> </snip>
> > >>>>>>>>>>
> > >>>>>>>>>> You cannot create a usable DelegationToken as the client
> > >>>    itself.
> > >>>>>>>>>>
> > >>>>>>>>>> Anyways, DelegationTokens are only relevant in cases where
> > >>>    the client
> > >>>>>>>>>> Kerberos credentials are unavailable. The most common case
> > >>>    is running
> > >>>>>>>>>> MapReduce jobs. If you are just interacting with Accumulo
> > >>>    through the
> > >>>>>>>>>> Java
> > >>>>>>>>>> API, the KerberosToken is all you need to use.
> > >>>>>>>>>>
> > >>>>>>>>>> The user-manual likely just needs to be updated. I believe
> > >>> the
> > >>>>>>>>>> DelegationTokenConfig was added after I wrote the initial
> > >>>>>>>>>> documentation.
> > >>>>>>>>>>
> > >>>>>>>>>>
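Putting Josh's two points together (authenticate with a KerberosToken first, then ask the server to issue a delegation token), a hedged sketch against the 1.7 API; the instance name, ZooKeeper quorum, and principal are hypothetical, and this only runs against a live Kerberos-enabled cluster:

```java
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;

public class DelegationTokenSketch {
  public static void main(String[] args) throws Exception {
    // Authenticate with the client's own Kerberos credentials
    Connector conn = new ZooKeeperInstance("somecloud", "zoo1:2181")
        .getConnector("user@EXAMPLE.COM", new KerberosToken());

    // Ask Accumulo to issue a delegation token; an empty config takes defaults
    AuthenticationToken delegationToken =
        conn.securityOperations().getDelegationToken(new DelegationTokenConfig());

    // Hand delegationToken to code that lacks Kerberos credentials,
    // e.g. MapReduce tasks via the AccumuloInputFormat/OutputFormat
  }
}
```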
> > >>>>>>>>>> Xu (Simon) Chen wrote:
> > >>>>>>>>>>>
> > >>>>>>>>>>> Hi folks,
> > >>>>>>>>>>>
> > >>>>>>>>>>> The latest kerberos doc seems to indicate that
> > >>>    getDelegationToken
> > >>>>>>>>>>> can
> > >>>>>>>>>>> be
> > >>>>>>>>>>> called without any parameters:
> > >>>
> > >>>
> >
https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
> > >>>>>>>>>>>
> > >>>>>>>>>>> Yet the source code indicates a DelegationTokenConfig
> > >>>    object must be
> > >>>>>>>>>>> passed in:
> > >>>
> > >>>
> >
https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
> > >>>>>>>>>>>
> > >>>>>>>>>>> Any ideas on how I should construct the
> > >>> DelegationTokenConfig
> > >>>>>>>>>>> object?
> > >>>>>>>>>>>
> > >>>>>>>>>>> For context, I've been trying to get geomesa to work on my
> > >>>    accumulo
> > >>>>>>>>>>> 1.7
> > >>>>>>>>>>> with kerberos turned on. Right now, the code is somewhat
> > >>>    tied to
> > >>>>>>>>>>> password auth:
> > >>>
> > >>>
> >
https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
> > >>>>>>>>>>> My thought is that I should get a KerberosToken first, and
> > >>>    then try
> > >>>>>>>>>>> generate a DelegationToken, which is passed back for later
> > >>>>>>>>>>> interactions
> > >>>>>>>>>>> between geomesa and accumulo.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Thanks.
> > >>>>>>>>>>> -Simon
> > >
> > >
> > > --
> > > Sean
> >

Re: Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Posted by Keith Turner <ke...@deenlo.com>.
On Mon, Jun 8, 2015 at 11:42 PM, Ryan Leary <ry...@bbn.com> wrote:

> Just an anecdote from someone who has been bitten by mock more than a
> couple times. I would try to deprecate it in 1.8 and remove in 2.0 if at
> all possible. People really shouldn't write tests against it.
>

+1

It should be deprecated ASAP in order to clearly communicate its status as
unmaintained. If it's deprecated in 1.8, it's eligible to be dropped in 2.0
but does not have to be dropped. When it's dropped can be a separate
decision.



>
> > On Jun 8, 2015, at 10:10 PM, Sean Busbey <bu...@cloudera.com> wrote:
> >
> > Josh's comment below made me realize we still haven't formally deprecated
> > MockAccumulo.
> >
> > What do folks think about doing it soon-ish with an aim of removing it in
> > Accumulo 3.0? (that's version three, so that it can remain deprecated for
> > all of version 2).
> >
> >
> > -Sean
> >
> >> On Sun, Jun 7, 2015 at 12:37 PM, Josh Elser <jo...@gmail.com>
> wrote:
> >>
> >> MiniAccumulo, yes. MockAccumulo, no. In general, we've near completely
> >> moved away from MockAccumulo. I wouldn't be surprised if it gets
> deprecated
> >> and removed soon.
> >>
> >>
> >>
> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
> >>
> >> Apache Directory provides a MiniKdc that can be used easily w/
> >> MiniAccumulo. Many of the integration tests have already been altered to
> >> support running w/ or w/o kerberos.
> >>
> >> James Hughes wrote:
> >>
> >>> Hi all,
> >>>
> >>> For GeoMesa, stats writing is quite secondary and optional, so we can
> >>> sort that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.
> >>>
> >>> I haven't had a chance to read in detail yet, so forgive me if this is
> >>> covered in the docs.  Does either Mock or MiniAccumulo provide enough
> >>> hooks to test out Kerberos integration effectively?  I suppose I'm
> >>> really asking what kind of testing environment a project like GeoMesa
> >>> would need to use to test out Accumulo 1.7.
> >>>
> >>> Even though MockAccumulo has a number of limitations, it is rather
> >>> effective in unit tests which can be part of a quick build.
> >>>
> >>> Thanks,
> >>>
> >>> Jim
> >>>
> >>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
> >>> <ma...@gmail.com>> wrote:
> >>>
> >>>    Nope, I am running the example as the readme file suggested:
> >>>
> >>>    java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
> >>>    org.geomesa.QuickStart -instanceId somecloud -zookeepers
> >>>    "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
> >>>    -tableName sometable
> >>>
> >>>    I'll raise this question with the geomesa folks, but you're right
> that
> >>>    I can ignore it for now...
> >>>
> >>>    Thanks!
> >>>    -Simon
> >>>
> >>>
> >>>    On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
> >>>    <ma...@gmail.com>> wrote:
> >>>> Are you running it via `mvn exec:java` by chance or netbeans?
> >>>
> >>>
> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
> >>>>
> >>>> If that's just a background thread writing in Stats, it might
> >>>    just be a
> >>>> factor of how you're invoking the program and you can ignore it.
> >>>    I don't
> >>>> know enough about the inner-workings of GeoMesa to say one way or
> >>>    the other.
> >>>>
> >>>>
> >>>> Xu (Simon) Chen wrote:
> >>>>>
> >>>>> Josh,
> >>>>>
> >>>>> Everything works well, except for one thing :-)
> >>>>>
> >>>>> I am running the geomesa-quickstart program that ingests some data
> >>>    and then
> >>>>> perform a simple query:
> >>>>> https://github.com/geomesa/geomesa-quickstart
> >>>>>
> >>>>> For some reason, the following error is emitted consistently at
> >>> the
> >>>>> end of the execution, after outputting the correct result:
> >>>>> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
> >>>>> java.lang.InterruptedException
> >>>>>         at java.lang.Object.wait(Native Method)
> >>>>>         at java.lang.Object.wait(Object.java:503)
> >>>>>         at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
> >>>>>         at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
> >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
> >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
> >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
> >>>>>         at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
> >>>>>         at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
> >>>>>         at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
> >>>>>         at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
> >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
> >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
> >>>>>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
> >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
> >>>>>         at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
> >>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >>>>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> >>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> >>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> >>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>>>>         at java.lang.Thread.run(Thread.java:745)
> >>>>>
> >>>>> This is more annoying than a real problem. I am new to both
> >>> accumulo
> >>>>> and geomesa, but I am curious what the problem might be.
> >>>>>
> >>>>> Thanks!
> >>>>> -Simon
> >>>>>
> >>>>>
> >>>>> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
> >>>    <ma...@gmail.com>>  wrote:
> >>>>>>
> >>>>>> Great! Glad to hear it. Please let us know how it works out!
> >>>>>>
> >>>>>>
> >>>>>> Xu (Simon) Chen wrote:
> >>>>>>>
> >>>>>>> Josh,
> >>>>>>>
> >>>>>>> You're right again.. Thanks!
> >>>>>>>
> >>>>>>> My ansible play actually pushed client.conf to all the server
> >>>    config
> >>>>>>> directories, but didn't do anything for the clients, and that's
> >>> my
> >>>>>>> problem. Now kerberos is working great for me.
> >>>>>>>
> >>>>>>> Thanks again!
> >>>>>>> -Simon
> >>>>>>>
> >>>>>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
> >>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
> >>>
> >>>>>>> wrote:
> >>>>>>>>
> >>>>>>>> Simon,
> >>>>>>>>
> >>>>>>>> Did you create a client configuration file (~/.accumulo/config
> >>> or
> >>>>>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
> >>>    Accumulo clients
> >>>>>>>> to
> >>>>>>>> actually use SASL when you're trying to use Kerberos
> >>>    authentication.
> >>>>>>>> Your
> >>>>>>>> server is expecting that, but I would venture a guess that
> >>>    your client
> >>>>>>>> isn't.
> >>>>>>>>
> >>>>>>>> See
> >>>
> >>>
> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Xu (Simon) Chen wrote:
> >>>>>>>>>
> >>>>>>>>> Josh,
> >>>>>>>>>
> >>>>>>>>> Thanks. It makes sense...
> >>>>>>>>>
> >>>>>>>>> I used a KerberosToken, but my program got stuck when
> >>>    running the
> >>>>>>>>> following:
> >>>>>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user,
> >>>>>>>>> krbToken)
> >>>>>>>>>
> >>>>>>>>> It looks like my client is stuck here:
> >>>
> >>>
> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
> >>>>>>>>> failing in the receive part of
> >>>
> >>>
> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
> >>>>>>>>>
> >>>>>>>>> On my tservers, I see the following:
> >>>>>>>>>
> >>>>>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error occurred during processing of message.
> >>>>>>>>> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
> >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
> >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
> >>>>>>>>>           at java.security.AccessController.doPrivileged(Native Method)
> >>>>>>>>>           at javax.security.auth.Subject.doAs(Subject.java:356)
> >>>>>>>>>           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
> >>>>>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
> >>>>>>>>>           at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
> >>>>>>>>>           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>>>>>>>>           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>>>>>>>>           at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
> >>>>>>>>>           at java.lang.Thread.run(Thread.java:745)
> >>>>>>>>> Caused by: org.apache.thrift.transport.TTransportException:
> >>>>>>>>> java.net.SocketTimeoutException: Read timed out
> >>>>>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
> >>>>>>>>>           at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> >>>>>>>>>           at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
> >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
> >>>>>>>>>           at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
> >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> >>>>>>>>>           at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> >>>>>>>>>           ... 11 more
> >>>>>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
> >>>>>>>>>           at java.net.SocketInputStream.socketRead0(Native Method)
> >>>>>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:152)
> >>>>>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:122)
> >>>>>>>>>           at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> >>>>>>>>>           at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> >>>>>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> >>>>>>>>>           ... 17 more
> >>>>>>>>>
> >>>>>>>>> Any ideas why?
> >>>>>>>>>
> >>>>>>>>> Thanks.
> >>>>>>>>> -Simon
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
> >>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
> >>>
> >>>>>>>>> wrote:
> >>>>>>>>>>
> >>>>>>>>>> Make sure you read the JavaDoc on DelegationToken:
> >>>>>>>>>>
> >>>>>>>>>> <snip>
> >>>>>>>>>> Obtain a delegation token by calling {@link
> >>>
> >>>
> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
> >>>>>>>>>> </snip>
> >>>>>>>>>>
> >>>>>>>>>> You cannot create a usable DelegationToken as the client
> >>>    itself.
> >>>>>>>>>>
> >>>>>>>>>> Anyways, DelegationTokens are only relevant in cases where
> >>>    the client
> >>>>>>>>>> Kerberos credentials are unavailable. The most common case
> >>>    is running
> >>>>>>>>>> MapReduce jobs. If you are just interacting with Accumulo
> >>>    through the
> >>>>>>>>>> Java
> >>>>>>>>>> API, the KerberosToken is all you need to use.
> >>>>>>>>>>
> >>>>>>>>>> The user-manual likely just needs to be updated. I believe
> >>> the
> >>>>>>>>>> DelegationTokenConfig was added after I wrote the initial
> >>>>>>>>>> documentation.
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Xu (Simon) Chen wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> Hi folks,
> >>>>>>>>>>>
> >>>>>>>>>>> The latest kerberos doc seems to indicate that
> >>>    getDelegationToken
> >>>>>>>>>>> can
> >>>>>>>>>>> be
> >>>>>>>>>>> called without any parameters:
> >>>
> >>>
> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
> >>>>>>>>>>>
> >>>>>>>>>>> Yet the source code indicates a DelegationTokenConfig
> >>>    object must be
> >>>>>>>>>>> passed in:
> >>>
> >>>
> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
> >>>>>>>>>>>
> >>>>>>>>>>> Any ideas on how I should construct the
> >>> DelegationTokenConfig
> >>>>>>>>>>> object?
> >>>>>>>>>>>
> >>>>>>>>>>> For context, I've been trying to get geomesa to work on my
> >>>    accumulo
> >>>>>>>>>>> 1.7
> >>>>>>>>>>> with kerberos turned on. Right now, the code is somewhat
> >>>    tied to
> >>>>>>>>>>> password auth:
> >>>
> >>>
> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
> >>>>>>>>>>> My thought is that I should get a KerberosToken first, and
> >>>    then try
> >>>>>>>>>>> generate a DelegationToken, which is passed back for later
> >>>>>>>>>>> interactions
> >>>>>>>>>>> between geomesa and accumulo.
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks.
> >>>>>>>>>>> -Simon
> >
> >
> > --
> > Sean
>

Re: Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Posted by Christopher <ct...@apache.org>.
+1 to deprecating in 1.8.0.

Additionally, I would be +1 to accelerating its removal in 2.0. I
think it has become a burden that is inhibiting progress in other
areas. I would like to see it dispassionately ripped out in favor of
properly mocked unit testing (with EasyMock or similar) and MAC-based
(MiniAccumuloCluster) integration testing.
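[For readers unfamiliar with the MAC-based approach suggested above, a minimal sketch follows. This is an illustration, not project code: the directory, instance password, and table name are placeholders, and it assumes the accumulo-minicluster artifact on the classpath.]

```java
import java.io.File;
import java.nio.file.Files;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.minicluster.MiniAccumuloCluster;

public class MacSketch {
  public static void main(String[] args) throws Exception {
    // MAC writes its config, logs, and data under this temp directory.
    File dir = Files.createTempDirectory("mac-test").toFile();
    MiniAccumuloCluster mac = new MiniAccumuloCluster(dir, "test-root-password");
    mac.start();
    try {
      // Real ZooKeeper and tserver processes back this Connector, so
      // iterators, visibilities, etc. behave as they do in production.
      Connector conn = mac.getConnector("root", "test-root-password");
      conn.tableOperations().create("example");
      System.out.println(conn.tableOperations().exists("example"));
    } finally {
      mac.stop();
    }
  }
}
```

Unlike MockAccumulo, this runs the real server code paths, at the cost of slower test startup.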

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Mon, Jun 8, 2015 at 11:42 PM, Ryan Leary <ry...@bbn.com> wrote:
> Just an anecdote from someone who has been bitten by mock more than a couple times. I would try to deprecate it in 1.8 and remove in 2.0 if at all possible. People really shouldn't write tests against it.
>
>> On Jun 8, 2015, at 10:10 PM, Sean Busbey <bu...@cloudera.com> wrote:
>>
>> Josh's comment below made me realize we still haven't formally deprecated
>> MockAccumulo.
>>
>> What do folks think about doing it soon-ish with an aim of removing it in
>> Accumulo 3.0? (that's version three, so that it can remain deprecated for
>> all of version 2).
>>
>>
>> -Sean
>>
>>> On Sun, Jun 7, 2015 at 12:37 PM, Josh Elser <jo...@gmail.com> wrote:
>>>
>>> MiniAccumulo, yes. MockAccumulo, no. In general, we've near completely
>>> moved away from MockAccumulo. I wouldn't be surprised if it gets deprecated
>>> and removed soon.
>>>
>>>
>>> https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
>>>
>>> Apache Directory provides a MiniKdc that can be used easily w/
>>> MiniAccumulo. Many of the integration tests have already been altered to
>>> support running w/ or w/o kerberos.
>>>
>>> James Hughes wrote:
>>>
>>>> Hi all,
>>>>
>>>> For GeoMesa, stats writing is quite secondary and optional, so we can
>>>> sort that out as a follow-on to seeing GeoMesa work with Accumulo 1.7.
>>>>
>>>> I haven't had a chance to read in details yet, so forgive me if this is
>>>> covered in the docs.  Does either Mock or MiniAccumulo provide enough
>>>> hooks to test out Kerberos integration effectively?  I suppose I'm
>>>> really asking what kind of testing environment a project like GeoMesa
>>>> would need to use to test out Accumulo 1.7.
>>>>
>>>> Even though MockAccumulo has a number of limitations, it is rather
>>>> effective in unit tests which can be part of a quick  build.
>>>>
>>>> Thanks,
>>>>
>>>> Jim
>>>>
>>>> On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <xchenum@gmail.com
>>>> <ma...@gmail.com>> wrote:
>>>>
>>>>    Nope, I am running the example as what the readme file suggested:
>>>>
>>>>    java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
>>>>    org.geomesa.QuickStart -instanceId somecloud -zookeepers
>>>>    "zoo1:2181,zoo2:2181,zoo3:2181" -user someuser -password somepwd
>>>>    -tableName sometable
>>>>
>>>>    I'll raise this question with the geomesa folks, but you're right that
>>>>    I can ignore it for now...
>>>>
>>>>    Thanks!
>>>>    -Simon
>>>>
>>>>
>>>>    On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <josh.elser@gmail.com
>>>>    <ma...@gmail.com>> wrote:
>>>>> Are you running it via `mvn exec:java` by chance or netbeans?
>>>>
>>>> http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%3C547A9071.1020704@gmail.com%3E
>>>>>
>>>>> If that's just a background thread writing in Stats, it might
>>>>    just be a
>>>>> factor of how you're invoking the program and you can ignore it.
>>>>    I don't
>>>>> know enough about the inner-workings of GeoMesa to say one way or
>>>>    the other.
>>>>>
>>>>>
>>>>> Xu (Simon) Chen wrote:
>>>>>>
>>>>>> Josh,
>>>>>>
>>>>>> Everything works well, except for one thing :-)
>>>>>>
>>>>>> I am running geomesa-quickstart program that ingest some data
>>>>    and then
>>>>>> perform a simple query:
>>>>>> https://github.com/geomesa/geomesa-quickstart
>>>>>>
>>>>>> For some reason, the following error is emitted consistently at
>>>> the
>>>>>> end of the execution, after outputting the correct result:
>>>>>> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will
>>>>    retry
>>>>>> java.lang.InterruptedException
>>>>>>         at java.lang.Object.wait(Native Method)
>>>>>>         at java.lang.Object.wait(Object.java:503)
>>>>>>         at
>>>> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>>>>>>         at
>>>>    org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
>>>>>>         at
>>>> org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
>>>>>>         at
>>>> org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
>>>>>>         at
>>>>>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
>>>>>>         at
>>>>>> org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
>>>>>>         at
>>>>
>>>> org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
>>>>>>         at
>>>>
>>>> org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
>>>>>>         at
>>>>
>>>> org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
>>>>>>         at
>>>>
>>>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
>>>>>>         at
>>>>
>>>> org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
>>>>>>         at
>>>>    scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>>>>>>         at
>>>>
>>>> org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
>>>>>>         at
>>>>
>>>> org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
>>>>>>         at
>>>>
>>>> org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
>>>>>>         at
>>>>
>>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>>>         at
>>>>>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>>>>>>         at
>>>>
>>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>>>>>         at
>>>>
>>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>>>>>         at
>>>>
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>>         at
>>>>
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>>>
>>>>>> This is more annoying than a real problem. I am new to both
>>>> accumulo
>>>>>> and geomesa, but I am curious what the problem might be.
>>>>>>
>>>>>> Thanks!
>>>>>> -Simon
>>>>>>
>>>>>>
>>>>>> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser<josh.elser@gmail.com
>>>>    <ma...@gmail.com>>  wrote:
>>>>>>>
>>>>>>> Great! Glad to hear it. Please let us know how it works out!
>>>>>>>
>>>>>>>
>>>>>>> Xu (Simon) Chen wrote:
>>>>>>>>
>>>>>>>> Josh,
>>>>>>>>
>>>>>>>> You're right again.. Thanks!
>>>>>>>>
>>>>>>>> My ansible play actually pushed client.conf to all the server
>>>>    config
>>>>>>>> directories, but didn't do anything for the clients, and that's
>>>> my
>>>>>>>> problem. Now kerberos is working great for me.
>>>>>>>>
>>>>>>>> Thanks again!
>>>>>>>> -Simon
>>>>>>>>
>>>>>>>> On Sat, Jun 6, 2015 at 5:04 PM, Josh
>>>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>>>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Simon,
>>>>>>>>>
>>>>>>>>> Did you create a client configuration file (~/.accumulo/config
>>>> or
>>>>>>>>> $ACCUMULO_CONF_DIR/client.conf)? You need to configure
>>>>    Accumulo clients
>>>>>>>>> to
>>>>>>>>> actually use SASL when you're trying to use Kerberos
>>>>    authentication.
>>>>>>>>> Your
>>>>>>>>> server is expecting that, but I would venture a guess that
>>>>    your client
>>>>>>>>> isn't.
>>>>>>>>>
>>>>>>>>> See
>>>>
>>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
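[Editor's note: the client-side SASL settings described above look roughly like the following in `~/.accumulo/config` or `$ACCUMULO_CONF_DIR/client.conf`. Property names are as I recall them from the 1.7 Kerberos chapter; verify against the manual linked above.]

```properties
# Enable SASL (Kerberos) for client RPCs
instance.rpc.sasl.enabled=true
# First component of the Accumulo server principal, e.g. accumulo/_HOST@REALM
kerberos.server.primary=accumulo
```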
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Xu (Simon) Chen wrote:
>>>>>>>>>>
>>>>>>>>>> Josh,
>>>>>>>>>>
>>>>>>>>>> Thanks. It makes sense...
>>>>>>>>>>
>>>>>>>>>> I used a KerberosToken, but my program got stuck when
>>>>    running the
>>>>>>>>>> following:
>>>>>>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user,
>>>>>>>>>> krbToken)
>>>>>>>>>>
>>>>>>>>>> It looks like my client is stuck here:
>>>>
>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>>>>>>>>> failing in the receive part of
>>>>
>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
>>>>>>>>>>
>>>>>>>>>> On my tservers, I see the following:
>>>>>>>>>>
>>>>>>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR:
>>>> Error
>>>>>>>>>> occurred during processing of message.
>>>>>>>>>> java.lang.RuntimeException:
>>>>>>>>>> org.apache.thrift.transport.TTransportException:
>>>>>>>>>> java.net.SocketTimeoutException: Read timed out
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>>>>>>>>>           at
>>>>
>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>>>>>>>>>           at
>>>>
>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>>>>>>>>>           at
>>>> java.security.AccessController.doPrivileged(Native
>>>>>>>>>> Method)
>>>>>>>>>>           at
>>>> javax.security.auth.Subject.doAs(Subject.java:356)
>>>>>>>>>>           at
>>>>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>>>>>>>>>           at
>>>>
>>>> org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>>>>>>>>>           at
>>>>
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>>>>>>           at
>>>>
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>>>>>>           at
>>>>
>>>> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>>>>>>>>>           at java.lang.Thread.run(Thread.java:745)
>>>>>>>>>> Caused by: org.apache.thrift.transport.TTransportException:
>>>>>>>>>> java.net.SocketTimeoutException: Read timed out
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>>>>>>>>>           at
>>>>    org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>>>>>>>>>           ... 11 more
>>>>>>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>>>>>>>>           at java.net.SocketInputStream.socketRead0(Native
>>>>    Method)
>>>>>>>>>>           at
>>>>>>>>>> java.net.SocketInputStream.read(SocketInputStream.java:152)
>>>>>>>>>>           at
>>>>>>>>>> java.net.SocketInputStream.read(SocketInputStream.java:122)
>>>>>>>>>>           at
>>>> java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>>>>>>>>>           at
>>>>>>>>>> java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>>>>>>>>>           at
>>>>
>>>> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>>>>>>>>           ... 17 more
>>>>>>>>>>
>>>>>>>>>> Any ideas why?
>>>>>>>>>>
>>>>>>>>>> Thanks.
>>>>>>>>>> -Simon
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh
>>>>    Elser<josh.elser@gmail.com <ma...@gmail.com>>
>>>>
>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Make sure you read the JavaDoc on DelegationToken:
>>>>>>>>>>>
>>>>>>>>>>> <snip>
>>>>>>>>>>> Obtain a delegation token by calling {@link
>>>>
>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
>>>>>>>>>>> </snip>
>>>>>>>>>>>
>>>>>>>>>>> You cannot create a usable DelegationToken as the client
>>>>    itself.
>>>>>>>>>>>
>>>>>>>>>>> Anyways, DelegationTokens are only relevant in cases where
>>>>    the client
>>>>>>>>>>> Kerberos credentials are unavailable. The most common case
>>>>    is running
>>>>>>>>>>> MapReduce jobs. If you are just interacting with Accumulo
>>>>    through the
>>>>>>>>>>> Java
>>>>>>>>>>> API, the KerberosToken is all you need to use.
>>>>>>>>>>>
>>>>>>>>>>> The user-manual likely just needs to be updated. I believe
>>>> the
>>>>>>>>>>> DelegationTokenConfig was added after I wrote the initial
>>>>>>>>>>> documentation.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Xu (Simon) Chen wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Hi folks,
>>>>>>>>>>>>
>>>>>>>>>>>> The latest kerberos doc seems to indicate that
>>>>    getDelegationToken
>>>>>>>>>>>> can
>>>>>>>>>>>> be
>>>>>>>>>>>> called without any parameters:
>>>>
>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
>>>>>>>>>>>>
>>>>>>>>>>>> Yet the source code indicates a DelegationTokenConfig
>>>>    object must be
>>>>>>>>>>>> passed in:
>>>>
>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
>>>>>>>>>>>>
>>>>>>>>>>>> Any ideas on how I should construct the
>>>> DelegationTokenConfig
>>>>>>>>>>>> object?
>>>>>>>>>>>>
>>>>>>>>>>>> For context, I've been trying to get geomesa to work on my
>>>>    accumulo
>>>>>>>>>>>> 1.7
>>>>>>>>>>>> with kerberos turned on. Right now, the code is somewhat
>>>>    tied to
>>>>>>>>>>>> password auth:
>>>>
>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>>>>>>>>>> My thought is that I should get a KerberosToken first, and
>>>>    then try
>>>>>>>>>>>> generate a DelegationToken, which is passed back for later
>>>>>>>>>>>> interactions
>>>>>>>>>>>> between geomesa and accumulo.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks.
>>>>>>>>>>>> -Simon
>>
>>
>> --
>> Sean
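[Editor's note: to answer the DelegationTokenConfig question quoted above, here is a rough sketch against the 1.7 API. The instance name, ZooKeeper host, and principal are placeholders, and the setter name should be checked against the DelegationTokenConfig javadoc; this also requires a live Kerberos login (ticket cache or keytab).]

```java
import java.util.concurrent.TimeUnit;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
import org.apache.accumulo.core.client.security.tokens.KerberosToken;

public class DelegationTokenSketch {
  public static void main(String[] args) throws Exception {
    // 1. Authenticate as yourself with Kerberos first; a usable
    //    DelegationToken cannot be constructed client-side.
    KerberosToken krbToken = new KerberosToken("user@EXAMPLE.COM");
    Connector conn = new ZooKeeperInstance("somecloud", "zoo1:2181")
        .getConnector("user@EXAMPLE.COM", krbToken);

    // 2. Ask the cluster for a delegation token. The config can request
    //    a lifetime, which the server may cap at its configured maximum.
    DelegationTokenConfig cfg = new DelegationTokenConfig();
    cfg.setTokenLifetime(12, TimeUnit.HOURS);
    AuthenticationToken delToken = conn.securityOperations().getDelegationToken(cfg);

    // 3. Hand delToken to code that runs without Kerberos credentials
    //    (e.g. MapReduce tasks) in place of the KerberosToken.
  }
}
```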

Re: Deprecating Mock Accumulo (was Re: kerberos auth, getDelegationToken)

Posted by Ryan Leary <ry...@bbn.com>.
Just an anecdote from someone who has been bitten by mock more than a couple times. I would try to deprecate it in 1.8 and remove in 2.0 if at all possible. People really shouldn't write tests against it. 

> On Jun 8, 2015, at 10:10 PM, Sean Busbey <bu...@cloudera.com> wrote:
> 
> Josh's comment below made me realize we still haven't formally deprecated
> MockAccumulo.
> 
> What do folks think about doing it soon-ish with an aim of removing it in
> Accumulo 3.0? (that's version three, so that it can remain deprecated for
> all of version 2).
> 
> 
> -Sean
> 
>> [quoted thread trimmed; identical to the copy quoted in the previous message]
> 
> 
> -- 
> Sean