Posted to user@accumulo.apache.org by Aaron <aa...@gmail.com> on 2013/05/21 16:48:32 UTC

POM dependencies for an Accumulo Client

Trying to create a simple Servlet that acts as an Accumulo client, and
working through the fun of runtime dependencies on a Tomcat 7 server
(7.0.40, to be exact).

Basically, has anyone been through this before?  I seem to be having some
Jersey conflicts as well as Servlet 2.5 vs 3.0 conflicts.

Second part: which Accumulo dependencies do we need?  I would assume at
least accumulo-core, but do we need all the others?  Proxy?  FATE?  etc.

If I get this figured out before I hear anything, I'll post my pom here.

Thanks in advance,
Aaron

Re: POM dependencies for an Accumulo Client

Posted by Aaron <aa...@gmail.com>.
Yah, sorry about that; I should have added a note that we are using CDH4.2
and are currently looking at CDH4.3, with which the simple tests we are
running appear to work fine.


On Wed, May 29, 2013 at 1:45 PM, Billie Rinaldi <bi...@gmail.com> wrote:

> On Wed, May 29, 2013 at 10:29 AM, Aaron <aa...@gmail.com> wrote:
>
>> [minimal dependency list and unit tests snipped; see Aaron's full
>> message below]
>
> hadoop-common doesn't exist for hadoop 1.  hadoop-client exists for hadoop
> 1 and 2.  You may need additional dependencies if you use it instead (such
> as commons-httpclient).
>
> Billie

Re: POM dependencies for an Accumulo Client

Posted by Billie Rinaldi <bi...@gmail.com>.
On Wed, May 29, 2013 at 10:29 AM, Aaron <aa...@gmail.com> wrote:

> Wow... so, I made you copy a bunch of unnecessary things.  I wrote a bunch
> of simple unit tests, and came up with a minimal set of dependencies:
>
> <dependencies>
>   <dependency>
>     <groupId>log4j</groupId>
>     <artifactId>log4j</artifactId>
>     <version>1.2.17</version>
>   </dependency>
>   <dependency>
>     <groupId>org.apache.accumulo</groupId>
>     <artifactId>accumulo-core</artifactId>
>     <version>${accumulo.version}</version>
>   </dependency>
>   <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-common</artifactId>
>     <version>${hadoop.version}</version>
>   </dependency>
>

hadoop-common doesn't exist for hadoop 1.  hadoop-client exists for hadoop
1 and 2.  You may need additional dependencies if you use it instead (such
as commons-httpclient).
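
For example, swapping in hadoop-client would look something like this
(an untested sketch; the commons-httpclient version here is a guess, so
match it to whatever your Hadoop distribution actually ships):

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>${hadoop.version}</version>
</dependency>
<!-- possibly needed at runtime when using hadoop-client; version is a guess -->
<dependency>
  <groupId>commons-httpclient</groupId>
  <artifactId>commons-httpclient</artifactId>
  <version>3.1</version>
</dependency>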

Billie


> [remainder of Aaron's message (junit dependency and unit tests) snipped;
> see his full message below]

Re: POM dependencies for an Accumulo Client

Posted by Aaron <aa...@gmail.com>.
Wow... so, I made you copy a bunch of unnecessary things.  I wrote a bunch
of simple unit tests, and came up with a minimal set of dependencies:

<dependencies>
  <dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
  </dependency>
  <dependency>
    <groupId>org.apache.accumulo</groupId>
    <artifactId>accumulo-core</artifactId>
    <version>${accumulo.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>
</dependencies>

The accumulo-core and hadoop-common dependencies transitively pull in all
the necessary extras, like zookeeper and the other Accumulo artifacts.
Below are my simple unit tests.

import java.util.Arrays;
import java.util.List;
import java.util.Map.Entry;
import java.util.UUID;

import org.apache.accumulo.core.client.AccumuloException;
import org.apache.accumulo.core.client.AccumuloSecurityException;
import org.apache.accumulo.core.client.BatchDeleter;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.TableExistsException;
import org.apache.accumulo.core.client.TableNotFoundException;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.SecurityOperations;
import org.apache.accumulo.core.client.admin.TableOperations;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.security.ColumnVisibility;
import org.apache.accumulo.core.security.TablePermission;
import org.apache.hadoop.io.Text;
import org.apache.log4j.Logger;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class CRUDTest
{
  private static final Logger log = Logger.getLogger(CRUDTest.class);

  private static final String zookeeperHost = "172.100.100.101";
  private static final String instanceName = "testinstance";
  private static final String root = "root";
  private static final String pass = "accumulopass";

  private static final String tableName = "testtable";
  private static final String testuser = "testuser";
  private static final String testpass = "testpass";
  private static final String auths = "testauth";

  private Instance instance;
  private Connector testUserConn;

  @Before
  public void setup()
  {
    this.instance = new ZooKeeperInstance(instanceName, zookeeperHost, 20000);
    try
    {
      Connector rootConn = instance.getConnector(root, new PasswordToken(pass));
      log.info("Have the connector");

      TableOperations tableOps = rootConn.tableOperations();
      if (!tableOps.exists(tableName))
        tableOps.create(tableName);

      // Make sure our user has the proper permissions/auths
      SecurityOperations securOps = rootConn.securityOperations();
      securOps.createLocalUser(testuser, new PasswordToken(testpass));
      securOps.changeUserAuthorizations(testuser, new Authorizations(auths));
      securOps.grantTablePermission(testuser, tableName, TablePermission.WRITE);
      securOps.grantTablePermission(testuser, tableName, TablePermission.READ);

      this.testUserConn = instance.getConnector(testuser, new PasswordToken(testpass));
    }
    catch (AccumuloException | AccumuloSecurityException | TableExistsException e)
    {
      e.printStackTrace();
    }
  }

  @After
  public void teardown()
  {
    try
    {
      Connector rootConn = this.instance.getConnector(root, new PasswordToken(pass));
      TableOperations tableOps = rootConn.tableOperations();
      SecurityOperations securOps = rootConn.securityOperations();

      tableOps.delete(tableName);
      securOps.dropLocalUser(testuser);
    }
    catch (AccumuloException | AccumuloSecurityException | TableNotFoundException e)
    {
      e.printStackTrace();
    }
  }

  @Test
  public void testSimpleCreate()
  {
    String rowId = UUID.randomUUID().toString();
    Text rowID = new Text(rowId);
    Text colFam = new Text("myColFam");
    Text colQual = new Text("myColQual");
    ColumnVisibility vis = new ColumnVisibility(auths);

    String strValue = UUID.randomUUID().toString();
    log.info("strValue = " + strValue);
    Value value = new Value(strValue.getBytes());

    Mutation mutation = new Mutation(rowID);
    mutation.put(colFam, colQual, vis, System.currentTimeMillis(), value);

    BatchWriterConfig config = new BatchWriterConfig();
    try
    {
      BatchWriter writer = this.testUserConn.createBatchWriter(tableName, config);
      writer.addMutation(mutation);
      writer.flush();
      writer.close();

      // Now let's see if we can see it
      Scanner scan = this.testUserConn.createScanner(tableName, new Authorizations(auths));
      Range range = new Range(rowId);
      scan.setRange(range);
      scan.fetchColumnFamily(colFam);

      for (Entry<Key,Value> entry : scan)
      {
        log.info("key = " + entry.getKey().toString());
        log.info("value = " + entry.getValue().toString());
        Assert.assertTrue(strValue.equals(entry.getValue().toString()));
      }
      scan.close();
    }
    catch (TableNotFoundException | MutationsRejectedException e)
    {
      e.printStackTrace();
    }
  }

  @Test
  public void testSimpleUpdate()
  {
    Text rowID = new Text("rowid_update");
    Text colFam = new Text("myColFam");
    Text colQual = new Text("myColQual");
    ColumnVisibility vis = new ColumnVisibility(auths);

    String strValue = UUID.randomUUID().toString();
    log.info("strValue = " + strValue);
    Value value = new Value(strValue.getBytes());

    Mutation origMutation = new Mutation(rowID);
    origMutation.put(colFam, colQual, vis, System.currentTimeMillis(), value);

    BatchWriterConfig config = new BatchWriterConfig();
    try
    {
      BatchWriter writer = this.testUserConn.createBatchWriter(tableName, config);

      // Add the entry to be modified
      writer.addMutation(origMutation);
      writer.flush();

      // Now let's change the value/timestamp
      Mutation updateMutation = new Mutation(rowID);
      updateMutation.put(colFam, colQual, vis, System.currentTimeMillis(),
          new Value(UUID.randomUUID().toString().getBytes()));
      writer.addMutation(updateMutation);
      writer.flush();
      writer.close();

      // Now let's see if we can see it
      Scanner scan = this.testUserConn.createScanner(tableName, new Authorizations(auths));
      Range range = new Range(rowID);
      scan.setRange(range);
      scan.fetchColumnFamily(colFam);

      for (Entry<Key,Value> entry : scan)
      {
        log.info("key = " + entry.getKey().toString());
        log.info("value = " + entry.getValue().toString());
        Assert.assertFalse(strValue.equals(entry.getValue().toString()));
      }
      scan.close();
    }
    catch (TableNotFoundException | MutationsRejectedException e)
    {
      e.printStackTrace();
    }
  }

  @Test
  public void testSimpleDelete()
  {
    Text rowID = new Text("rowid_delete");
    Text colFam = new Text("myColFam");
    Text colQual = new Text("myColQual");
    // Note: written with "public" visibility, which the "testauth" scan
    // authorizations below do not include, so the scans will not return
    // this entry
    ColumnVisibility vis = new ColumnVisibility("public");

    String strValue = UUID.randomUUID().toString();
    log.info("strValue = " + strValue);
    Value value = new Value(strValue.getBytes());

    BatchWriterConfig config = new BatchWriterConfig();
    try
    {
      BatchWriter writer = this.testUserConn.createBatchWriter(tableName, config);

      // Add the entry to be deleted
      Mutation origMutation = new Mutation(rowID);
      origMutation.put(colFam, colQual, vis, System.currentTimeMillis(), value);
      writer.addMutation(origMutation);
      writer.flush();
      writer.close();

      // Now let's see if we can see it
      Scanner scan = this.testUserConn.createScanner(tableName, new Authorizations(auths));
      Range range = new Range(rowID);
      scan.setRange(range);
      scan.fetchColumnFamily(colFam);

      for (Entry<Key,Value> entry : scan)
      {
        log.info("key = " + entry.getKey().toString());
        log.info("value = " + entry.getValue().toString());
        Assert.assertTrue(strValue.equals(entry.getValue().toString()));
      }

      // Delete it
      BatchDeleter deleter = this.testUserConn.createBatchDeleter(tableName,
          new Authorizations(auths), 1, config);
      List<Range> ranges = Arrays.asList(new Range(rowID));
      deleter.setRanges(ranges);
      deleter.delete();
      deleter.close();

      // Re-scan; nothing should come back after the delete
      scan.fetchColumnFamily(colFam);
      for (Entry<Key,Value> entry : scan)
      {
        log.info("key = " + entry.getKey().toString());
        log.info("value = " + entry.getValue().toString());
      }
      scan.close();
    }
    catch (TableNotFoundException | MutationsRejectedException e)
    {
      e.printStackTrace();
    }
  }
}



On Tue, May 28, 2013 at 9:38 PM, albertoaflores <aa...@gmail.com> wrote:

> Thanks Aaron, I'll give these a shot...
>
>
>
>

Re: POM dependencies for an Accumulo Client

Posted by albertoaflores <aa...@gmail.com>.
Thanks Aaron, I'll give these a shot...




Re: POM dependencies for an Accumulo Client

Posted by Aaron Glahe <aa...@gmail.com>.
Some things to note:

1.  The Servlet and Mongo driver dependencies are specific to my servlet
(I didn't take them out, just in case folks wanted/needed them as an
example).  My servlet ran in Tomcat 7.
2.  The Accumulo parts I commented out I didn't need for my specific case,
e.g. no proxy.
3.  In my implementation, I tried at first to get rid of FATE & Trace,
but those were required at runtime.
4.  I used CDH4.2.1 MRv1 as my Hadoop libs.


	<dependencies>
		<dependency>
			<groupId>com.sun.jersey</groupId>
			<artifactId>jersey-servlet</artifactId>
			<version>1.17.1</version>
		</dependency>
		<dependency>
			<groupId>com.sun.jersey</groupId>
			<artifactId>jersey-server</artifactId>
			<version>1.17.1</version>
		</dependency>
		<dependency>
			<groupId>javax.servlet</groupId>
			<artifactId>javax.servlet-api</artifactId>
			<version>3.0.1</version>
			<scope>provided</scope>
		</dependency>

		<dependency>
			<groupId>org.mongodb</groupId>
			<artifactId>mongo-java-driver</artifactId>
			<version>2.11.1</version>
		</dependency>

		<dependency>
			<groupId>log4j</groupId>
			<artifactId>log4j</artifactId>
			<version>1.2.17</version>
		</dependency>

		<dependency>
			<groupId>org.apache.zookeeper</groupId>
			<artifactId>zookeeper</artifactId>
			<version>${zookeeper.version}</version>
		</dependency>

		<dependency>
			<groupId>com.google.code.gson</groupId>
			<artifactId>gson</artifactId>
			<version>2.2.3</version>
		</dependency>
		<dependency>
			<groupId>com.google.guava</groupId>
			<artifactId>guava</artifactId>
			<version>14.0.1</version>
		</dependency>
		
		<dependency>
			<groupId>commons-codec</groupId>
			<artifactId>commons-codec</artifactId>
			<version>1.4</version>
		</dependency>
		<dependency>
			<groupId>commons-collections</groupId>
			<artifactId>commons-collections</artifactId>
			<version>3.2.1</version>
		</dependency>
		<dependency>
			<groupId>commons-configuration</groupId>
			<artifactId>commons-configuration</artifactId>
			<version>1.6</version>
		</dependency>
		<dependency>
			<groupId>commons-io</groupId>
			<artifactId>commons-io</artifactId>
			<version>2.1</version>
		</dependency>
		<dependency>
			<groupId>commons-lang</groupId>
			<artifactId>commons-lang</artifactId>
			<version>2.4</version>
		</dependency>
		<dependency>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging</artifactId>
			<version>1.1.1</version>
		</dependency>
		<dependency>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging-api</artifactId>
			<version>1.0.4</version>
		</dependency>

		<dependency>
			<groupId>org.apache.accumulo</groupId>
			<artifactId>accumulo-fate</artifactId>
			<version>${accumulo.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.accumulo</groupId>
			<artifactId>accumulo-core</artifactId>
			<version>${accumulo.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.accumulo</groupId>
			<artifactId>accumulo-trace</artifactId>
			<version>${accumulo.version}</version>
		</dependency>

<!-- 
		<dependency>
			<groupId>org.apache.accumulo</groupId>
			<artifactId>accumulo-proxy</artifactId>
			<version>${accumulo.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.accumulo</groupId>
			<artifactId>accumulo-server</artifactId>
			<version>${accumulo.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.accumulo</groupId>
			<artifactId>accumulo-start</artifactId>
			<version>${accumulo.version}</version>
		</dependency>
 -->
		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-jci-core</artifactId>
			<version>1.0</version>
		</dependency>
		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-jci-fam</artifactId>
			<version>1.0</version>
		</dependency>
		
		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-math</artifactId>
			<version>2.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-vfs2</artifactId>
			<version>2.0</version>
		</dependency>

		<dependency>
			<groupId>org.apache.thrift</groupId>
			<artifactId>libthrift</artifactId>
			<version>0.9.0</version>
		</dependency>
 
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-api</artifactId>
			<version>${slf4j.version}</version>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-log4j12</artifactId>
			<version>${slf4j.version}</version>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-nop</artifactId>
			<version>${slf4j.version}</version>
		</dependency>

		<dependency>
			<groupId>commons-httpclient</groupId>
			<artifactId>commons-httpclient</artifactId>
			<version>${httpclient.version}</version>
		</dependency>
		
		<dependency>
			<groupId>org.apache.avro</groupId>
			<artifactId>avro</artifactId>
			<version>${avro.version}</version>
		</dependency>
		
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-client</artifactId>
			<version>${hadoop.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-common</artifactId>
			<version>${hadoop.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-distcp</artifactId>
			<version>${hadoop.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-hdfs</artifactId>
			<version>${hadoop.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-mapreduce-client-core</artifactId>
			<version>${hadoop.version}</version>
		</dependency>

<!-- 		
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-minicluster</artifactId>
			<version>${hadoop.version}</version>
		</dependency>
-->	
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.11</version>
			<scope>test</scope>
		</dependency>
	</dependencies>
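
For reference, the ${...} version placeholders above need matching entries
in a <properties> section.  A sketch follows; the exact values are
illustrative for a CDH4.2.1 MRv1 setup (the CDH-versioned artifacts come
from the Cloudera Maven repository), so match them to your own cluster:

	<properties>
		<accumulo.version>1.5.0</accumulo.version>
		<hadoop.version>2.0.0-mr1-cdh4.2.1</hadoop.version>
		<zookeeper.version>3.4.5-cdh4.2.1</zookeeper.version>
		<slf4j.version>1.6.1</slf4j.version>
		<httpclient.version>3.1</httpclient.version>
		<avro.version>1.7.4</avro.version>
	</properties>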


On May 28, 2013, at 9:25 PM, albertoaflores <aa...@gmail.com> wrote:

> Any updates on this yet? 
> 
> Any feedback much appreciated.
> 
> 
> 


Re: POM dependencies for an Accumulo Client

Posted by albertoaflores <aa...@gmail.com>.
Any updates on this yet? 

Any feedback much appreciated.




Re: POM dependencies for an Accumulo Client

Posted by Aaron <aa...@gmail.com>.
Version: 1.5-RC4

I started with what was in the Accumulo parent POM and was slowly taking
out various dependencies and seeing if stuff broke.  I was about to start
taking out all the commons-* artifacts (at least the ones I don't need).

Thanks again.

I'll post my dependency list when I finalize it, just for those who
search on Google.
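
For anyone doing the same pruning: the maven-dependency-plugin can take
some of the guesswork out of it.  A sketch (the plugin version is just a
recent release, not a requirement); note that "mvn dependency:analyze"
only sees compile-time references, so runtime-only jars such as the
trace/fate artifacts mentioned below will still be flagged as unused:

	<build>
		<plugins>
			<!-- "mvn dependency:tree" shows what each artifact pulls in
			     transitively; "mvn dependency:analyze" flags dependencies that
			     are declared but not referenced at compile time -->
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-dependency-plugin</artifactId>
				<version>2.8</version>
			</plugin>
		</plugins>
	</build>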




On Tue, May 21, 2013 at 12:48 PM, Christopher <ct...@apache.org> wrote:

> It depends on which version of Accumulo, but generally, you'll need
> accumulo-core, hadoop-client (or hadoop-core), zookeeper, and
> libthrift. You may also need accumulo-start, accumulo-trace,
> accumulo-fate.
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>
>
> On Tue, May 21, 2013 at 10:48 AM, Aaron <aa...@gmail.com> wrote:
> > [original question snipped; see the top of this thread]
>

Re: POM dependencies for an Accumulo Client

Posted by Christopher <ct...@apache.org>.
It depends on which version of Accumulo, but generally, you'll need
accumulo-core, hadoop-client (or hadoop-core), zookeeper, and
libthrift. You may also need accumulo-start, accumulo-trace,
accumulo-fate.
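
A minimal sketch of that set (untested; the version properties are
placeholders to define yourself, and the libthrift version should match
whatever your Accumulo build actually uses):

<dependencies>
  <dependency>
    <groupId>org.apache.accumulo</groupId>
    <artifactId>accumulo-core</artifactId>
    <version>${accumulo.version}</version>
  </dependency>
  <!-- or hadoop-core, for Hadoop versions that predate hadoop-client -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>${zookeeper.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>${thrift.version}</version>
  </dependency>
</dependencies>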

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Tue, May 21, 2013 at 10:48 AM, Aaron <aa...@gmail.com> wrote:
> [original question snipped; see the top of this thread]