Posted to notifications@accumulo.apache.org by GitBox <gi...@apache.org> on 2021/02/01 16:58:17 UTC

[GitHub] [accumulo] dlmarion commented on a change in pull request #1892: Converting table creation to pre-split

dlmarion commented on a change in pull request #1892:
URL: https://github.com/apache/accumulo/pull/1892#discussion_r567984822



##########
File path: test/src/main/java/org/apache/accumulo/test/BulkImportMonitoringIT.java
##########
@@ -65,15 +67,21 @@ protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSit
   public void test() throws Exception {
     getCluster().getClusterControl().start(ServerType.MONITOR);
     try (AccumuloClient c = Accumulo.newClient().from(getClientProperties()).build()) {
+
+      // creating table name
       final String tableName = getUniqueNames(1)[0];
-      c.tableOperations().create(tableName);
-      c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "1");
-      // splits to slow down bulk import
+      // creating splits
       SortedSet<Text> splits = new TreeSet<>();
       for (int i = 1; i < 0xf; i++) {
         splits.add(new Text(Integer.toHexString(i)));
       }

Review comment:
      @milleruntime I agree with your last two comments. While the stream operations make for tidy code, there are some not-so-obvious differences between them and plain for-loops; a rough sketch of the comparison follows below.
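
      For illustration only (this is not part of the PR's diff), here is a minimal sketch that puts the for-loop from the hunk above next to a stream-based equivalent. The class name SplitStyles is made up, and it assumes hadoop-common is on the classpath for org.apache.hadoop.io.Text. Both methods should produce the same sorted set of hex split points; the stream variant just has to route the result through a collector and cannot throw checked exceptions from inside the lambda without wrapping, which is the kind of less obvious difference mentioned above.

        import java.util.SortedSet;
        import java.util.TreeSet;
        import java.util.stream.Collectors;
        import java.util.stream.IntStream;

        import org.apache.hadoop.io.Text;

        public class SplitStyles {

          // For-loop version, as in the diff above: builds split points "1" through "e".
          static SortedSet<Text> splitsWithLoop() {
            SortedSet<Text> splits = new TreeSet<>();
            for (int i = 1; i < 0xf; i++) {
              splits.add(new Text(Integer.toHexString(i)));
            }
            return splits;
          }

          // Stream version: same result, but the elements must be gathered through a
          // collector, and any checked exception thrown inside the lambdas would need
          // to be wrapped in an unchecked one.
          static SortedSet<Text> splitsWithStream() {
            return IntStream.range(1, 0xf)
                .mapToObj(Integer::toHexString)
                .map(Text::new)
                .collect(Collectors.toCollection(TreeSet::new));
          }

          public static void main(String[] args) {
            // Both approaches should yield identical sorted sets.
            System.out.println(splitsWithLoop().equals(splitsWithStream())); // prints: true
          }
        }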




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org