Posted to dev@drill.apache.org by Charles Givre <cg...@gmail.com> on 2018/04/17 17:25:17 UTC

Re: Drill Tests Fail on Local Machine

Hi Tim, 
Here’s the error: 

Results :

Tests in error: 
  MongoTestSuit.initMongo:226 »   Error while starting shrded cluster. 

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 39.848 s
[INFO] Finished at: 2018-04-17T13:24:31-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
[ERROR] 
[ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
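The Maven hints above can be followed directly. A minimal sketch of the diagnostic re-runs, assuming the commands are issued from the Drill source root and that `contrib/storage-mongo` is the failing module (these are standard Maven flags, not Drill-specific):

```shell
# Re-run only the failing module, printing full stack traces for errors (-e)
mvn -e test -pl contrib/storage-mongo

# Same, but with full debug logging (-X) for more detail on the surefire run
mvn -X test -pl contrib/storage-mongo

# Build the whole project while skipping tests, to confirm the failure is
# confined to the test phase and not the build itself
mvn clean package -DskipTests
```

The per-test output referenced in the error (the `surefire-reports` directory) is usually the quickest place to look; the report excerpted below came from there.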



-------------------------------------------------------------------------------
Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
	at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
	at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
	at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
	at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
	at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
	at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)




<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
  <properties>
    <property name="os.detected.version" value="10.13"/>
    <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
    <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
    <property name="java.vm.version" value="25.65-b01"/>
    <property name="gopherProxySet" value="false"/>
    <property name="java.vm.vendor" value="Oracle Corporation"/>
    <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
    <property name="java.vendor.url" value="http://java.oracle.com/"/>
    <property name="os.detected.version.minor" value="13"/>
    <property name="path.separator" value=":"/>
    <property name="guice.disable.misplaced.annotation.check" value="true"/>
    <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
    <property name="file.encoding.pkg" value="sun.io"/>
    <property name="user.country" value="US"/>
    <property name="sun.java.launcher" value="SUN_STANDARD"/>
    <property name="sun.os.patch.level" value="unknown"/>
    <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
    <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
    <property name="java.runtime.version" value="1.8.0_65-b17"/>
    <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
    <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
    <property name="os.arch" value="x86_64"/>
    <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
    <property name="line.separator" value="&#10;"/>
    <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
    <property name="os.detected.version.major" value="10"/>
    <property name="os.name" value="Mac OS X"/>
    <property name="os.detected.classifier" value="osx-x86_64"/>
    <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
    <property name="sun.jnu.encoding" value="UTF-8"/>
    <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
    <property name="os.detected.arch" value="x86_64"/>
    <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
    <property name="java.specification.name" value="Java Platform API Specification"/>
    <property name="java.class.version" value="52.0"/>
    <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
    <property name="os.detected.name" value="osx"/>
    <property name="os.version" value="10.13.4"/>
    <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
    <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
    <property name="user.home" value="/Users/cgivre"/>
    <property name="user.timezone" value="America/New_York"/>
    <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
    <property name="java.specification.version" value="1.8"/>
    <property name="file.encoding" value="UTF-8"/>
    <property name="user.name" value="cgivre"/>
    <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
    <property name="java.vm.specification.version" value="1.8"/>
    <property name="sun.arch.data.model" value="64"/>
    <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
    <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
    <property name="java.specification.vendor" value="Oracle Corporation"/>
    <property name="user.language" value="en"/>
    <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
    <property name="java.vm.info" value="mixed mode"/>
    <property name="java.version" value="1.8.0_65"/>
    <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
    <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
    <property name="java.vendor" value="Oracle Corporation"/>
    <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
    <property name="file.separator" value="/"/>
    <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
    <property name="sun.cpu.endian" value="little"/>
    <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
    <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
    <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
    <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
    <property name="sun.cpu.isalist" value=""/>
  </properties>
  <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
    <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
	at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
	at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
	at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
	at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
	at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
	at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
]]></error>
    <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
[mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
[mongod output] [mongod output]note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
[mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
[mongod output] [mongod output]note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
[mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
[mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
[mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
[mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
[mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
[mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
[mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
[mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
[mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
[mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
[mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
[mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
[mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
[mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
[mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
[mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
[mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
[mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
[mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
[mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
[mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
[mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
[mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
[mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
[mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
[mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
[mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
[mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
[mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
[mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
[mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
[mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
[mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
[mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
[mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
[mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
[mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
[mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
[mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
[mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
[mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
[mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
[mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
[mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
[mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
[mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
[mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
[mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
[mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
[mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
[mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
[mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
[mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
[mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
[mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
[mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
[mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
[mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
[mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
[mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
[mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
[mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
[mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
[mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
[mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
[mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
[mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
[mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
[mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
[mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
[mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
[mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
[mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
[mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
[mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
[mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
[mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
[mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
[mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
[mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
[mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
[mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
[mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
[mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
[mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
[mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
[mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
[mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
[mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
[mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
[mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
[mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
[mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
[mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
[mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
[mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
[mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
[mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
[mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
[mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
[mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
[mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
[mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
[mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
[mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
[mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
[mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
[mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
[mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
[mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
[mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
[mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
[mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
[mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
[mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
[mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
[mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
[mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
[mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
[mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
[mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
[mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
[mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
[mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
[mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
[mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
[mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
[mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
[mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
[mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
[mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
[mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
[mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
[mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
[mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
[mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
[mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] 	 building index using bulk method
[mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
[mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
[mongoimport error] fatal error: MSpanList_Insert
[mongoimport error] 
[mongoimport error] runtime stack:
[mongoimport error] runtime.throw(0x468d0b)
[mongoimport error] 	/usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
[mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
[mongoimport error] 	/usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
[mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
[mongoimport error] 	/usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
[mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
[mongoimport error] 	/usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
[mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
[mongoimport error] 	/usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
[mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
[mongoimport error] 	/usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
[mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
[mongoimport error] 	/usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
[mongoimport error] MCentral_Grow(0x48ea38, 0x0)
[mongoimport error] 	/usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
[mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
[mongoimport error] 	/usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
[mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
[mongoimport error] 	/usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
[mongoimport error] runtime.mcacheRefill_m()
[mongoimport error] 	/usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
[mongoimport error] runtime.onM(0x3917f8)
[mongoimport error] 	/usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
[mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
[mongoimport error] 	/usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
[mongoimport error] runtime.newobject(0x2f1120, 0x600000)
[mongoimport error] 	/usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
[mongoimport error] runtime.newG(0x29c0a)
[mongoimport error] 	/usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
[mongoimport error] allocg(0x478540)
[mongoimport error] 	/usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
[mongoimport error] runtime.malg(0x8000, 0x4785e0)
[mongoimport error] 	/usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
[mongoimport error] runtime.mpreinit(0x478bc0)
[mongoimport error] 	/usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
[mongoimport error] mcommoninit(0x478bc0)
[mongoimport error] 	/usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
[mongoimport error] runtime.schedinit()
[mongoimport error] 	/usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
[mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
[mongoimport error] 	/usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
[mongoimport error] 
22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
java.io.IOException: Could not start process: <EOF>
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
	at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
	at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
	at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
	at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
	at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
	at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
	at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
	at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
	at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
	at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
	at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
	at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
	at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
	at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
	at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
	at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
[mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
[mongod output] ]]></system-out>
  </testcase>
</testsuite>





> On Apr 17, 2018, at 13:15, Timothy Farkas <tf...@mapr.com> wrote:
> 
> Hi Charles,
> 
> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the submodule you cd'ed into.
> 
> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
> 
> Thanks,
> Tim
> 
> 
> 
> 
> 
> ________________________________
> From: Charles Givre <cg...@gmail.com>
> Sent: Tuesday, April 17, 2018 6:22:13 AM
> To: dev@drill.apache.org
> Subject: Drill Tests Fail on Local Machine
> 
> All,
> I’m trying to test some things in Drill and the unit tests for the MongoDB plugin are failing.   I would like to know:
> 1.  If it is possible to only run the unit tests for the components I am working on OR
> 2.  If it is possible to skip the MongoDB tests
> 3.  If it isn’t, does MongoDB need to be installed on my local machine in order for this test to pass?
> 
> Any advice would be appreciated.
> Thanks,
> — C


Re: Drill Tests Fail on Local Machine

Posted by Timothy Farkas <tf...@mapr.com>.
Hi Charles,

I just upgraded to osx 10.13.4 and the same issue started happening for me. I've created a jira here https://issues.apache.org/jira/browse/DRILL-6341 . Please follow it. Now that I can reproduce it, I'll try to have a fix asap.

Thanks,
Tim
________________________________
From: Timothy Farkas <tf...@mapr.com>
Sent: Wednesday, April 18, 2018 11:05:40 AM
To: dev@drill.apache.org
Subject: Re: Drill Tests Fail on Local Machine

Sigh, I'll keep fiddling with it then and try to reproduce on a Travis osx container. Also I just realized another way to run the build for everything except the mongo plugin.


mvn -pl '!contrib/storage-mongo' clean install

Thanks,
Tim
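[Editor's note: a couple of related invocations for completeness. The -pl '!module' exclusion syntax and Surefire's -Dtest property are standard Maven features; whether MongoTestSuit runs cleanly in isolation is an assumption.]

```shell
# Build the whole tree but leave the mongo plugin out of the reactor
# (the '!' exclusion syntax requires Maven 3.2.1 or later):
mvn -pl '!contrib/storage-mongo' clean install

# Run a single test class inside a module via Surefire's -Dtest property:
cd contrib/storage-mongo
mvn test -Dtest=MongoTestSuit
```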




________________________________
From: Charles Givre <cg...@gmail.com>
Sent: Tuesday, April 17, 2018 7:10:09 PM
To: dev@drill.apache.org
Subject: Re: Drill Tests Fail on Local Machine

HI Tim,
No joy :-(



-------------------------------------------------------------------------------
Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -696,139.146 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -696,139.147 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)

Output.


[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn11] end connection 127.0.0.1:64072 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn10] end connection 127.0.0.1:64052 (2 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn6] end connection 127.0.0.1:64054 (3 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn16] end connection 127.0.0.1:64047 (8 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn6] end connection 127.0.0.1:64053 (2 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn5] end connection 127.0.0.1:64044 (5 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn4] end connection 127.0.0.1:64032 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn3] end connection 127.0.0.1:64031 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn1] end connection 127.0.0.1:64029 (3 connections now open)
[mongod output] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -696,139.146 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -696,139.147 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)

2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64092 #20 (7 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64093 #9 (4 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64091 #8 (4 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I COMMAND  [conn9] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.450-0400 I FTDC     [conn9] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64095 #7 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn7] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.451-0400 I FTDC     [conn7] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.450-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn5] end connection 127.0.0.1:63953 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn4] end connection 127.0.0.1:63950 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64096 #15 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn4] end connection 127.0.0.1:63993 (2 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn5] end connection 127.0.0.1:63996 (2 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64097 #6 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-17T22:07:31.451-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-17T22:07:31.451-0400 I ASIO     [ReplicationExecutor] dropping unhealthy pooled connection to localhost:27023
[mongod output] 2018-04-17T22:07:31.451-0400 I ASIO     [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn6] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.451-0400 I FTDC     [conn6] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn11] end connection 127.0.0.1:64034 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn14] end connection 127.0.0.1:64039 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I COMMAND  [conn20] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.452-0400 I FTDC     [conn20] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.452-0400 I NETWORK  [conn7] end connection 127.0.0.1:64076 (1 connection now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I NETWORK  [conn8] end connection 127.0.0.1:64080 (0 connections now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-17T22:07:31.452-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-17T22:07:31.452-0400 I ASIO     [ReplicationExecutor] dropping unhealthy pooled connection to localhost:27023
[mongod output] 2018-04-17T22:07:31.452-0400 I ASIO     [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn15] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.452-0400 I FTDC     [conn15] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.453-0400 I REPL     [conn9] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.453-0400 I REPL     [conn7] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.453-0400 I STORAGE  [conn5] got request after shutdown()
[mongod output] 2018-04-17T22:07:31.454-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64098 #10 (1 connection now open)
[mongod output] 2018-04-17T22:07:31.454-0400 I COMMAND  [conn8] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.454-0400 I FTDC     [conn8] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.454-0400 I REPL     [conn6] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.454-0400 I COMMAND  [conn10] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.454-0400 I FTDC     [conn10] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.454-0400 I REPL     [conn15] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.455-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to localhost:27024; HostUnreachable End of file
[mongod output] 2018-04-17T22:07:31.455-0400 I REPL     [conn20] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.456-0400 I REPL     [conn8] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.456-0400 I CONTROL  [conn10] now exiting
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] closing listening socket: 5
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] closing listening socket: 6
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] removing socket file: /tmp/mongodb-27019.sock
[mongod output] 2018-04-17T22:07:31.457-0400 I NETWORK  [conn10] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:31.457-0400 I NETWORK  [conn10] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:31.457-0400 I STORAGE  [conn10] WiredTigerKVEngine shutting down
[mongod output]


2018-04-17T22:07:32.456-0400 I CONTROL  [conn9] now exiting
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] removing socket file: /tmp/mongodb-27022.sock
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn9] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn9] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.457-0400 I STORAGE  [conn9] WiredTigerKVEngine shutting down
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn5] end connection 127.0.0.1:64007 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn8] end connection 127.0.0.1:64069 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn7] end connection 127.0.0.1:64066 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.468-0400 I CONTROL  [conn7] now exiting
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] removing socket file: /tmp/mongodb-27021.sock
[mongod output] 2018-04-17T22:07:32.469-0400 I NETWORK  [conn7] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.469-0400 I NETWORK  [conn7] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.470-0400 I STORAGE  [conn7] WiredTigerKVEngine shutting down
[mongod output]
2018-04-17T22:07:32.470-0400 2018-04-17T22:07:32.666-0400 I STORAGE  [conn7] shutdown: removing fs lock...
[mongod output]
2018-04-17T22:07:32.734-0400 I CONTROL  [conn6] now exiting
[mongod output] 2018-04-17T22:07:32.734-0400 I CONTROL  [conn8] now exiting
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn6] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn6] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn8] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] removing socket file: /tmp/mongodb-27024.sock
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] removing socket file: /tmp/mongodb-27025.sock
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] end connection 127.0.0.1:64022 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn2] end connection 127.0.0.1:63944 (1 connection now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn7] end connection 127.0.0.1:64025 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn5] end connection 127.0.0.1:63964 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.736-0400 I STORAGE  [conn8] WiredTigerKVEngine shutting down
[mongod output] 2018-04-17T22:07:32.736-0400 I STORAGE  [conn6] WiredTigerKVEngine shutting down
[mongod output]


Results :

Tests in error:
  MongoTestSuit.initMongo:231 » IO Could not start process: <EOF>

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 37.242 s
[INFO] Finished at: 2018-04-17T22:07:33-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
[ERROR]
[ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-fix/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]




> On Apr 17, 2018, at 19:59, Timothy Farkas <tf...@mapr.com> wrote:
>
> Hi Charles,
>
> Can you try running the mongodb tests using this branch https://github.com/ilooner/drill/tree/mongodb-test-fix ?
>
> I can't reproduce the issue on my machine, but others reported setting the LC_ALL environment variable and upgrading the version of the test library resolved the issue for them. The branch includes those fixes.
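The workaround Tim describes can be sketched as a short shell snippet (assumptions: the `en_US.UTF-8` locale is installed on the machine, and the mongo storage tests live in the `contrib/storage-mongo` module of the Drill source tree):

```shell
# Export LC_ALL so the test JVM -- and the embedded mongod process it
# spawns -- inherit an explicit UTF-8 locale instead of an empty LC_ALL.
export LC_ALL=en_US.UTF-8
echo "$LC_ALL"    # prints en_US.UTF-8

# Then re-run just the mongo storage tests from the source root, e.g.:
#   mvn test -pl contrib/storage-mongo
```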
>
> Thanks,
> Tim
>
> ________________________________
> From: Charles Givre <cgivre@gmail.com>
> Sent: Tuesday, April 17, 2018 3:54:18 PM
> To: dev@drill.apache.org
> Subject: Re: Drill Tests Fail on Local Machine
>
> Here you go
>
>
> cgivre$ locale
> LANG="en_US.UTF-8"
> LC_COLLATE="en_US.UTF-8"
> LC_CTYPE="en_US.UTF-8"
> LC_MESSAGES="en_US.UTF-8"
> LC_MONETARY="en_US.UTF-8"
> LC_NUMERIC="en_US.UTF-8"
> LC_TIME="en_US.UTF-8"
> LC_ALL=
>
> Thank you VERY much for your help!
>
>
>> On Apr 17, 2018, at 18:53, Timothy Farkas <tf...@mapr.com> wrote:
>>
>> Hi Charles,
>>
>> It looks like some people had issues with the MongoDB test library when their locale was not set properly. Can you provide the output of the locale command on your mac.
>>
>> Thanks,
>> Tim
>> ________________________________
>> From: Charles Givre <cgivre@gmail.com>
>> Sent: Tuesday, April 17, 2018 2:02:11 PM
>> To: dev@drill.apache.org
>> Subject: Re: Drill Tests Fail on Local Machine
>>
>> Hi Tim,
>> Thanks for getting back with me. I’m on a Mac with High Sierra.
>>
>> Here’s my Java info:
>> java -version
>> java version "1.8.0_65"
>> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
>> Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
>>
>> Thank you very much for your help!
>> — C
>>
>>
>>> On Apr 17, 2018, at 16:56, Timothy Farkas <tfarkas@mapr.com> wrote:
>>>
>>> Hi Charles,
>>>
>>> According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple possibilities. Are you running 32-bit java or are you developing on Windows?
>>>
>>> Thanks,
>>> Tim
>>>
>>> ________________________________
>>> From: Charles Givre <cgivre@gmail.com>
>>> Sent: Tuesday, April 17, 2018 10:25:17 AM
>>> To: dev@drill.apache.org
>>> Subject: Re: Drill Tests Fail on Local Machine
>>>
>>> Hi Tim,
>>> Here’s the error:
>>>
>>> Results :
>>>
>>> Tests in error:
>>> MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.
>>>
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
>>>
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] BUILD FAILURE
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] Total time: 39.848 s
>>> [INFO] Finished at: 2018-04-17T13:24:31-04:00
>>> [INFO] ------------------------------------------------------------------------
>>> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
>>> [ERROR]
>>> [ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
>>> [ERROR] -> [Help 1]
>>> [ERROR]
>>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>> [ERROR]
>>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
>>> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>>>
>>>
>>>
>>> -------------------------------------------------------------------------------
>>> Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
>>> -------------------------------------------------------------------------------
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
>>> org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
>>> java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>>>
>>>
>>>
>>>
>>> <?xml version="1.0" encoding="UTF-8"?>
>>> <testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
>>> <properties>
>>>  <property name="os.detected.version" value="10.13"/>
>>>  <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
>>>  <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
>>>  <property name="java.vm.version" value="25.65-b01"/>
>>>  <property name="gopherProxySet" value="false"/>
>>>  <property name="java.vm.vendor" value="Oracle Corporation"/>
>>>  <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>>  <property name="java.vendor.url" value="http://java.oracle.com/"/>
>>>  <property name="os.detected.version.minor" value="13"/>
>>>  <property name="path.separator" value=":"/>
>>>  <property name="guice.disable.misplaced.annotation.check" value="true"/>
>>>  <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
>>>  <property name="file.encoding.pkg" value="sun.io"/>
>>>  <property name="user.country" value="US"/>
>>>  <property name="sun.java.launcher" value="SUN_STANDARD"/>
>>>  <property name="sun.os.patch.level" value="unknown"/>
>>>  <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
>>>  <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>>  <property name="java.runtime.version" value="1.8.0_65-b17"/>
>>>  <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
>>>  <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
>>>  <property name="os.arch" value="x86_64"/>
>>>  <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
>>>  <property name="line.separator" value="&#10;"/>
>>>  <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
>>>  <property name="os.detected.version.major" value="10"/>
>>>  <property name="os.name" value="Mac OS X"/>
>>>  <property name="os.detected.classifier" value="osx-x86_64"/>
>>>  <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
>>>  <property name="sun.jnu.encoding" value="UTF-8"/>
>>>  <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
>>>  <property name="os.detected.arch" value="x86_64"/>
>>>  <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
>>>  <property name="java.specification.name" value="Java Platform API Specification"/>
>>>  <property name="java.class.version" value="52.0"/>
>>>  <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
>>>  <property name="os.detected.name" value="osx"/>
>>>  <property name="os.version" value="10.13.4"/>
>>>  <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
>>>  <property name="user.home" value="/Users/cgivre"/>
>>>  <property name="user.timezone" value="America/New_York"/>
>>>  <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
>>>  <property name="java.specification.version" value="1.8"/>
>>>  <property name="file.encoding" value="UTF-8"/>
>>>  <property name="user.name" value="cgivre"/>
>>>  <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
>>>  <property name="java.vm.specification.version" value="1.8"/>
>>>  <property name="sun.arch.data.model" value="64"/>
>>>  <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
>>>  <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
>>>  <property name="java.specification.vendor" value="Oracle Corporation"/>
>>>  <property name="user.language" value="en"/>
>>>  <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
>>>  <property name="java.vm.info" value="mixed mode"/>
>>>  <property name="java.version" value="1.8.0_65"/>
>>>  <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
>>>  <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
>>>  <property name="java.vendor" value="Oracle Corporation"/>
>>>  <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
>>>  <property name="file.separator" value="/"/>
>>>  <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
>>>  <property name="sun.cpu.endian" value="little"/>
>>>  <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
>>>  <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
>>>  <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="sun.cpu.isalist" value=""/>
>>> </properties>
>>> <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
>>>  <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>>> ]]></error>
>>>  <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
>>> [mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>>> [mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>>> [mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
>>> [mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
>>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>>> [mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>>> [mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>>> [mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>>> [mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>>> [mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>>> [mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>>> [mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>>> [mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>>> [mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
>>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
>>> [mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
>>> [mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
>>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
>>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
>>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
>>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
>>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
>>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>>> [mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>>> [mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>>> [mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>>> [mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>>> [mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>>> [mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>>> [mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>>> [mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>>> [mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
>>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
>>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
>>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
>>> [mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
>>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
>>> [mongoimport error] fatal error: MSpanList_Insert
>>> [mongoimport error]
>>> [mongoimport error] runtime stack:
>>> [mongoimport error] runtime.throw(0x468d0b)
>>> [mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
>>> [mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
>>> [mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
>>> [mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
>>> [mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
>>> [mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
>>> [mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
>>> [mongoimport error] MCentral_Grow(0x48ea38, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
>>> [mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
>>> [mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
>>> [mongoimport error] runtime.mcacheRefill_m()
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
>>> [mongoimport error] runtime.onM(0x3917f8)
>>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
>>> [mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
>>> [mongoimport error] runtime.newobject(0x2f1120, 0x600000)
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
>>> [mongoimport error] runtime.newG(0x29c0a)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
>>> [mongoimport error] allocg(0x478540)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
>>> [mongoimport error] runtime.malg(0x8000, 0x4785e0)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
>>> [mongoimport error] runtime.mpreinit(0x478bc0)
>>> [mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
>>> [mongoimport error] mcommoninit(0x478bc0)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
>>> [mongoimport error] runtime.schedinit()
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
>>> [mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
>>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
>>> [mongoimport error]
>>> 22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
>>> java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>>      at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>>      at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
>>>      at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
>>>      at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>>      at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>>      at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
>>>      at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
>>>      at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
>>>      at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>>      at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>>>      at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
>>> 2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
>>> [mongod output] ]]></system-out>
>>> </testcase>
>>> </testsuite>
>>>
>>>
>>>
>>>
>>>
>>>> On Apr 17, 2018, at 13:15, Timothy Farkas <tfarkas@mapr.com <ma...@mapr.com>> wrote:
>>>>
>>>> Hi Charles,
>>>>
>>>> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the submodule you cd'ed into.
>>>>
>>>> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
>>>>
>>>> Thanks,
>>>> Tim
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> ________________________________
>>>> From: Charles Givre <cg...@gmail.com>
>>>> Sent: Tuesday, April 17, 2018 6:22:13 AM
>>>> To: dev@drill.apache.org
>>>> Subject: Drill Tests Fail on Local Machine
>>>>
>>>> All,
>>>> I’m trying to test some things in Drill and the unit tests for the MongoDB storage plugin are failing. I would like to know:
>>>> 1.  If it is possible to only run the unit tests for the components I am working on OR
>>>> 2.  If it is possible to skip the MongoDB tests
>>>> 3.  If it isn’t, does MongoDB need to be installed on my local machine in order for these tests to pass
>>>>
>>>> Any advice would be appreciated.
>>>> Thanks,
>>>> — C


Re: Drill Tests Fail on Local Machine

Posted by Timothy Farkas <tf...@mapr.com>.
Sigh, I'll keep fiddling with it then and try to reproduce it on a Travis OS X container. Also, I just realized there is another way to run the build for everything except the mongo plugin:


mvn -pl '!contrib/storage-mongo' clean install
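A couple of related invocations may also be useful here (a sketch only — the module path is taken from the standard Drill source tree, and the flags are stock Maven/Surefire options):

```shell
# Build everything except the MongoDB storage plugin.
# Quote the '!' so an interactive bash shell doesn't expand it as history.
mvn -pl '!contrib/storage-mongo' clean install

# Or build the whole tree but skip running tests entirely:
mvn clean install -DskipTests

# Or run only the mongo submodule's build and tests from the repo root:
mvn -pl contrib/storage-mongo clean install
```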

Thanks,
Tim




________________________________
From: Charles Givre <cg...@gmail.com>
Sent: Tuesday, April 17, 2018 7:10:09 PM
To: dev@drill.apache.org
Subject: Re: Drill Tests Fail on Local Machine

HI Tim,
No joy :-(



-------------------------------------------------------------------------------
Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -696,139.146 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -696,139.147 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)

Output.


[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn11] end connection 127.0.0.1:64072 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn10] end connection 127.0.0.1:64052 (2 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn6] end connection 127.0.0.1:64054 (3 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn16] end connection 127.0.0.1:64047 (8 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn6] end connection 127.0.0.1:64053 (2 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn5] end connection 127.0.0.1:64044 (5 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn4] end connection 127.0.0.1:64032 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn3] end connection 127.0.0.1:64031 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn1] end connection 127.0.0.1:64029 (3 connections now open)
[mongod output] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -696,139.146 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -696,139.147 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)

2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64092 #20 (7 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64093 #9 (4 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64091 #8 (4 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I COMMAND  [conn9] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.450-0400 I FTDC     [conn9] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64095 #7 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn7] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.451-0400 I FTDC     [conn7] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.450-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn5] end connection 127.0.0.1:63953 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn4] end connection 127.0.0.1:63950 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64096 #15 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn4] end connection 127.0.0.1:63993 (2 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn5] end connection 127.0.0.1:63996 (2 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64097 #6 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-17T22:07:31.451-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-17T22:07:31.451-0400 I ASIO     [ReplicationExecutor] dropping unhealthy pooled connection to localhost:27023
[mongod output] 2018-04-17T22:07:31.451-0400 I ASIO     [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn6] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.451-0400 I FTDC     [conn6] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn11] end connection 127.0.0.1:64034 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn14] end connection 127.0.0.1:64039 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I COMMAND  [conn20] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.452-0400 I FTDC     [conn20] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.452-0400 I NETWORK  [conn7] end connection 127.0.0.1:64076 (1 connection now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I NETWORK  [conn8] end connection 127.0.0.1:64080 (0 connections now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-17T22:07:31.452-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-17T22:07:31.452-0400 I ASIO     [ReplicationExecutor] dropping unhealthy pooled connection to localhost:27023
[mongod output] 2018-04-17T22:07:31.452-0400 I ASIO     [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn15] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.452-0400 I FTDC     [conn15] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.453-0400 I REPL     [conn9] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.453-0400 I REPL     [conn7] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.453-0400 I STORAGE  [conn5] got request after shutdown()
[mongod output] 2018-04-17T22:07:31.454-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64098 #10 (1 connection now open)
[mongod output] 2018-04-17T22:07:31.454-0400 I COMMAND  [conn8] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.454-0400 I FTDC     [conn8] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.454-0400 I REPL     [conn6] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.454-0400 I COMMAND  [conn10] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.454-0400 I FTDC     [conn10] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.454-0400 I REPL     [conn15] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.455-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to localhost:27024; HostUnreachable End of file
[mongod output] 2018-04-17T22:07:31.455-0400 I REPL     [conn20] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.456-0400 I REPL     [conn8] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.456-0400 I CONTROL  [conn10] now exiting
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] closing listening socket: 5
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] closing listening socket: 6
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] removing socket file: /tmp/mongodb-27019.sock
[mongod output] 2018-04-17T22:07:31.457-0400 I NETWORK  [conn10] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:31.457-0400 I NETWORK  [conn10] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:31.457-0400 I STORAGE  [conn10] WiredTigerKVEngine shutting down
[mongod output]


2018-04-17T22:07:32.456-0400 I CONTROL  [conn9] now exiting
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] removing socket file: /tmp/mongodb-27022.sock
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn9] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn9] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.457-0400 I STORAGE  [conn9] WiredTigerKVEngine shutting down
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn5] end connection 127.0.0.1:64007 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn8] end connection 127.0.0.1:64069 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn7] end connection 127.0.0.1:64066 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.468-0400 I CONTROL  [conn7] now exiting
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] removing socket file: /tmp/mongodb-27021.sock
[mongod output] 2018-04-17T22:07:32.469-0400 I NETWORK  [conn7] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.469-0400 I NETWORK  [conn7] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.470-0400 I STORAGE  [conn7] WiredTigerKVEngine shutting down
[mongod output]
2018-04-17T22:07:32.470-0400 2018-04-17T22:07:32.666-0400 I STORAGE  [conn7] shutdown: removing fs lock...
[mongod output]
2018-04-17T22:07:32.734-0400 I CONTROL  [conn6] now exiting
[mongod output] 2018-04-17T22:07:32.734-0400 I CONTROL  [conn8] now exiting
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn6] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn6] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn8] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] removing socket file: /tmp/mongodb-27024.sock
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] removing socket file: /tmp/mongodb-27025.sock
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] end connection 127.0.0.1:64022 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn2] end connection 127.0.0.1:63944 (1 connection now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn7] end connection 127.0.0.1:64025 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn5] end connection 127.0.0.1:63964 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.736-0400 I STORAGE  [conn8] WiredTigerKVEngine shutting down
[mongod output] 2018-04-17T22:07:32.736-0400 I STORAGE  [conn6] WiredTigerKVEngine shutting down
[mongod output]


Results :

Tests in error:
  MongoTestSuit.initMongo:231 » IO Could not start process: <EOF>

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 37.242 s
[INFO] Finished at: 2018-04-17T22:07:33-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
[ERROR]
[ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-fix/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]



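For reference, the locale workaround discussed in this thread comes down to exporting a UTF-8 locale before the test run. A sketch (the mvn invocation is illustrative, not taken verbatim from the thread):

```shell
# Workaround reported for the flapdoodle embedmongo library: mongoimport can
# fail with "Could not start process: <EOF>" when LC_ALL is unset.
# Export a UTF-8 locale before re-running the tests.
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8

# Then re-run just the mongo storage module tests, e.g.:
#   mvn clean install -pl contrib/storage-mongo
echo "LC_ALL=$LC_ALL"
```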

> On Apr 17, 2018, at 19:59, Timothy Farkas <tf...@mapr.com> wrote:
>
> Hi Charles,
>
> Can you try running the mongodb tests using this branch https://github.com/ilooner/drill/tree/mongodb-test-fix ?
>
> I can't reproduce the issue on my machine, but others reported setting the LC_ALL environment variable and upgrading the version of the test library resolved the issue for them. The branch includes those fixes.
>
> Thanks,
> Tim
>
> ________________________________
> From: Charles Givre <cgivre@gmail.com>
> Sent: Tuesday, April 17, 2018 3:54:18 PM
> To: dev@drill.apache.org
> Subject: Re: Drill Tests Fail on Local Machine
>
> Here you go
>
>
> cgivre$ locale
> LANG="en_US.UTF-8"
> LC_COLLATE="en_US.UTF-8"
> LC_CTYPE="en_US.UTF-8"
> LC_MESSAGES="en_US.UTF-8"
> LC_MONETARY="en_US.UTF-8"
> LC_NUMERIC="en_US.UTF-8"
> LC_TIME="en_US.UTF-8"
> LC_ALL=
>
> Thank you VERY much for your help!
>
>
>> On Apr 17, 2018, at 18:53, Timothy Farkas <tf...@mapr.com> wrote:
>>
>> Hi Charles,
>>
>> It looks like some people had issues with the MongoDB test library when their locale was not set properly. Can you provide the output of the locale command on your mac.
>>
>> Thanks,
>> Tim
>> ________________________________
>> From: Charles Givre <cgivre@gmail.com>
>> Sent: Tuesday, April 17, 2018 2:02:11 PM
>> To: dev@drill.apache.org
>> Subject: Re: Drill Tests Fail on Local Machine
>>
>> Hi Tim,
>> Thanks for getting back with me. I’m on a Mac with High Sierra.
>>
>> Here’s my Java info:
>> java -version
>> java version "1.8.0_65"
>> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
>> Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
>>
>> Thank you very much for your help!
>> — C
>>
>>
>>> On Apr 17, 2018, at 16:56, Timothy Farkas <tfarkas@mapr.com> wrote:
>>>
>>> Hi Charles,
>>>
>>> According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple possibilities. Are you running 32-bit java or are you developing on Windows?
>>>
>>> Thanks,
>>> Tim
>>>
>>> ________________________________
>>> From: Charles Givre <cgivre@gmail.com>
>>> Sent: Tuesday, April 17, 2018 10:25:17 AM
>>> To: dev@drill.apache.org <ma...@drill.apache.org> <mailto:dev@drill.apache.org <ma...@drill.apache.org>> <mailto:dev@drill.apache.org <ma...@drill.apache.org><mailto:dev@drill.apache.org <ma...@drill.apache.org>>>
>>> Subject: Re: Drill Tests Fail on Local Machine
>>>
>>> Hi Tim,
>>> Here’s the error:
>>>
>>> Results :
>>>
>>> Tests in error:
>>> MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.
>>>
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
>>>
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] BUILD FAILURE
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] Total time: 39.848 s
>>> [INFO] Finished at: 2018-04-17T13:24:31-04:00
>>> [INFO] ------------------------------------------------------------------------
>>> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
>>> [ERROR]
>>> [ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
>>> [ERROR] -> [Help 1]
>>> [ERROR]
>>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>> [ERROR]
>>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
>>> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>>>
>>>
>>>
>>> -------------------------------------------------------------------------------
>>> Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
>>> -------------------------------------------------------------------------------
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
>>> org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
>>> java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>>>
>>>
>>>
>>>
>>> <?xml version="1.0" encoding="UTF-8"?>
>>> <testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
>>> <properties>
>>>  <property name="os.detected.version" value="10.13"/>
>>>  <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
>>>  <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
>>>  <property name="java.vm.version" value="25.65-b01"/>
>>>  <property name="gopherProxySet" value="false"/>
>>>  <property name="java.vm.vendor" value="Oracle Corporation"/>
>>>  <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>>  <property name="java.vendor.url" value="http://java.oracle.com/"/>
>>>  <property name="os.detected.version.minor" value="13"/>
>>>  <property name="path.separator" value=":"/>
>>>  <property name="guice.disable.misplaced.annotation.check" value="true"/>
>>>  <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
>>>  <property name="file.encoding.pkg" value="sun.io"/>
>>>  <property name="user.country" value="US"/>
>>>  <property name="sun.java.launcher" value="SUN_STANDARD"/>
>>>  <property name="sun.os.patch.level" value="unknown"/>
>>>  <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
>>>  <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>>  <property name="java.runtime.version" value="1.8.0_65-b17"/>
>>>  <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
>>>  <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
>>>  <property name="os.arch" value="x86_64"/>
>>>  <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
>>>  <property name="line.separator" value="&#10;"/>
>>>  <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
>>>  <property name="os.detected.version.major" value="10"/>
>>>  <property name="os.name" value="Mac OS X"/>
>>>  <property name="os.detected.classifier" value="osx-x86_64"/>
>>>  <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
>>>  <property name="sun.jnu.encoding" value="UTF-8"/>
>>>  <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
>>>  <property name="os.detected.arch" value="x86_64"/>
>>>  <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
>>>  <property name="java.specification.name" value="Java Platform API Specification"/>
>>>  <property name="java.class.version" value="52.0"/>
>>>  <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
>>>  <property name="os.detected.name" value="osx"/>
>>>  <property name="os.version" value="10.13.4"/>
>>>  <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
>>>  <property name="user.home" value="/Users/cgivre"/>
>>>  <property name="user.timezone" value="America/New_York"/>
>>>  <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
>>>  <property name="java.specification.version" value="1.8"/>
>>>  <property name="file.encoding" value="UTF-8"/>
>>>  <property name="user.name" value="cgivre"/>
>>>  <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
>>>  <property name="java.vm.specification.version" value="1.8"/>
>>>  <property name="sun.arch.data.model" value="64"/>
>>>  <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
>>>  <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
>>>  <property name="java.specification.vendor" value="Oracle Corporation"/>
>>>  <property name="user.language" value="en"/>
>>>  <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
>>>  <property name="java.vm.info" value="mixed mode"/>
>>>  <property name="java.version" value="1.8.0_65"/>
>>>  <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
>>>  <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
>>>  <property name="java.vendor" value="Oracle Corporation"/>
>>>  <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
>>>  <property name="file.separator" value="/"/>
>>>  <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
>>>  <property name="sun.cpu.endian" value="little"/>
>>>  <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
>>>  <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
>>>  <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="sun.cpu.isalist" value=""/>
>>> </properties>
>>> <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
>>>  <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>>> ]]></error>
>>>  <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
>>> [mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>>> [mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>>> [mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
>>> [mongod output] note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
>>> [mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
>>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>>> [mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>>> [mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>>> [mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>>> [mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>>> [mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>>> [mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>>> [mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>>> [mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>>> [mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
>>> [mongod output] note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
>>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
>>> [mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
>>> [mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
>>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
>>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
>>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
>>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
>>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
>>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>>> [mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>>> [mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>>> [mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>>> [mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>>> [mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>>> [mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>>> [mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>>> [mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>>> [mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
>>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
>>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
>>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
>>> [mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
>>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
>>> [mongoimport error] fatal error: MSpanList_Insert
>>> [mongoimport error]
>>> [mongoimport error] runtime stack:
>>> [mongoimport error] runtime.throw(0x468d0b)
>>> [mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
>>> [mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
>>> [mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
>>> [mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
>>> [mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
>>> [mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
>>> [mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
>>> [mongoimport error] MCentral_Grow(0x48ea38, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
>>> [mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
>>> [mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
>>> [mongoimport error] runtime.mcacheRefill_m()
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
>>> [mongoimport error] runtime.onM(0x3917f8)
>>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
>>> [mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
>>> [mongoimport error] runtime.newobject(0x2f1120, 0x600000)
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
>>> [mongoimport error] runtime.newG(0x29c0a)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
>>> [mongoimport error] allocg(0x478540)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
>>> [mongoimport error] runtime.malg(0x8000, 0x4785e0)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
>>> [mongoimport error] runtime.mpreinit(0x478bc0)
>>> [mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
>>> [mongoimport error] mcommoninit(0x478bc0)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
>>> [mongoimport error] runtime.schedinit()
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
>>> [mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
>>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
>>> [mongoimport error]
>>> 22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
>>> java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>>      at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>>      at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
>>>      at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
>>>      at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>>      at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>>      at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
>>>      at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
>>>      at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
>>>      at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>>      at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>>>      at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
>>> 2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
>>> [mongod output] ]]></system-out>
>>> </testcase>
>>> </testsuite>
>>>
>>>> On Apr 17, 2018, at 13:15, Timothy Farkas <tfarkas@mapr.com> wrote:
>>>>
>>>> Hi Charles,
>>>>
>>>> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the submodule you cd'ed into.
>>>>
>>>> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
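The workflow Tim describes can be sketched as follows. This is a hedged example, not verified against this exact checkout: the module path `contrib/storage-mongo` is assumed from the surefire-reports path in the error output, and it presumes you run from the Drill source root.

```shell
# Build and unit-test only one submodule (path assumed from the error log)
cd contrib/storage-mongo
mvn clean install
```

Running the build from inside the submodule directory means Surefire only picks up that module's test classes, which is usually much faster than a full-tree build.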
>>>>
>>>> Thanks,
>>>> Tim
>>>>
>>>>
>>>> ________________________________
>>>> From: Charles Givre <cg...@gmail.com>
>>>> Sent: Tuesday, April 17, 2018 6:22:13 AM
>>>> To: dev@drill.apache.org
>>>> Subject: Drill Tests Fail on Local Machine
>>>>
>>>> All,
>>>> I’m trying to test some things in Drill and the unit tests for the MongoDB storage plugin are failing.  I would like to know:
>>>> 1.  If it is possible to run only the unit tests for the components I am working on, OR
>>>> 2.  If it is possible to skip the MongoDB tests, OR
>>>> 3.  If neither is possible, does MongoDB need to be installed on my local machine in order for these tests to pass?
>>>>
>>>> Any advice would be appreciated.
>>>> Thanks,
>>>> — C
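The second option above (skipping the MongoDB tests) can typically be achieved with standard Maven flags. This is a sketch under assumptions: the module path `contrib/storage-mongo` is inferred from the error output, and the `!module` exclusion syntax requires Maven 3.2.1 or later.

```shell
# Skip running all unit tests during the build
mvn clean install -DskipTests

# Or exclude just the Mongo storage module from a full build
# (Maven 3.2.1+; quote the argument so the shell does not expand '!')
mvn clean install -pl '!contrib/storage-mongo'
```

`-DskipTests` still compiles the test sources, while `-pl '!…'` drops the module from the reactor entirely, so neither requires a local MongoDB installation.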


Re: Drill Tests Fail on Local Machine

Posted by Charles Givre <cg...@gmail.com>.
Hi Tim, 
No joy :-(



-------------------------------------------------------------------------------
Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -696,139.146 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -696,139.147 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)

Output:


[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn11] end connection 127.0.0.1:64072 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn10] end connection 127.0.0.1:64052 (2 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn6] end connection 127.0.0.1:64054 (3 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn16] end connection 127.0.0.1:64047 (8 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn6] end connection 127.0.0.1:64053 (2 connections now open)
[mongod output] 2018-04-17T22:07:30.817-0400 I NETWORK  [conn5] end connection 127.0.0.1:64044 (5 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn4] end connection 127.0.0.1:64032 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn3] end connection 127.0.0.1:64031 (4 connections now open)
[mongod output] 2018-04-17T22:07:30.818-0400 I NETWORK  [conn1] end connection 127.0.0.1:64029 (3 connections now open)
[mongod output] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -696,139.146 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -696,139.147 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
	at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
	at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
	at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
	at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
	at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
	at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
	at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
	at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)

2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64092 #20 (7 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64093 #9 (4 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64091 #8 (4 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I COMMAND  [conn9] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.450-0400 I FTDC     [conn9] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64095 #7 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn7] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.451-0400 I FTDC     [conn7] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.450-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn5] end connection 127.0.0.1:63953 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn4] end connection 127.0.0.1:63950 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.450-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64096 #15 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn4] end connection 127.0.0.1:63993 (2 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn5] end connection 127.0.0.1:63996 (2 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64097 #6 (3 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-17T22:07:31.451-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-17T22:07:31.451-0400 I ASIO     [ReplicationExecutor] dropping unhealthy pooled connection to localhost:27023
[mongod output] 2018-04-17T22:07:31.451-0400 I ASIO     [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn6] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.451-0400 I FTDC     [conn6] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn11] end connection 127.0.0.1:64034 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.451-0400 I NETWORK  [conn14] end connection 127.0.0.1:64039 (6 connections now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I COMMAND  [conn20] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.452-0400 I FTDC     [conn20] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.452-0400 I NETWORK  [conn7] end connection 127.0.0.1:64076 (1 connection now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I NETWORK  [conn8] end connection 127.0.0.1:64080 (0 connections now open)
[mongod output] 2018-04-17T22:07:31.452-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-17T22:07:31.452-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-17T22:07:31.452-0400 I ASIO     [ReplicationExecutor] dropping unhealthy pooled connection to localhost:27023
[mongod output] 2018-04-17T22:07:31.452-0400 I ASIO     [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
[mongod output] 2018-04-17T22:07:31.451-0400 I COMMAND  [conn15] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.452-0400 I FTDC     [conn15] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.453-0400 I REPL     [conn9] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.453-0400 I REPL     [conn7] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.453-0400 I STORAGE  [conn5] got request after shutdown()
[mongod output] 2018-04-17T22:07:31.454-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:64098 #10 (1 connection now open)
[mongod output] 2018-04-17T22:07:31.454-0400 I COMMAND  [conn8] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.454-0400 I FTDC     [conn8] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.454-0400 I REPL     [conn6] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.454-0400 I COMMAND  [conn10] terminating, shutdown command received
[mongod output] 2018-04-17T22:07:31.454-0400 I FTDC     [conn10] Shutting down full-time diagnostic data capture
[mongod output] 2018-04-17T22:07:31.454-0400 I REPL     [conn15] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.455-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to localhost:27024; HostUnreachable End of file
[mongod output] 2018-04-17T22:07:31.455-0400 I REPL     [conn20] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.456-0400 I REPL     [conn8] Stopping replication applier threads
[mongod output] 2018-04-17T22:07:31.456-0400 I CONTROL  [conn10] now exiting
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] closing listening socket: 5
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] closing listening socket: 6
[mongod output] 2018-04-17T22:07:31.456-0400 I NETWORK  [conn10] removing socket file: /tmp/mongodb-27019.sock
[mongod output] 2018-04-17T22:07:31.457-0400 I NETWORK  [conn10] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:31.457-0400 I NETWORK  [conn10] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:31.457-0400 I STORAGE  [conn10] WiredTigerKVEngine shutting down
[mongod output] 


2018-04-17T22:07:32.456-0400 I CONTROL  [conn9] now exiting
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.456-0400 I NETWORK  [conn9] removing socket file: /tmp/mongodb-27022.sock
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn9] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn9] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.457-0400 I STORAGE  [conn9] WiredTigerKVEngine shutting down
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn5] end connection 127.0.0.1:64007 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn8] end connection 127.0.0.1:64069 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.457-0400 I NETWORK  [conn7] end connection 127.0.0.1:64066 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.468-0400 I CONTROL  [conn7] now exiting
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.468-0400 I NETWORK  [conn7] removing socket file: /tmp/mongodb-27021.sock
[mongod output] 2018-04-17T22:07:32.469-0400 I NETWORK  [conn7] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.469-0400 I NETWORK  [conn7] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.470-0400 I STORAGE  [conn7] WiredTigerKVEngine shutting down
[mongod output] 
2018-04-17T22:07:32.470-0400 2018-04-17T22:07:32.666-0400 I STORAGE  [conn7] shutdown: removing fs lock...
[mongod output] 
2018-04-17T22:07:32.734-0400 I CONTROL  [conn6] now exiting
[mongod output] 2018-04-17T22:07:32.734-0400 I CONTROL  [conn8] now exiting
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn6] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn6] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.734-0400 I NETWORK  [conn8] shutdown: going to close listening sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] closing listening socket: 5
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] closing listening socket: 6
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] removing socket file: /tmp/mongodb-27024.sock
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] removing socket file: /tmp/mongodb-27025.sock
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn8] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] shutdown: going to flush diaglog...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] shutdown: going to close sockets...
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn6] end connection 127.0.0.1:64022 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn2] end connection 127.0.0.1:63944 (1 connection now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn7] end connection 127.0.0.1:64025 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.735-0400 I NETWORK  [conn5] end connection 127.0.0.1:63964 (3 connections now open)
[mongod output] 2018-04-17T22:07:32.736-0400 I STORAGE  [conn8] WiredTigerKVEngine shutting down
[mongod output] 2018-04-17T22:07:32.736-0400 I STORAGE  [conn6] WiredTigerKVEngine shutting down
[mongod output] 


Results :

Tests in error: 
  MongoTestSuit.initMongo:231 » IO Could not start process: <EOF>

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 37.242 s
[INFO] Finished at: 2018-04-17T22:07:33-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
[ERROR] 
[ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-fix/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
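For anyone hitting the same failure, a quick sketch of the LC_ALL workaround Tim suggested (the exact Maven module path and goal are assumed here, not confirmed):

```shell
# Set LC_ALL so the embedded MongoDB test library sees a proper locale
# before spawning mongod/mongoimport, then re-run only the mongo storage
# module tests. The mvn invocation below is an assumption about the
# module layout, left commented out.
export LC_ALL=en_US.UTF-8

# From the drill source root:
# mvn -pl contrib/storage-mongo test

echo "$LC_ALL"
```

Setting LC_ALL only affects the current shell, so it needs to be exported in the same session that launches Maven.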




> On Apr 17, 2018, at 19:59, Timothy Farkas <tf...@mapr.com> wrote:
> 
> Hi Charles,
> 
> Can you try running the mongodb tests using this branch https://github.com/ilooner/drill/tree/mongodb-test-fix ?
> 
> I can't reproduce the issue on my machine, but others reported that setting the LC_ALL environment variable and upgrading the version of the test library resolved the issue for them. The branch includes those fixes.
> 
> Thanks,
> Tim
> 
> ________________________________
> From: Charles Givre <cgivre@gmail.com>
> Sent: Tuesday, April 17, 2018 3:54:18 PM
> To: dev@drill.apache.org
> Subject: Re: Drill Tests Fail on Local Machine
> 
> Here you go
> 
> 
> cgivre$ locale
> LANG="en_US.UTF-8"
> LC_COLLATE="en_US.UTF-8"
> LC_CTYPE="en_US.UTF-8"
> LC_MESSAGES="en_US.UTF-8"
> LC_MONETARY="en_US.UTF-8"
> LC_NUMERIC="en_US.UTF-8"
> LC_TIME="en_US.UTF-8"
> LC_ALL=
> 
> Thank you VERY much for your help!
> 
> 
>> On Apr 17, 2018, at 18:53, Timothy Farkas <tf...@mapr.com> wrote:
>> 
>> Hi Charles,
>> 
>> It looks like some people had issues with the MongoDB test library when their locale was not set properly. Can you provide the output of the locale command on your Mac?
>> 
>> Thanks,
>> Tim
>> ________________________________
>> From: Charles Givre <cgivre@gmail.com>
>> Sent: Tuesday, April 17, 2018 2:02:11 PM
>> To: dev@drill.apache.org
>> Subject: Re: Drill Tests Fail on Local Machine
>> 
>> Hi Tim,
>> Thanks for getting back to me. I’m on a Mac running High Sierra.
>> 
>> Here’s my Java info:
>> java -version
>> java version "1.8.0_65"
>> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
>> Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
>> 
>> Thank you very much for your help!
>> — C
>> 
>> 
>>> On Apr 17, 2018, at 16:56, Timothy Farkas <tfarkas@mapr.com <ma...@mapr.com>> wrote:
>>> 
>>> Hi Charles,
>>> 
>>> According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple of possibilities. Are you running 32-bit java or are you developing on Windows?
>>> 
>>> Thanks,
>>> Tim
>>> 
>>> ________________________________
>>> From: Charles Givre <cgivre@gmail.com>
>>> Sent: Tuesday, April 17, 2018 10:25:17 AM
>>> To: dev@drill.apache.org
>>> Subject: Re: Drill Tests Fail on Local Machine
>>> 
>>> Hi Tim,
>>> Here’s the error:
>>> 
>>> Results :
>>> 
>>> Tests in error:
>>> MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.
>>> 
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
>>> 
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] BUILD FAILURE
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] Total time: 39.848 s
>>> [INFO] Finished at: 2018-04-17T13:24:31-04:00
>>> [INFO] ------------------------------------------------------------------------
>>> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
>>> [ERROR]
>>> [ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
>>> [ERROR] -> [Help 1]
>>> [ERROR]
>>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>> [ERROR]
>>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
>>> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>>> 
>>> 
>>> 
>>> -------------------------------------------------------------------------------
>>> Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
>>> -------------------------------------------------------------------------------
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
>>> org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
>>> java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>>> 
>>> 
>>> 
>>> 
>>> <?xml version="1.0" encoding="UTF-8"?>
>>> <testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
>>> <properties>
>>>  <property name="os.detected.version" value="10.13"/>
>>>  <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
>>>  <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
>>>  <property name="java.vm.version" value="25.65-b01"/>
>>>  <property name="gopherProxySet" value="false"/>
>>>  <property name="java.vm.vendor" value="Oracle Corporation"/>
>>>  <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>>  <property name="java.vendor.url" value="http://java.oracle.com/"/>
>>>  <property name="os.detected.version.minor" value="13"/>
>>>  <property name="path.separator" value=":"/>
>>>  <property name="guice.disable.misplaced.annotation.check" value="true"/>
>>>  <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
>>>  <property name="file.encoding.pkg" value="sun.io"/>
>>>  <property name="user.country" value="US"/>
>>>  <property name="sun.java.launcher" value="SUN_STANDARD"/>
>>>  <property name="sun.os.patch.level" value="unknown"/>
>>>  <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
>>>  <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>>  <property name="java.runtime.version" value="1.8.0_65-b17"/>
>>>  <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
>>>  <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
>>>  <property name="os.arch" value="x86_64"/>
>>>  <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
>>>  <property name="line.separator" value="&#10;"/>
>>>  <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
>>>  <property name="os.detected.version.major" value="10"/>
>>>  <property name="os.name" value="Mac OS X"/>
>>>  <property name="os.detected.classifier" value="osx-x86_64"/>
>>>  <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
>>>  <property name="sun.jnu.encoding" value="UTF-8"/>
>>>  <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
>>>  <property name="os.detected.arch" value="x86_64"/>
>>>  <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
>>>  <property name="java.specification.name" value="Java Platform API Specification"/>
>>>  <property name="java.class.version" value="52.0"/>
>>>  <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
>>>  <property name="os.detected.name" value="osx"/>
>>>  <property name="os.version" value="10.13.4"/>
>>>  <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
>>>  <property name="user.home" value="/Users/cgivre"/>
>>>  <property name="user.timezone" value="America/New_York"/>
>>>  <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
>>>  <property name="java.specification.version" value="1.8"/>
>>>  <property name="file.encoding" value="UTF-8"/>
>>>  <property name="user.name" value="cgivre"/>
>>>  <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
>>>  <property name="java.vm.specification.version" value="1.8"/>
>>>  <property name="sun.arch.data.model" value="64"/>
>>>  <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
>>>  <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
>>>  <property name="java.specification.vendor" value="Oracle Corporation"/>
>>>  <property name="user.language" value="en"/>
>>>  <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
>>>  <property name="java.vm.info" value="mixed mode"/>
>>>  <property name="java.version" value="1.8.0_65"/>
>>>  <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
>>>  <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
>>>  <property name="java.vendor" value="Oracle Corporation"/>
>>>  <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
>>>  <property name="file.separator" value="/"/>
>>>  <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
>>>  <property name="sun.cpu.endian" value="little"/>
>>>  <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
>>>  <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
>>>  <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>>  <property name="sun.cpu.isalist" value=""/>
>>> </properties>
>>> <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
>>>  <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>>> ]]></error>
>>>  <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
>>> [mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
>>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>>> [mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>>> [mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
>>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>>> [mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
>>> [mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
>>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>>> [mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>>> [mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>>> [mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
>>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
>>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
>>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>>> [mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
>>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
>>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>>> [mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>>> [mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>>> [mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
>>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
>>> [mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>>> [mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>>> [mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
>>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
>>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
>>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>>> [mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
>>> [mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>>> [mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>>> [mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>>> [mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>>> [mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
>>> [mongod output] note: noprealloc may hurt performance in many applications
>>> [mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>>> [mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>>> [mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
>>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>>> [mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
>>> [mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
>>> [mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
>>> [mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
>>> [mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
>>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
>>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
>>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
>>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
>>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
>>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
>>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
>>> [mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>>> [mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>>> [mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>>> [mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>>> [mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>>> [mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>>> [mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>>> [mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>>> [mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>>> [mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>>> [mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>>> [mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
>>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
>>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
>>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
>>> [mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
>>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
>>> [mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>>> [mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
>>> [mongoimport error] fatal error: MSpanList_Insert
>>> [mongoimport error]
>>> [mongoimport error] runtime stack:
>>> [mongoimport error] runtime.throw(0x468d0b)
>>> [mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
>>> [mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
>>> [mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
>>> [mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
>>> [mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
>>> [mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
>>> [mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
>>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
>>> [mongoimport error] MCentral_Grow(0x48ea38, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
>>> [mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
>>> [mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
>>> [mongoimport error] runtime.mcacheRefill_m()
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
>>> [mongoimport error] runtime.onM(0x3917f8)
>>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
>>> [mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
>>> [mongoimport error] runtime.newobject(0x2f1120, 0x600000)
>>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
>>> [mongoimport error] runtime.newG(0x29c0a)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
>>> [mongoimport error] allocg(0x478540)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
>>> [mongoimport error] runtime.malg(0x8000, 0x4785e0)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
>>> [mongoimport error] runtime.mpreinit(0x478bc0)
>>> [mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
>>> [mongoimport error] mcommoninit(0x478bc0)
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
>>> [mongoimport error] runtime.schedinit()
>>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
>>> [mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
>>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
>>> [mongoimport error]
>>> 22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
>>> java.io.IOException: Could not start process: <EOF>
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
>>>      at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>>      at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
>>>      at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
>>>      at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>>      at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>>      at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
>>>      at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
>>>      at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>>      at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>>      at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
>>>      at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
>>>      at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
>>>      at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
>>>      at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
>>>      at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>>      at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>>>      at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
>>>      at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
>>> 2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
>>> [mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
>>> [mongod output] ]]></system-out>
>>> </testcase>
>>> </testsuite>
>>> 
>>> 
>>> 
>>> 
>>> 
>>>> On Apr 17, 2018, at 13:15, Timothy Farkas <tfarkas@mapr.com> wrote:
>>>> 
>>>> Hi Charles,
>>>> 
>>>> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the sub module you cd'ed into.
>>>> 
>>>> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
>>>> 
>>>> Thanks,
>>>> Tim
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> ________________________________
>>>> From: Charles Givre <cg...@gmail.com>
>>>> Sent: Tuesday, April 17, 2018 6:22:13 AM
>>>> To: dev@drill.apache.org
>>>> Subject: Drill Tests Fail on Local Machine
>>>> 
>>>> All,
>>>> I’m trying to test some things in Drill, and the unit tests for the MongoDB storage plugin are failing. I would like to know:
>>>> 1.  If it is possible to run only the unit tests for the components I am working on, OR
>>>> 2.  If it is possible to skip the MongoDB tests, and
>>>> 3.  If it isn’t, does MongoDB need to be installed on my local machine in order for this test to pass?
>>>> 
>>>> Any advice would be appreciated.
>>>> Thanks,
>>>> — C
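Tim's per-module workflow can be sketched as shell commands. This is a sketch under assumptions: the module path `contrib/storage-mongo` is inferred from the surefire-reports path in the error output above; substitute whichever module you are working on.

```shell
# From the root of the Drill source tree: build and test a single module.
# -pl selects the module by its path; -am ("also make") additionally builds
# the modules it depends on, so it compiles against fresh artifacts.
mvn clean install -pl contrib/storage-mongo -am

# To skip running tests entirely while still building everything:
mvn clean install -DskipTests
```

`-pl`/`-am` are standard Maven reactor options; whether skipping a test suite is acceptable for a given change is a separate question for reviewers.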


Re: Drill Tests Fail on Local Machine

Posted by Timothy Farkas <tf...@mapr.com>.
Hi Charles,

Can you try running the mongodb tests using this branch https://github.com/ilooner/drill/tree/mongodb-test-fix ?

I can't reproduce the issue on my machine, but others reported that setting the LC_ALL environment variable and upgrading the version of the test library resolved the issue for them. The branch includes both fixes.
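As a concrete form of that workaround, exporting a UTF-8 locale before the test run looks like the following. The value en_US.UTF-8 is only an example; any installed UTF-8 locale should do.

```shell
# The embedded mongoimport binary can crash on startup (the "<EOF>"
# failure seen earlier in this thread) when no locale is set.
# Export one for the current shell before invoking Maven.
export LC_ALL=en_US.UTF-8

# Sanity check: locale should now report a non-empty LC_ALL.
locale | grep LC_ALL
```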

Thanks,
Tim

________________________________
From: Charles Givre <cg...@gmail.com>
Sent: Tuesday, April 17, 2018 3:54:18 PM
To: dev@drill.apache.org
Subject: Re: Drill Tests Fail on Local Machine

Here you go


cgivre$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=

Thank you VERY much for your help!


> On Apr 17, 2018, at 18:53, Timothy Farkas <tf...@mapr.com> wrote:
>
> Hi Charles,
>
> It looks like some people had issues with the MongoDB test library when their locale was not set properly. Can you provide the output of the locale command on your Mac?
>
> Thanks,
> Tim
> ________________________________
> From: Charles Givre <cgivre@gmail.com>
> Sent: Tuesday, April 17, 2018 2:02:11 PM
> To: dev@drill.apache.org
> Subject: Re: Drill Tests Fail on Local Machine
>
> Hi Tim,
> Thanks for getting back with me. I’m on a Mac with High Sierra.
>
> Here’s my Java info:
> java -version
> java version "1.8.0_65"
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
>
> Thank you very much for your help!
> — C
>
>
>> On Apr 17, 2018, at 16:56, Timothy Farkas <tf...@mapr.com> wrote:
>>
>> Hi Charles,
>>
>> According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple of possibilities. Are you running 32-bit Java, or are you developing on Windows?
>>
>> Thanks,
>> Tim
>>
>> ________________________________
>> From: Charles Givre <cgivre@gmail.com>
>> Sent: Tuesday, April 17, 2018 10:25:17 AM
>> To: dev@drill.apache.org
>> Subject: Re: Drill Tests Fail on Local Machine
>>
>> Hi Tim,
>> Here’s the error:
>>
>> Results :
>>
>> Tests in error:
>> MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.
>>
>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
>>
>> [INFO] ------------------------------------------------------------------------
>> [INFO] BUILD FAILURE
>> [INFO] ------------------------------------------------------------------------
>> [INFO] Total time: 39.848 s
>> [INFO] Finished at: 2018-04-17T13:24:31-04:00
>> [INFO] ------------------------------------------------------------------------
>> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
>> [ERROR]
>> [ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
>> [ERROR] -> [Help 1]
>> [ERROR]
>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> [ERROR]
>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
>> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>>
>>
>>
>> -------------------------------------------------------------------------------
>> Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
>> -------------------------------------------------------------------------------
>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
>> org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
>> java.io.IOException: Could not start process: <EOF>
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>       at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>       at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>       at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>       at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>>
>>
>>
>>
>> <?xml version="1.0" encoding="UTF-8"?>
>> <testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
>> <properties>
>>   <property name="os.detected.version" value="10.13"/>
>>   <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
>>   <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
>>   <property name="java.vm.version" value="25.65-b01"/>
>>   <property name="gopherProxySet" value="false"/>
>>   <property name="java.vm.vendor" value="Oracle Corporation"/>
>>   <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>   <property name="java.vendor.url" value="http://java.oracle.com/"/>
>>   <property name="os.detected.version.minor" value="13"/>
>>   <property name="path.separator" value=":"/>
>>   <property name="guice.disable.misplaced.annotation.check" value="true"/>
>>   <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
>>   <property name="file.encoding.pkg" value="sun.io"/>
>>   <property name="user.country" value="US"/>
>>   <property name="sun.java.launcher" value="SUN_STANDARD"/>
>>   <property name="sun.os.patch.level" value="unknown"/>
>>   <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
>>   <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>   <property name="java.runtime.version" value="1.8.0_65-b17"/>
>>   <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
>>   <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
>>   <property name="os.arch" value="x86_64"/>
>>   <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
>>   <property name="line.separator" value="&#10;"/>
>>   <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
>>   <property name="os.detected.version.major" value="10"/>
>>   <property name="os.name" value="Mac OS X"/>
>>   <property name="os.detected.classifier" value="osx-x86_64"/>
>>   <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
>>   <property name="sun.jnu.encoding" value="UTF-8"/>
>>   <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
>>   <property name="os.detected.arch" value="x86_64"/>
>>   <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
>>   <property name="java.specification.name" value="Java Platform API Specification"/>
>>   <property name="java.class.version" value="52.0"/>
>>   <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
>>   <property name="os.detected.name" value="osx"/>
>>   <property name="os.version" value="10.13.4"/>
>>   <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>   <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
>>   <property name="user.home" value="/Users/cgivre"/>
>>   <property name="user.timezone" value="America/New_York"/>
>>   <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
>>   <property name="java.specification.version" value="1.8"/>
>>   <property name="file.encoding" value="UTF-8"/>
>>   <property name="user.name" value="cgivre"/>
>>   <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
>>   <property name="java.vm.specification.version" value="1.8"/>
>>   <property name="sun.arch.data.model" value="64"/>
>>   <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
>>   <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
>>   <property name="java.specification.vendor" value="Oracle Corporation"/>
>>   <property name="user.language" value="en"/>
>>   <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
>>   <property name="java.vm.info" value="mixed mode"/>
>>   <property name="java.version" value="1.8.0_65"/>
>>   <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
>>   <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
>>   <property name="java.vendor" value="Oracle Corporation"/>
>>   <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
>>   <property name="file.separator" value="/"/>
>>   <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
>>   <property name="sun.cpu.endian" value="little"/>
>>   <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
>>   <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
>>   <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>   <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>   <property name="sun.cpu.isalist" value=""/>
>> </properties>
>> <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
>>   <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>       at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>       at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>       at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>       at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>> ]]></error>
>>   <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
>> [mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>> [mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>> [mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
>> [mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>> [mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>> [mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>> [mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
>> [mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>> [mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>> [mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>> [mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>> [mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>> [mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>> [mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
>> [mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
>> [mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
>> [mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>> [mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>> [mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
>> [mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>> [mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>> [mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>> [mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>> [mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>> [mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>> [mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>> [mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>> [mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
>> [mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>> [mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
>> [mongoimport error] fatal error: MSpanList_Insert
>> [mongoimport error]
>> [mongoimport error] runtime stack:
>> [mongoimport error] runtime.throw(0x468d0b)
>> [mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
>> [mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
>> [mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
>> [mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
>> [mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
>> [mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
>> [mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
>> [mongoimport error] MCentral_Grow(0x48ea38, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
>> [mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
>> [mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
>> [mongoimport error] runtime.mcacheRefill_m()
>> [mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
>> [mongoimport error] runtime.onM(0x3917f8)
>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
>> [mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
>> [mongoimport error] runtime.newobject(0x2f1120, 0x600000)
>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
>> [mongoimport error] runtime.newG(0x29c0a)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
>> [mongoimport error] allocg(0x478540)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
>> [mongoimport error] runtime.malg(0x8000, 0x4785e0)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
>> [mongoimport error] runtime.mpreinit(0x478bc0)
>> [mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
>> [mongoimport error] mcommoninit(0x478bc0)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
>> [mongoimport error] runtime.schedinit()
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
>> [mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
>> [mongoimport error]
>> 22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
>> java.io.IOException: Could not start process: <EOF>
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
>>       at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>       at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
>>       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
>>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
>>       at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
>>       at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>       at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>       at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
>>       at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
>>       at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
>>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
>>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
>>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>       at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
>>       at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>       at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>>       at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
>>       at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
>>       at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
>> 2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
>> [mongod output] ]]></system-out>
>> </testcase>
>> </testsuite>
>>
>>
>>
>>
>>
>>> On Apr 17, 2018, at 13:15, Timothy Farkas <tf...@mapr.com> wrote:
>>>
>>> Hi Charles,
>>>
>>> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the sub module you cd'ed into.
>>>
>>> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
>>>
>>> Thanks,
>>> Tim
>>>
>>>
>>>
>>>
>>>
>>> ________________________________
>>> From: Charles Givre <cg...@gmail.com>
>>> Sent: Tuesday, April 17, 2018 6:22:13 AM
>>> To: dev@drill.apache.org
>>> Subject: Drill Tests Fail on Local Machine
>>>
>>> All,
>>> I’m trying to test some things in Drill and the unit tests for the MongoDB storage plugin are failing.   I would like to know:
>>> 1.  If it is possible to run only the unit tests for the components I am working on, OR
>>> 2.  If it is possible to skip the MongoDB tests
>>> 3.  If it isn’t, does MongoDB need to be installed on my local machine in order for this test to pass?
>>>
>>> Any advice would be appreciated.
>>> Thanks,
>>> — C
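
For reference, Tim's suggestion above about testing a single module can be sketched as the following commands. The module path is taken from the error output in this thread; the skip/selection flags are standard Maven options rather than anything stated in the thread, so treat this as a sketch:

```shell
# Run only the mongo storage plugin's unit tests:
cd contrib/storage-mongo
mvn clean install

# Alternatively, from the repository root, build everything except that module:
mvn clean install -pl '!contrib/storage-mongo'

# Or skip running tests altogether:
mvn clean install -DskipTests
```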


Re: Drill Tests Fail on Local Machine

Posted by Charles Givre <cg...@gmail.com>.
Here you go


cgivre$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=

Thank you VERY much for your help!
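
Note that LC_ALL is empty in the locale output above. Exporting a UTF-8 locale before running the build is the workaround reported in the flapdoodle issue thread Tim links below; a minimal sketch, assuming en_US.UTF-8 is available on the machine:

```shell
# LC_ALL is unset in the output above; give the embedded-mongo tooling
# an explicit UTF-8 locale before launching the tests.
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
# then re-run the tests, e.g.: mvn clean install -pl contrib/storage-mongo
```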


> On Apr 17, 2018, at 18:53, Timothy Farkas <tf...@mapr.com> wrote:
> 
> Hi Charles,
> 
> It looks like some people had issues with the MongoDB test library when their locale was not set properly. Can you provide the output of the locale command on your Mac?
> 
> Thanks,
> Tim
> ________________________________
> From: Charles Givre <cgivre@gmail.com>
> Sent: Tuesday, April 17, 2018 2:02:11 PM
> To: dev@drill.apache.org
> Subject: Re: Drill Tests Fail on Local Machine
> 
> Hi Tim,
> Thanks for getting back to me. I’m on a Mac running High Sierra.
> 
> Here’s my Java info:
> java -version
> java version "1.8.0_65"
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
> 
> Thank you very much for your help!
> — C
> 
> 
>> On Apr 17, 2018, at 16:56, Timothy Farkas <tf...@mapr.com> wrote:
>> 
>> Hi Charles,
>> 
>> According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple possibilities. Are you running 32-bit java or are you developing on Windows?
>> 
>> Thanks,
>> Tim
>> 
>> ________________________________
>> From: Charles Givre <cgivre@gmail.com>
>> Sent: Tuesday, April 17, 2018 10:25:17 AM
>> To: dev@drill.apache.org
>> Subject: Re: Drill Tests Fail on Local Machine
>> 
>> Hi Tim,
>> Here’s the error:
>> 
>> Results :
>> 
>> Tests in error:
>> MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.
>> 
>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
>> 
>> [INFO] ------------------------------------------------------------------------
>> [INFO] BUILD FAILURE
>> [INFO] ------------------------------------------------------------------------
>> [INFO] Total time: 39.848 s
>> [INFO] Finished at: 2018-04-17T13:24:31-04:00
>> [INFO] ------------------------------------------------------------------------
>> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
>> [ERROR]
>> [ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
>> [ERROR] -> [Help 1]
>> [ERROR]
>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> [ERROR]
>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
>> [ERROR] [Help 1] https://urldefense.proofpoint.com/v2/url?u=http-3A__cwiki.apache.org_confluence_display_MAVEN_MojoFailureException&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=qq8r1rCmNKgCzG8bqNJRyXVk6EhCtQuKz-AU7hO8CQ8&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__cwiki.apache.org_confluence_display_MAVEN_MojoFailureException&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=qq8r1rCmNKgCzG8bqNJRyXVk6EhCtQuKz-AU7hO8CQ8&e=><https://urldefense.proofpoint.com/v2/url?u=http-3A__cwiki.apache.org_confluence_display_MAVEN_MojoFailureException&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=qq8r1rCmNKgCzG8bqNJRyXVk6EhCtQuKz-AU7hO8CQ8&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__cwiki.apache.org_confluence_display_MAVEN_MojoFailureException&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=qq8r1rCmNKgCzG8bqNJRyXVk6EhCtQuKz-AU7hO8CQ8&e=>>
>> 
>> 
>> 
>> -------------------------------------------------------------------------------
>> Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
>> -------------------------------------------------------------------------------
>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
>> org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
>> java.io.IOException: Could not start process: <EOF>
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>       at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>       at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>       at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>       at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>> 
>> 
>> 
>> 
>> <?xml version="1.0" encoding="UTF-8"?>
>> <testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
>> <properties>
>>   <property name="os.detected.version" value="10.13"/>
>>   <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
>>   <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
>>   <property name="java.vm.version" value="25.65-b01"/>
>>   <property name="gopherProxySet" value="false"/>
>>   <property name="java.vm.vendor" value="Oracle Corporation"/>
>>   <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>   <property name="java.vendor.url" value="https://urldefense.proofpoint.com/v2/url?u=http-3A__java.oracle.com_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=VEi5z_Vg0yUYTV2Ke6QMnOZYHePa4smPvnMh3cpY7gg&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__java.oracle.com_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=VEi5z_Vg0yUYTV2Ke6QMnOZYHePa4smPvnMh3cpY7gg&e=><https://urldefense.proofpoint.com/v2/url?u=http-3A__java.oracle.com_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=VEi5z_Vg0yUYTV2Ke6QMnOZYHePa4smPvnMh3cpY7gg&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__java.oracle.com_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=VEi5z_Vg0yUYTV2Ke6QMnOZYHePa4smPvnMh3cpY7gg&e=>>>
>>   <property name="os.detected.version.minor" value="13"/>
>>   <property name="path.separator" value=":"/>
>>   <property name="guice.disable.misplaced.annotation.check" value="true"/>
>>   <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
>>   <property name="file.encoding.pkg" value="sun.io <http://sun.io/> <https://urldefense.proofpoint.com/v2/url?u=http-3A__sun.io_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=XaVHrLst5fuHPkMguZsL2qWZTXyFbR0_nf0vnx2-xgk&s=ZjjgOaK7wiTXE3oB4evZsx5aK5hTg_O-jNu4o3hRvxE&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__sun.io_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=XaVHrLst5fuHPkMguZsL2qWZTXyFbR0_nf0vnx2-xgk&s=ZjjgOaK7wiTXE3oB4evZsx5aK5hTg_O-jNu4o3hRvxE&e=>>"/>
>>   <property name="user.country" value="US"/>
>>   <property name="sun.java.launcher" value="SUN_STANDARD"/>
>>   <property name="sun.os.patch.level" value="unknown"/>
>>   <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
>>   <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>>   <property name="java.runtime.version" value="1.8.0_65-b17"/>
>>   <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
>>   <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
>>   <property name="os.arch" value="x86_64"/>
>>   <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
>>   <property name="line.separator" value="&#10;"/>
>>   <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
>>   <property name="os.detected.version.major" value="10"/>
>>   <property name="os.name" value="Mac OS X"/>
>>   <property name="os.detected.classifier" value="osx-x86_64"/>
>>   <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
>>   <property name="sun.jnu.encoding" value="UTF-8"/>
>>   <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
>>   <property name="os.detected.arch" value="x86_64"/>
>>   <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
>>   <property name="java.specification.name" value="Java Platform API Specification"/>
>>   <property name="java.class.version" value="52.0"/>
>>   <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
>>   <property name="os.detected.name" value="osx"/>
>>   <property name="os.version" value="10.13.4"/>
>>   <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>   <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
>>   <property name="user.home" value="/Users/cgivre"/>
>>   <property name="user.timezone" value="America/New_York"/>
>>   <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
>>   <property name="java.specification.version" value="1.8"/>
>>   <property name="file.encoding" value="UTF-8"/>
>>   <property name="user.name" value="cgivre"/>
>>   <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
>>   <property name="java.vm.specification.version" value="1.8"/>
>>   <property name="sun.arch.data.model" value="64"/>
>>   <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
>>   <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
>>   <property name="java.specification.vendor" value="Oracle Corporation"/>
>>   <property name="user.language" value="en"/>
>>   <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
>>   <property name="java.vm.info <http://java.vm.info/> <https://urldefense.proofpoint.com/v2/url?u=http-3A__java.vm.info_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=XaVHrLst5fuHPkMguZsL2qWZTXyFbR0_nf0vnx2-xgk&s=Bez-z4Rt_WMbv9B4-9LE_HvGd4MDlQ7vtCQ-h8uk4Wg&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__java.vm.info_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=XaVHrLst5fuHPkMguZsL2qWZTXyFbR0_nf0vnx2-xgk&s=Bez-z4Rt_WMbv9B4-9LE_HvGd4MDlQ7vtCQ-h8uk4Wg&e=>>" value="mixed mode"/>
>>   <property name="java.version" value="1.8.0_65"/>
>>   <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
>>   <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
>>   <property name="java.vendor" value="Oracle Corporation"/>
>>   <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
>>   <property name="file.separator" value="/"/>
>>   <property name="java.vendor.url.bug" value="https://urldefense.proofpoint.com/v2/url?u=http-3A__bugreport.sun.com_bugreport_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=yVWkRGc92-mMN8qezp7ehs85h8LeFgNN0ulHlKXiasE&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__bugreport.sun.com_bugreport_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=yVWkRGc92-mMN8qezp7ehs85h8LeFgNN0ulHlKXiasE&e=> <https://urldefense.proofpoint.com/v2/url?u=http-3A__bugreport.sun.com_bugreport_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=yVWkRGc92-mMN8qezp7ehs85h8LeFgNN0ulHlKXiasE&e= <https://urldefense.proofpoint.com/v2/url?u=http-3A__bugreport.sun.com_bugreport_-2522_&d=DwIFaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=4eQVr8zB8ZBff-yxTimdOQ&m=QASXSGvaWCynAcdxyMzhcsEck1PKPXfV7b3-csdlrD0&s=yVWkRGc92-mMN8qezp7ehs85h8LeFgNN0ulHlKXiasE&e=>>>
>>   <property name="sun.cpu.endian" value="little"/>
>>   <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
>>   <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
>>   <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>   <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>>   <property name="sun.cpu.isalist" value=""/>
>> </properties>
>> <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
>>   <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>>       at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>>       at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>>       at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>>       at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>> ]]></error>
>>   <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
>> [mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
>> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>> [mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
>> [mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
>> [mongod output] note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
>> [mongod output] note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
>> [mongod output] note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
>> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
>> [mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
>> [mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
>> [mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
>> [mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>> [mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>> [mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
>> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
>> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
>> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
>> [mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
>> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
>> [mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
>> [mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
>> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>> [mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>> [mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>> [mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
>> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
>> [mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>> [mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>> [mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
>> [mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
>> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
>> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
>> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
>> [mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
>> [mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
>> [mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
>> [mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
>> [mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
>> [mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
>> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
>> [mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
>> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
>> [mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
>> [mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
>> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
>> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
>> [mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
>> [mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
>> [mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
>> [mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
>> [mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
>> [mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
>> [mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
>> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
>> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
>> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
>> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
>> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
>> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
>> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
>> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
>> [mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>> [mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
>> [mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
>> [mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
>> [mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
>> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
>> [mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
>> [mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
>> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
>> [mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
>> [mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
>> [mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
>> [mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>> [mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>> [mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
>> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
>> [mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
>> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
>> [mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
>> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
>> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
>> [mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
>> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
>> [mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
>> [mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
>> [mongoimport error] fatal error: MSpanList_Insert
>> [mongoimport error]
>> [mongoimport error] runtime stack:
>> [mongoimport error] runtime.throw(0x468d0b)
>> [mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
>> [mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
>> [mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
>> [mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
>> [mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
>> [mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
>> [mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
>> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
>> [mongoimport error] MCentral_Grow(0x48ea38, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
>> [mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
>> [mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
>> [mongoimport error] runtime.mcacheRefill_m()
>> [mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
>> [mongoimport error] runtime.onM(0x3917f8)
>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
>> [mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
>> [mongoimport error] runtime.newobject(0x2f1120, 0x600000)
>> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
>> [mongoimport error] runtime.newG(0x29c0a)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
>> [mongoimport error] allocg(0x478540)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
>> [mongoimport error] runtime.malg(0x8000, 0x4785e0)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
>> [mongoimport error] runtime.mpreinit(0x478bc0)
>> [mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
>> [mongoimport error] mcommoninit(0x478bc0)
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
>> [mongoimport error] runtime.schedinit()
>> [mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
>> [mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
>> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
>> [mongoimport error]
>> 22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
>> java.io.IOException: Could not start process: <EOF>
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
>>       at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>>       at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
>>       at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
>>       at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>       at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
>>       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
>>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
>>       at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
>>       at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>>       at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>>       at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
>>       at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
>>       at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
>>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
>>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
>>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>       at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
>>       at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
>>       at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>>       at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>>       at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
>>       at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
>>       at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
>>       at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
>> 2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
>> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
>> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
>> [mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
>> [mongod output] ]]></system-out>
>> </testcase>
>> </testsuite>
>> 
>> 
>> 
>> 
>> 
>>> On Apr 17, 2018, at 13:15, Timothy Farkas <tf...@mapr.com> wrote:
>>> 
>>> Hi Charles,
>>> 
>>> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the sub module you cd'ed into.
>>> 
>>> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
>>> 
>>> Thanks,
>>> Tim
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ________________________________
>>> From: Charles Givre <cg...@gmail.com>
>>> Sent: Tuesday, April 17, 2018 6:22:13 AM
>>> To: dev@drill.apache.org
>>> Subject: Drill Tests Fail on Local Machine
>>> 
>>> All,
>>> I’m trying to test some things in Drill, and the unit tests for the MongoDB storage plugin are failing.   I would like to know:
>>> 1.  If it is possible to only run the unit tests for the components I am working on, OR
>>> 2.  If it is possible to skip the MongoDB tests, OR
>>> 3.  If not, whether MongoDB needs to be installed on my local machine in order for these tests to pass.
>>> 
>>> Any advice would be appreciated.
>>> Thanks,
>>> — C
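Tim's suggestions above (run a single submodule's tests, or skip the Mongo module) can be sketched as follows. This is a sketch only: the `contrib/storage-mongo` path is taken from the surefire-reports path in the log, and the `-pl` exclusion syntax requires Maven 3.2.1 or later.

```shell
# Run only the mongo storage plugin's unit tests, per Tim's suggestion:
cd contrib/storage-mongo
mvn clean install

# Or, from the project root, build everything but exclude the mongo module
# ('!' excludes a module from the reactor; needs Maven 3.2.1+):
mvn clean install -pl '!contrib/storage-mongo'

# Or compile without running any tests at all:
mvn clean install -DskipTests
```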


Re: Drill Tests Fail on Local Machine

Posted by Timothy Farkas <tf...@mapr.com>.
Hi Charles,

It looks like some people have had issues with the MongoDB test library when their locale was not set properly. Can you provide the output of the locale command on your Mac?

Thanks,
Tim
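The locale check Tim suggests can be sketched as follows (a hedged illustration: en_US.UTF-8 is an assumed example value, and exporting a UTF-8 locale is a workaround reported for embedded-MongoDB startup failures, not a guaranteed fix):

```shell
# Inspect the current locale settings; empty LANG/LC_* values have been
# reported to break the embedded MongoDB test library.
locale

# If LANG is unset, export an assumed UTF-8 locale before re-running the build.
if [ -z "${LANG:-}" ]; then
    export LANG=en_US.UTF-8
    export LC_ALL=en_US.UTF-8
fi
```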
________________________________
From: Charles Givre <cg...@gmail.com>
Sent: Tuesday, April 17, 2018 2:02:11 PM
To: dev@drill.apache.org
Subject: Re: Drill Tests Fail on Local Machine

Hi Tim,
Thanks for getting back with me. I’m on a Mac with High Sierra.

Here’s my Java info:
 java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

Thank you very much for your help!
— C


> On Apr 17, 2018, at 16:56, Timothy Farkas <tf...@mapr.com> wrote:
>
> Hi Charles,
>
> According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple of possibilities. Are you running 32-bit Java, or are you developing on Windows?
>
> Thanks,
> Tim
>
> ________________________________
> From: Charles Givre <cgivre@gmail.com <ma...@gmail.com>>
> Sent: Tuesday, April 17, 2018 10:25:17 AM
> To: dev@drill.apache.org <ma...@drill.apache.org>
> Subject: Re: Drill Tests Fail on Local Machine
>
> Hi Tim,
> Here’s the error:
>
> Results :
>
> Tests in error:
>  MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.
>
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
>
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 39.848 s
> [INFO] Finished at: 2018-04-17T13:24:31-04:00
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
> [ERROR]
> [ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please read the following articles:
> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>
>
>
> -------------------------------------------------------------------------------
> Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
> -------------------------------------------------------------------------------
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
> org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
> java.io.IOException: Could not start process: <EOF>
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
>
>
>
>
> <?xml version="1.0" encoding="UTF-8"?>
> <testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
>  <properties>
>    <property name="os.detected.version" value="10.13"/>
>    <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
>    <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
>    <property name="java.vm.version" value="25.65-b01"/>
>    <property name="gopherProxySet" value="false"/>
>    <property name="java.vm.vendor" value="Oracle Corporation"/>
>    <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>    <property name="java.vendor.url" value="http://java.oracle.com/"/>
>    <property name="os.detected.version.minor" value="13"/>
>    <property name="path.separator" value=":"/>
>    <property name="guice.disable.misplaced.annotation.check" value="true"/>
>    <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
>    <property name="file.encoding.pkg" value="sun.io"/>
>    <property name="user.country" value="US"/>
>    <property name="sun.java.launcher" value="SUN_STANDARD"/>
>    <property name="sun.os.patch.level" value="unknown"/>
>    <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
>    <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>    <property name="java.runtime.version" value="1.8.0_65-b17"/>
>    <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
>    <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
>    <property name="os.arch" value="x86_64"/>
>    <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
>    <property name="line.separator" value="&#10;"/>
>    <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
>    <property name="os.detected.version.major" value="10"/>
>    <property name="os.name" value="Mac OS X"/>
>    <property name="os.detected.classifier" value="osx-x86_64"/>
>    <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
>    <property name="sun.jnu.encoding" value="UTF-8"/>
>    <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
>    <property name="os.detected.arch" value="x86_64"/>
>    <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
>    <property name="java.specification.name" value="Java Platform API Specification"/>
>    <property name="java.class.version" value="52.0"/>
>    <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
>    <property name="os.detected.name" value="osx"/>
>    <property name="os.version" value="10.13.4"/>
>    <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>    <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
>    <property name="user.home" value="/Users/cgivre"/>
>    <property name="user.timezone" value="America/New_York"/>
>    <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
>    <property name="java.specification.version" value="1.8"/>
>    <property name="file.encoding" value="UTF-8"/>
>    <property name="user.name" value="cgivre"/>
>    <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
>    <property name="java.vm.specification.version" value="1.8"/>
>    <property name="sun.arch.data.model" value="64"/>
>    <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
>    <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
>    <property name="java.specification.vendor" value="Oracle Corporation"/>
>    <property name="user.language" value="en"/>
>    <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
>    <property name="java.vm.info" value="mixed mode"/>
>    <property name="java.version" value="1.8.0_65"/>
>    <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
>    <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
>    <property name="java.vendor" value="Oracle Corporation"/>
>    <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
>    <property name="file.separator" value="/"/>
>    <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
>    <property name="sun.cpu.endian" value="little"/>
>    <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
>    <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
>    <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>    <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>    <property name="sun.cpu.isalist" value=""/>
>  </properties>
>  <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
>    <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
> ]]></error>
>    <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
> [mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
> [mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
> [mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
> [mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
> [mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
> [mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
> [mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
> [mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
> [mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
> [mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
> [mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
> [mongod output] note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
> [mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
> [mongod output] note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
> [mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
> [mongod output] note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
> [mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
> [mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
> [mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
> [mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
> [mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
> [mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
> [mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
> [mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
> [mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
> [mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
> [mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
> [mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
> [mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
> [mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
> [mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
> [mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
> [mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
> [mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
> [mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
> [mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
> [mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
> [mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
> [mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
> [mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
> [mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
> [mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
> [mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
> [mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
> [mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
> [mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
> [mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
> [mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
> [mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
> [mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
> [mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
> [mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
> [mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
> [mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
> [mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
> [mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
> [mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
> [mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
> [mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
> [mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
> [mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
> [mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
> [mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
> [mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
> [mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
> [mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
> [mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
> [mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
> [mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
> [mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
> [mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
> [mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
> [mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
> [mongoimport error] fatal error: MSpanList_Insert
> [mongoimport error]
> [mongoimport error] runtime stack:
> [mongoimport error] runtime.throw(0x468d0b)
> [mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
> [mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
> [mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
> [mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
> [mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
> [mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
> [mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
> [mongoimport error] MCentral_Grow(0x48ea38, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
> [mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
> [mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
> [mongoimport error] runtime.mcacheRefill_m()
> [mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
> [mongoimport error] runtime.onM(0x3917f8)
> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
> [mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
> [mongoimport error] runtime.newobject(0x2f1120, 0x600000)
> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
> [mongoimport error] runtime.newG(0x29c0a)
> [mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
> [mongoimport error] allocg(0x478540)
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
> [mongoimport error] runtime.malg(0x8000, 0x4785e0)
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
> [mongoimport error] runtime.mpreinit(0x478bc0)
> [mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
> [mongoimport error] mcommoninit(0x478bc0)
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
> [mongoimport error] runtime.schedinit()
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
> [mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
> [mongoimport error]
> 22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
> java.io.IOException: Could not start process: <EOF>
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
>        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>        at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
>        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
>        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
>        at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
>        at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>        at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>        at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
>        at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
>        at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
>        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
>        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
>        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>        at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
>        at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>        at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>        at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
>        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
>        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
>        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
> 2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
> [mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
> [mongod output] ]]></system-out>
>  </testcase>
> </testsuite>
>
>
>
>
>
>> On Apr 17, 2018, at 13:15, Timothy Farkas <tf...@mapr.com> wrote:
>>
>> Hi Charles,
>>
>> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the sub module you cd'ed into.
>>
>> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
>>
>> Thanks,
>> Tim
>>
>>
>>
>>
>>
>> ________________________________
>> From: Charles Givre <cg...@gmail.com>
>> Sent: Tuesday, April 17, 2018 6:22:13 AM
>> To: dev@drill.apache.org
>> Subject: Drill Tests Fail on Local Machine
>>
>> All,
>> I’m trying to test some things in Drill and the unit tests for the MongoDB storage plugin are failing. I would like to know:
>> 1.  If it is possible to only run the unit tests for the components I am working on, OR
>> 2.  If it is possible to skip the MongoDB tests, OR
>> 3.  If neither is possible, whether MongoDB needs to be installed on my local machine in order for these tests to pass
>>
>> Any advice would be appreciated.
>> Thanks,
>> — C
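[For readers searching the archives: Tim's suggestion can be sketched as the following commands. This is a sketch, not project documentation; the module path contrib/storage-mongo is assumed from the surefire report path in the Maven output above, and all commands are run from the Drill source root.]

```shell
# Option 1: build and test only the mongo storage plugin module.
cd contrib/storage-mongo
mvn clean install

# Option 2: from the source root, build everything EXCEPT that module.
# The leading '!' exclusion syntax requires Maven 3.2.1 or later.
mvn clean install -pl '!contrib/storage-mongo'

# Option 3: build everything but skip running the unit tests entirely.
mvn clean install -DskipTests
```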


Re: Drill Tests Fail on Local Machine

Posted by Charles Givre <cg...@gmail.com>.
Hi Tim, 
Thanks for getting back with me. I’m on a Mac with High Sierra. 

Here’s my Java info:
 java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

Thank you very much for your help!
— C
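[Archive note: the "64-Bit Server VM" line in the banner above is the relevant detail for the 32-bit question. A minimal sketch of that check, assuming `java` is on the PATH:]

```shell
# A 64-bit HotSpot JVM prints "64-Bit Server VM" in its version banner
# (which java writes to stderr); grep exits 0 only if that marker appears.
if java -version 2>&1 | grep -q '64-Bit'; then
    echo "64-bit JVM"
else
    echo "not a 64-bit JVM (or java not found)"
fi
```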


> On Apr 17, 2018, at 16:56, Timothy Farkas <tf...@mapr.com> wrote:
> 
> Hi Charles,
> 
> According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple of possibilities. Are you running 32-bit Java, or are you developing on Windows?
> 
> Thanks,
> Tim
> 
> ________________________________
> From: Charles Givre <cgivre@gmail.com>
> Sent: Tuesday, April 17, 2018 10:25:17 AM
> To: dev@drill.apache.org
> Subject: Re: Drill Tests Fail on Local Machine
> 
> Hi Tim,
> Here’s the error:
> 
> Results :
> 
> Tests in error:
>  MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.
> 
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 39.848 s
> [INFO] Finished at: 2018-04-17T13:24:31-04:00
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
> [ERROR]
> [ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please read the following articles:
> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> 
> 
> 
> -------------------------------------------------------------------------------
> Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
> -------------------------------------------------------------------------------
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
> org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
> java.io.IOException: Could not start process: <EOF>
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
> 
> 
> 
> 
> <?xml version="1.0" encoding="UTF-8"?>
> <testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
>  <properties>
>    <property name="os.detected.version" value="10.13"/>
>    <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
>    <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
>    <property name="java.vm.version" value="25.65-b01"/>
>    <property name="gopherProxySet" value="false"/>
>    <property name="java.vm.vendor" value="Oracle Corporation"/>
>    <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>    <property name="java.vendor.url" value="http://java.oracle.com/"/>
>    <property name="os.detected.version.minor" value="13"/>
>    <property name="path.separator" value=":"/>
>    <property name="guice.disable.misplaced.annotation.check" value="true"/>
>    <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
>    <property name="file.encoding.pkg" value="sun.io"/>
>    <property name="user.country" value="US"/>
>    <property name="sun.java.launcher" value="SUN_STANDARD"/>
>    <property name="sun.os.patch.level" value="unknown"/>
>    <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
>    <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
>    <property name="java.runtime.version" value="1.8.0_65-b17"/>
>    <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
>    <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
>    <property name="os.arch" value="x86_64"/>
>    <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
>    <property name="line.separator" value="&#10;"/>
>    <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
>    <property name="os.detected.version.major" value="10"/>
>    <property name="os.name" value="Mac OS X"/>
>    <property name="os.detected.classifier" value="osx-x86_64"/>
>    <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
>    <property name="sun.jnu.encoding" value="UTF-8"/>
>    <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
>    <property name="os.detected.arch" value="x86_64"/>
>    <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
>    <property name="java.specification.name" value="Java Platform API Specification"/>
>    <property name="java.class.version" value="52.0"/>
>    <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
>    <property name="os.detected.name" value="osx"/>
>    <property name="os.version" value="10.13.4"/>
>    <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>    <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
>    <property name="user.home" value="/Users/cgivre"/>
>    <property name="user.timezone" value="America/New_York"/>
>    <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
>    <property name="java.specification.version" value="1.8"/>
>    <property name="file.encoding" value="UTF-8"/>
>    <property name="user.name" value="cgivre"/>
>    <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
>    <property name="java.vm.specification.version" value="1.8"/>
>    <property name="sun.arch.data.model" value="64"/>
>    <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
>    <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
>    <property name="java.specification.vendor" value="Oracle Corporation"/>
>    <property name="user.language" value="en"/>
>    <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
>    <property name="java.vm.info" value="mixed mode"/>
>    <property name="java.version" value="1.8.0_65"/>
>    <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
>    <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
>    <property name="java.vendor" value="Oracle Corporation"/>
>    <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
>    <property name="file.separator" value="/"/>
>    <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
>    <property name="sun.cpu.endian" value="little"/>
>    <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
>    <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
>    <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>    <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
>    <property name="sun.cpu.isalist" value=""/>
>  </properties>
>  <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
>    <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
>        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
>        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
>        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
>        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
> ]]></error>
>    <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
> [mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
> [mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
> [mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
> [mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
> [mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
> [mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
> [mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
> [mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
> [mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
> [mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
> [mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
> [mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
> [mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
> [mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
> [mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
> [mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
> [mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
> [mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
> [mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
> [mongod output] note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
> [mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
> [mongod output] note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
> [mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
> [mongod output] note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
> [mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
> [mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
> [mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
> [mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
> [mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
> [mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
> [mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
> [mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
> [mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
> [mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
> [mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
> [mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
> [mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
> [mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
> [mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
> [mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
> [mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
> [mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
> [mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
> [mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
> [mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
> [mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
> [mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
> [mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
> [mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
> [mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
> [mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
> [mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
> [mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
> [mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
> [mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
> [mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
> [mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
> [mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
> [mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
> [mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
> [mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
> [mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
> [mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
> [mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
> [mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
> [mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
> [mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
> [mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
> [mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
> [mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
> [mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
> [mongod output] [mongod output]note: noprealloc may hurt performance in many applications
> [mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
> [mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
> [mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
> [mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
> [mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
> [mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
> [mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
> [mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
> [mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
> [mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
> [mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
> [mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
> [mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
> [mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
> [mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
> [mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
> [mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
> [mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
> [mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
> [mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
> [mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
> [mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
> [mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
> [mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
> [mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
> [mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
> [mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
> [mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
> [mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
> [mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
> [mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
> [mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
> [mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
> [mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
> [mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
> [mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
> [mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
> [mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
> [mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
> [mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
> [mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
> [mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
> [mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
> [mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
> [mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
> [mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
> [mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
> [mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
> [mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
> [mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
> [mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
> [mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
> [mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
> [mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
> [mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
> [mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
> [mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
> [mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
> [mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
> [mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
> [mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
> [mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
> [mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
> [mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
> [mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
> [mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
> [mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
> [mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done. scanned 0 total records. 0 secs
> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
> [mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
> [mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
> [mongod output] [mongoimport error]failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
> [mongoimport error] fatal error: MSpanList_Insert
> [mongoimport error]
> [mongoimport error] runtime stack:
> [mongoimport error] runtime.throw(0x468d0b)
> [mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
> [mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
> [mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
> [mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
> [mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
> [mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
> [mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
> [mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
> [mongoimport error] MCentral_Grow(0x48ea38, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
> [mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
> [mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
> [mongoimport error] runtime.mcacheRefill_m()
> [mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
> [mongoimport error] runtime.onM(0x3917f8)
> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
> [mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
> [mongoimport error] runtime.newobject(0x2f1120, 0x600000)
> [mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
> [mongoimport error] runtime.newG(0x29c0a)
> [mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
> [mongoimport error] allocg(0x478540)
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
> [mongoimport error] runtime.malg(0x8000, 0x4785e0)
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
> [mongoimport error] runtime.mpreinit(0x478bc0)
> [mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
> [mongoimport error] mcommoninit(0x478bc0)
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
> [mongoimport error] runtime.schedinit()
> [mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
> [mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
> [mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
> [mongoimport error]
> 22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
> java.io.IOException: Could not start process: <EOF>
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
>        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
>        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
>        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
>        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>        at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
>        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
>        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
>        at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
>        at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
>        at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
>        at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
>        at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
>        at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
>        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
>        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
>        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>        at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
>        at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
>        at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
>        at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>        at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
>        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
>        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
>        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
>        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
> 2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
> [mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
> [mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
> [mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
> [mongod output] ]]></system-out>
>  </testcase>
> </testsuite>
> 
> 
> 
> 
> 
>> On Apr 17, 2018, at 13:15, Timothy Farkas <tf...@mapr.com> wrote:
>> 
>> Hi Charles,
>> 
>> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the sub module you cd'ed into.
>> 
>> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
>> 
>> Thanks,
>> Tim
>> 
>> 
>> 
>> 
>> 
>> ________________________________
>> From: Charles Givre <cg...@gmail.com>
>> Sent: Tuesday, April 17, 2018 6:22:13 AM
>> To: dev@drill.apache.org
>> Subject: Drill Tests Fail on Local Machine
>> 
>> All,
>> I’m trying to test some things in Drill and the unit tests for the MongoDB storage plugin are failing.  I would like to know:
>> 1.  If it is possible to only run the unit tests for the components I am working on OR
>> 2.  If it is possible to skip the MongoDB tests, OR
>> 3.  If neither is possible, does MongoDB need to be installed on my local machine in order for this test to pass?
>> 
>> Any advice would be appreciated.
>> Thanks,
>> — C
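
[For reference, the per-module workflow described above, and the option of skipping tests outright, correspond to standard Maven invocations. The contrib/storage-mongo path is taken from the surefire report path in the error output; the exact commands are a sketch, not quoted from the thread.]

```shell
# Run only the mongo storage plugin's unit tests, from the Drill source root:
cd contrib/storage-mongo
mvn clean install

# Equivalent, without changing directories, using Maven's project-list flag:
mvn clean install -pl contrib/storage-mongo

# Build everything while skipping all surefire tests:
mvn clean install -DskipTests
```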


Re: Drill Tests Fail on Local Machine

Posted by Timothy Farkas <tf...@mapr.com>.
Hi Charles,

According to this thread https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/167 there are a couple of possibilities. Are you running 32-bit Java, or are you developing on Windows?

Thanks,
Tim

________________________________
From: Charles Givre <cg...@gmail.com>
Sent: Tuesday, April 17, 2018 10:25:17 AM
To: dev@drill.apache.org
Subject: Re: Drill Tests Fail on Local Machine

Hi Tim,
Here’s the error:

Results :

Tests in error:
  MongoTestSuit.initMongo:226 »   Error while starting shrded cluster.

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 39.848 s
[INFO] Finished at: 2018-04-17T13:24:31-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project drill-mongo-storage: There are test failures.
[ERROR]
[ERROR] Please refer to /Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException



-------------------------------------------------------------------------------
Test set: org.apache.drill.exec.store.mongo.MongoTestSuit
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: -868,676.297 sec <<< FAILURE! - in org.apache.drill.exec.store.mongo.MongoTestSuit
org.apache.drill.exec.store.mongo.MongoTestSuit  Time elapsed: -868,676.298 sec  <<< ERROR!
java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)




<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298" tests="1" errors="1" skipped="0" failures="0">
  <properties>
    <property name="os.detected.version" value="10.13"/>
    <property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
    <property name="sun.boot.library.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib"/>
    <property name="java.vm.version" value="25.65-b01"/>
    <property name="gopherProxySet" value="false"/>
    <property name="java.vm.vendor" value="Oracle Corporation"/>
    <property name="maven.multiModuleProjectDirectory" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
    <property name="java.vendor.url" value="http://java.oracle.com/"/>
    <property name="os.detected.version.minor" value="13"/>
    <property name="path.separator" value=":"/>
    <property name="guice.disable.misplaced.annotation.check" value="true"/>
    <property name="java.vm.name" value="Java HotSpot(TM) 64-Bit Server VM"/>
    <property name="file.encoding.pkg" value="sun.io"/>
    <property name="user.country" value="US"/>
    <property name="sun.java.launcher" value="SUN_STANDARD"/>
    <property name="sun.os.patch.level" value="unknown"/>
    <property name="java.vm.specification.name" value="Java Virtual Machine Specification"/>
    <property name="user.dir" value="/Users/cgivre/github/drill-dev/drill-log/drill"/>
    <property name="java.runtime.version" value="1.8.0_65-b17"/>
    <property name="java.awt.graphicsenv" value="sun.awt.CGraphicsEnvironment"/>
    <property name="java.endorsed.dirs" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/endorsed"/>
    <property name="os.arch" value="x86_64"/>
    <property name="java.io.tmpdir" value="/var/folders/r9/mfkwmvbd4r17w2v_g6dr964c0000gp/T/"/>
    <property name="line.separator" value="&#10;"/>
    <property name="java.vm.specification.vendor" value="Oracle Corporation"/>
    <property name="os.detected.version.major" value="10"/>
    <property name="os.name" value="Mac OS X"/>
    <property name="os.detected.classifier" value="osx-x86_64"/>
    <property name="classworlds.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/bin/m2.conf"/>
    <property name="sun.jnu.encoding" value="UTF-8"/>
    <property name="java.library.path" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:."/>
    <property name="os.detected.arch" value="x86_64"/>
    <property name="maven.conf" value="/usr/local/Cellar/maven/3.5.3/libexec/conf"/>
    <property name="java.specification.name" value="Java Platform API Specification"/>
    <property name="java.class.version" value="52.0"/>
    <property name="sun.management.compiler" value="HotSpot 64-Bit Tiered Compilers"/>
    <property name="os.detected.name" value="osx"/>
    <property name="os.version" value="10.13.4"/>
    <property name="http.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
    <property name="library.jansi.path" value="/usr/local/Cellar/maven/3.5.3/libexec/lib/jansi-native"/>
    <property name="user.home" value="/Users/cgivre"/>
    <property name="user.timezone" value="America/New_York"/>
    <property name="java.awt.printerjob" value="sun.lwawt.macosx.CPrinterJob"/>
    <property name="java.specification.version" value="1.8"/>
    <property name="file.encoding" value="UTF-8"/>
    <property name="user.name" value="cgivre"/>
    <property name="java.class.path" value="/usr/local/Cellar/maven/3.5.3/libexec/boot/plexus-classworlds-2.5.2.jar"/>
    <property name="java.vm.specification.version" value="1.8"/>
    <property name="sun.arch.data.model" value="64"/>
    <property name="java.home" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre"/>
    <property name="sun.java.command" value="org.codehaus.plexus.classworlds.launcher.Launcher clean package"/>
    <property name="java.specification.vendor" value="Oracle Corporation"/>
    <property name="user.language" value="en"/>
    <property name="awt.toolkit" value="sun.lwawt.macosx.LWCToolkit"/>
    <property name="java.vm.info" value="mixed mode"/>
    <property name="java.version" value="1.8.0_65"/>
    <property name="java.ext.dirs" value="/Users/cgivre/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java"/>
    <property name="sun.boot.class.path" value="/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/sunrsasign.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/classes"/>
    <property name="java.vendor" value="Oracle Corporation"/>
    <property name="maven.home" value="/usr/local/Cellar/maven/3.5.3/libexec"/>
    <property name="file.separator" value="/"/>
    <property name="java.vendor.url.bug" value="http://bugreport.sun.com/bugreport/"/>
    <property name="sun.cpu.endian" value="little"/>
    <property name="sun.io.unicode.encoding" value="UnicodeBig"/>
    <property name="sun.font.fontmanager" value="sun.font.CFontManager"/>
    <property name="socksNonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
    <property name="ftp.nonProxyHosts" value="local|*.local|169.254/16|*.169.254/16"/>
    <property name="sun.cpu.isalist" value=""/>
  </properties>
  <testcase name="org.apache.drill.exec.store.mongo.MongoTestSuit" classname="org.apache.drill.exec.store.mongo.MongoTestSuit" time="-868,676.298">
    <error message="Could not start process: &lt;EOF&gt;" type="java.io.IOException"><![CDATA[java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79)
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114)
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44)
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35)
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62)
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47)
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231)
]]></error>
    <system-out><![CDATA[[mongod output]note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:31.860-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48886 port=27023 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:31.861-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27023 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:31.863-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:32.037-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:32.057-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:32.057-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-77cbd104-97d9-449c-bbce-e02f76c618c7/diagnostic.data'
[mongod output] 2018-04-15T22:11:32.057-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:32.076-0400 I NETWORK  [initandlisten] waiting for connections on port 27023
[mongod output] [mongod output]note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:32.283-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48887 port=27024 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:32.284-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27024 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:32.294-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:32.460-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:32.495-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:32.495-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ee6c62de-3984-42fd-8b03-1816638d1741/diagnostic.data'
[mongod output] 2018-04-15T22:11:32.495-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:32.537-0400 I NETWORK  [initandlisten] waiting for connections on port 27024
[mongod output] [mongod output]note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:32.757-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48888 port=27025 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:32.758-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27025 }, replication: { replSet: "shard_2_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:32.759-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:32.908-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:32.941-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:32.941-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:32.941-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-ac320a11-ecec-4117-9efe-5d1daeb015e5/diagnostic.data'
[mongod output] 2018-04-15T22:11:32.976-0400 I NETWORK  [initandlisten] waiting for connections on port 27025
[mongod output] 2018-04-15T22:11:34.142-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49154 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.172-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49155 #2 (2 connections now open)
[mongod output] 2018-04-15T22:11:34.201-0400 I REPL     [conn2] replSetInitiate admin command received from client
[mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49156 #3 (3 connections now open)
[mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [conn3] end connection 127.0.0.1:49156 (2 connections now open)
[mongod output] 2018-04-15T22:11:34.203-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49157 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49157 (0 connections now open)
[mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49158 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.204-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
[mongod output] 2018-04-15T22:11:34.204-0400 I NETWORK  [conn1] end connection 127.0.0.1:49158 (0 connections now open)
[mongod output] 2018-04-15T22:11:34.210-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49161 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.210-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
[mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49164 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:34.221-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49167 #4 (3 connections now open)
[mongod output] 2018-04-15T22:11:34.221-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:34.221-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:34.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49170 #5 (4 connections now open)
[mongod output] 2018-04-15T22:11:34.227-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:34.241-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:34.386-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] This node is localhost:27023 in the config
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [conn2] Starting replication applier threads
[mongod output] 2018-04-15T22:11:34.419-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_2_replicas", members: [ { _id: 0, host: "localhost:27023" }, { _id: 1, host: "localhost:27024" }, { _id: 2, host: "localhost:27025" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 42 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 217ms
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP
[mongod output] 2018-04-15T22:11:34.419-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
[mongod output] 2018-04-15T22:11:34.420-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49172 #6 (5 connections now open)
[mongod output] 2018-04-15T22:11:36.225-0400 I NETWORK  [conn6] end connection 127.0.0.1:49172 (4 connections now open)
[mongod output] 2018-04-15T22:11:36.226-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49173 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49173 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49174 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.227-0400 I NETWORK  [conn3] end connection 127.0.0.1:49174 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49175 #7 (5 connections now open)
[mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [conn7] end connection 127.0.0.1:49175 (4 connections now open)
[mongod output] 2018-04-15T22:11:36.235-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49176 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49176 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49177 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.236-0400 I NETWORK  [conn4] end connection 127.0.0.1:49177 (1 connection now open)
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
[mongod output] 2018-04-15T22:11:36.259-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] This node is localhost:27024 in the config
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:36.259-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:36.265-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49180 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.265-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:36.265-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state STARTUP
[mongod output] 2018-04-15T22:11:36.273-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49183 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:36.273-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27024
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
[mongod output] 2018-04-15T22:11:36.290-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_2_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27024", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27025", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] This node is localhost:27025 in the config
[mongod output] 2018-04-15T22:11:36.290-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:36.291-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state STARTUP2
[mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.267-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:37.278-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.299-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:37.322-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.416-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:37.455-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:37.455-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49184 #8 (5 connections now open)
[mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:37.462-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:37.462-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:37.463-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:37.464-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
[mongod output] 2018-04-15T22:11:37.471-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:37.473-0400 I NETWORK  [conn8] end connection 127.0.0.1:49184 (4 connections now open)
[mongod output] 2018-04-15T22:11:37.473-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:37.475-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:37.494-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:37.494-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49185 #9 (5 connections now open)
[mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:37.499-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:37.499-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:37.500-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:37.501-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:34:1)
[mongod output] 2018-04-15T22:11:37.506-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:37.508-0400 I NETWORK  [conn9] end connection 127.0.0.1:49185 (4 connections now open)
[mongod output] 2018-04-15T22:11:37.508-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:37.509-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:38.272-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:38.272-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:38.273-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:38.306-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:38.307-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:38.307-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27024 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:39.424-0400 I REPL     [ReplicationExecutor] Member localhost:27025 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn2] end connection 127.0.0.1:49155 (3 connections now open)
[mongod output] 2018-04-15T22:11:39.433-0400 I NETWORK  [conn1] end connection 127.0.0.1:49154 (2 connections now open)
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:39.637-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48891 port=27020 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:39.638-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27020 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:39.639-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:39.753-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:39.772-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:39.772-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:39.772-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-35c4b812-4723-4d85-ae63-6b7ab22802c7/diagnostic.data'
[mongod output] 2018-04-15T22:11:39.791-0400 I NETWORK  [initandlisten] waiting for connections on port 27020
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:40.009-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48892 port=27021 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:40.010-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27021 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:40.011-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:40.127-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:40.148-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:40.148-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:40.148-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-9d1d694f-317c-4ba6-b7c9-e43f86eff51e/diagnostic.data'
[mongod output] 2018-04-15T22:11:40.167-0400 I NETWORK  [initandlisten] waiting for connections on port 27021
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:40.375-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48893 port=27022 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:40.376-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27022 }, replication: { replSet: "shard_1_replicas" }, security: { authorization: "disabled" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:40.377-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:40.487-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[mongod output] 2018-04-15T22:11:40.508-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[mongod output] 2018-04-15T22:11:40.509-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-b558d489-b835-408b-9b0d-89e804ad9113/diagnostic.data'
[mongod output] 2018-04-15T22:11:40.509-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:40.530-0400 I NETWORK  [initandlisten] waiting for connections on port 27022
[mongod output] 2018-04-15T22:11:41.539-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49196 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.543-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49197 #2 (2 connections now open)
[mongod output] 2018-04-15T22:11:41.547-0400 I REPL     [conn2] replSetInitiate admin command received from client
[mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49198 #3 (3 connections now open)
[mongod output] 2018-04-15T22:11:41.548-0400 I NETWORK  [conn3] end connection 127.0.0.1:49198 (2 connections now open)
[mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49199 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [conn1] end connection 127.0.0.1:49199 (0 connections now open)
[mongod output] 2018-04-15T22:11:41.549-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49200 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.550-0400 I REPL     [conn2] replSetInitiate config object with 3 members parses ok
[mongod output] 2018-04-15T22:11:41.550-0400 I NETWORK  [conn1] end connection 127.0.0.1:49200 (0 connections now open)
[mongod output] 2018-04-15T22:11:41.555-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49203 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.556-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
[mongod output] 2018-04-15T22:11:41.564-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49206 #2 (1 connection now open)
[mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:41.565-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49209 #4 (3 connections now open)
[mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:41.565-0400 I REPL     [conn2] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:41.565-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
[mongod output] 2018-04-15T22:11:41.571-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49212 #5 (4 connections now open)
[mongod output] 2018-04-15T22:11:41.571-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27020
[mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:41.575-0400 I STORAGE  [conn2] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:41.650-0400 I REPL     [conn2] ******
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] This node is localhost:27020 in the config
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [conn2] Starting replication applier threads
[mongod output] 2018-04-15T22:11:41.669-0400 I COMMAND  [conn2] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "shard_1_replicas", members: [ { _id: 0, host: "localhost:27020" }, { _id: 1, host: "localhost:27021" }, { _id: 2, host: "localhost:27022" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 32 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 121ms
[mongod output] 2018-04-15T22:11:41.669-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP
[mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP
[mongod output] 2018-04-15T22:11:41.670-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:43.575-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49214 #6 (5 connections now open)
[mongod output] 2018-04-15T22:11:43.576-0400 I NETWORK  [conn6] end connection 127.0.0.1:49214 (4 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49215 #7 (5 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn7] end connection 127.0.0.1:49215 (4 connections now open)
[mongod output] 2018-04-15T22:11:43.577-0400 I NETWORK  [conn3] end connection 127.0.0.1:49216 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49217 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49218 #3 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn4] end connection 127.0.0.1:49217 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.578-0400 I NETWORK  [conn3] end connection 127.0.0.1:49218 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49219 #4 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.579-0400 I NETWORK  [conn4] end connection 127.0.0.1:49219 (1 connection now open)
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [replExecDBWorker-1] Starting replication applier threads
[mongod output] 2018-04-15T22:11:43.604-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] This node is localhost:27021 in the config
[mongod output] 2018-04-15T22:11:43.604-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:43.605-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
[mongod output] 2018-04-15T22:11:43.609-0400 W REPL     [rsSync] did not receive a valid config yet
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "shard_1_replicas", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "localhost:27020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] This node is localhost:27022 in the config
[mongod output] 2018-04-15T22:11:43.609-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[mongod output] 2018-04-15T22:11:43.610-0400 I REPL     [ReplicationExecutor] Member localhost:27020 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49222 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.617-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:43.617-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49225 #5 (2 connections now open)
[mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state STARTUP2
[mongod output] 2018-04-15T22:11:43.618-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27021
[mongod output] 2018-04-15T22:11:43.618-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state STARTUP2
[mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.609-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.617-0400 I REPL     [rsSync] creating replication oplog of size: 192MB...
[mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:44.620-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:44.633-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
[mongod output] 2018-04-15T22:11:44.710-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
[mongod output] 2018-04-15T22:11:44.772-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
[mongod output] 2018-04-15T22:11:44.775-0400 I NETWORK  [conn2] end connection 127.0.0.1:49164 (1 connection now open)
[mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
[mongod output] 2018-04-15T22:11:44.810-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
[mongod output] 2018-04-15T22:11:44.810-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.812-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:44.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49230 #6 (2 connections now open)
[mongod output] 2018-04-15T22:11:44.818-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:44.822-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49233 #7 (3 connections now open)
[mongod output] 2018-04-15T22:11:44.823-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27025
[mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] ******
[mongod output] 2018-04-15T22:11:44.833-0400 I REPL     [rsSync] initial sync pending
[mongod output] 2018-04-15T22:11:44.842-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
[mongod output] 2018-04-15T22:11:44.842-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49234 #8 (5 connections now open)
[mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:44.850-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:44.850-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:44.851-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:44.852-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
[mongod output] 2018-04-15T22:11:44.858-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:44.859-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27020
[mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49235 #9 (6 connections now open)
[mongod output] 2018-04-15T22:11:44.860-0400 I NETWORK  [conn8] end connection 127.0.0.1:49234 (5 connections now open)
[mongod output] 2018-04-15T22:11:44.860-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:44.862-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync drop all databases
[mongod output] 2018-04-15T22:11:44.867-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync clone all databases
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[mongod output] 2018-04-15T22:11:44.867-0400 I REPL     [rsSync] oplog sync 1 of 3
[mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 2 of 3
[mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] initial sync building indexes
[mongod output] 2018-04-15T22:11:44.868-0400 I REPL     [rsSync] oplog sync 3 of 3
[mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] initial sync finishing up
[mongod output] 2018-04-15T22:11:44.869-0400 I REPL     [rsSync] set minValid=(term: -1, timestamp: Apr 15 22:11:41:1)
[mongod output] 2018-04-15T22:11:44.874-0400 I REPL     [rsSync] initial sync done
[mongod output] 2018-04-15T22:11:44.875-0400 I NETWORK  [conn9] end connection 127.0.0.1:49235 (4 connections now open)
[mongod output] 2018-04-15T22:11:44.875-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[mongod output] 2018-04-15T22:11:44.876-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[mongod output] 2018-04-15T22:11:45.480-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
[mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:45.616-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:45.616-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:45.625-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[mongod output] 2018-04-15T22:11:45.625-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:45.626-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27022 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:46.672-0400 I REPL     [ReplicationExecutor] Member localhost:27021 is now in state SECONDARY
[mongod output] 2018-04-15T22:11:46.675-0400 I NETWORK  [conn2] end connection 127.0.0.1:49197 (3 connections now open)
[mongod output] 2018-04-15T22:11:46.676-0400 I NETWORK  [conn1] end connection 127.0.0.1:49196 (2 connections now open)
[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2018-04-15T22:11:46.869-0400 I CONTROL  [initandlisten] MongoDB starting : pid=48894 port=27019 dbpath=/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3 master=1 64-bit host=Charles-MacBook-Pro.local
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] db version v3.2.1
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] allocator: system
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] modules: none
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] build environment:
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     distarch: x86_64
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[mongod output] 2018-04-15T22:11:46.870-0400 I CONTROL  [initandlisten] options: { net: { bindIp: "localhost", http: { enabled: false }, port: 27019 }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3", mmapv1: { preallocDataFiles: false, smallFiles: true }, syncPeriodSecs: 0.0 } }
[mongod output] 2018-04-15T22:11:46.872-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=0,log_size=2GB),statistics_log=(wait=0),
[mongod output] 2018-04-15T22:11:46.977-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] ******
[mongod output] 2018-04-15T22:11:46.998-0400 I REPL     [initandlisten] creating replication oplog of size: 5MB...
[mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.$main
[mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[mongod output] 2018-04-15T22:11:47.008-0400 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
[mongod output] 2018-04-15T22:11:47.069-0400 I REPL     [initandlisten] ******
[mongod output] 2018-04-15T22:11:47.069-0400 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[mongod output] 2018-04-15T22:11:47.069-0400 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/Users/cgivre/github/drill-dev/drill-log/drill/contrib/storage-mongo/target/embedmongo-db-fdcabecb-e7a2-4389-9fc8-13518a39f7b3/diagnostic.data'
[mongod output] 2018-04-15T22:11:47.099-0400 I NETWORK  [initandlisten] waiting for connections on port 27019
[mongod output] 2018-04-15T22:11:47.219-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49244 #1 (1 connection now open)
[mongod output] 2018-04-15T22:11:47.220-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49245 #2 (2 connections now open)
[mongod output] 2018-04-15T22:11:47.221-0400 I COMMAND  [conn1] CMD fsync: sync:1 lock:0
[mongod output] 2018-04-15T22:11:47.290-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49246 #3 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49247 #4 (4 connections now open)
[mongod output] 2018-04-15T22:11:47.337-0400 I SHARDING [conn4] remote client 127.0.0.1:49247 initialized this host as shard config
[mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
[mongod output] 2018-04-15T22:11:47.414-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.419-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
[mongod output] 2018-04-15T22:11:47.432-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.436-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
[mongod output] 2018-04-15T22:11:47.450-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.455-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
[mongod output] 2018-04-15T22:11:47.487-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.491-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[mongod output] 2018-04-15T22:11:47.508-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.514-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
[mongod output] 2018-04-15T22:11:47.527-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.532-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
[mongod output] 2018-04-15T22:11:47.544-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.549-0400 I INDEX    [conn4] build index done.  scanned 1 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
[mongod output] 2018-04-15T22:11:47.581-0400 I INDEX    [conn4]   building index using bulk method
[mongod output] 2018-04-15T22:11:47.586-0400 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:47.587-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49248 #5 (5 connections now open)
[mongod output] 2018-04-15T22:11:47.593-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49251 #8 (4 connections now open)
[mongod output] 2018-04-15T22:11:47.594-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49252 #10 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.601-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49255 #11 (4 connections now open)
[mongod output] 2018-04-15T22:11:47.603-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49256 #6 (6 connections now open)
[mongod output] 2018-04-15T22:11:47.644-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49257 #6 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.645-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49258 #10 (3 connections now open)
[mongod output] 2018-04-15T22:11:47.646-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49259 #6 (3 connections now open)
[mongod output] 2018-04-15T22:11:48.279-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
[mongod output] 2018-04-15T22:11:48.315-0400 I REPL     [ReplicationExecutor] Member localhost:27023 is now in state PRIMARY
[mongod output] 2018-04-15T22:11:48.332-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:48.332-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49260 #12 (5 connections now open)
[mongod output] 2018-04-15T22:11:48.333-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
[mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [conn12] end connection 127.0.0.1:49260 (4 connections now open)
[mongod output] 2018-04-15T22:11:48.333-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49261 #13 (5 connections now open)
[mongod output] 2018-04-15T22:11:48.339-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49264 #14 (6 connections now open)
[mongod output] 2018-04-15T22:11:48.340-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:48.371-0400 I REPL     [ReplicationExecutor] syncing from: localhost:27023
[mongod output] 2018-04-15T22:11:48.371-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49265 #15 (7 connections now open)
[mongod output] 2018-04-15T22:11:48.372-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to localhost:27023
[mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [conn15] end connection 127.0.0.1:49265 (6 connections now open)
[mongod output] 2018-04-15T22:11:48.372-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49266 #16 (7 connections now open)
[mongod output] 2018-04-15T22:11:48.378-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49269 #17 (8 connections now open)
[mongod output] 2018-04-15T22:11:48.379-0400 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:27023
[mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
[mongod output] 2018-04-15T22:11:52.447-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
[mongod output] 2018-04-15T22:11:52.476-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
[mongod output] 2018-04-15T22:11:52.479-0400 I NETWORK  [conn2] end connection 127.0.0.1:49206 (2 connections now open)
[mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
[mongod output] 2018-04-15T22:11:52.498-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
[mongod output] 2018-04-15T22:11:52.498-0400 W REPL     [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
[mongod output] 2018-04-15T22:11:52.507-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49281 #7 (3 connections now open)
[mongod output] 2018-04-15T22:11:52.507-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:52.511-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49284 #8 (4 connections now open)
[mongod output] 2018-04-15T22:11:52.512-0400 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:27022
[mongod output] 2018-04-15T22:11:52.742-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
[mongod output] 2018-04-15T22:11:53.236-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49288 #11 (4 connections now open)
[mongod output] 2018-04-15T22:11:53.254-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49291 #12 (5 connections now open)
[mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [conn12] remote client 127.0.0.1:49291 initialized this host as shard shard_1_replicas
[mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
[mongod output] 2018-04-15T22:11:53.255-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
[mongod output] 2018-04-15T22:11:53.262-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49292 #7 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
[mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
[mongod output] 2018-04-15T22:11:53.263-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
[mongod output] 2018-04-15T22:11:53.269-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49295 #18 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [conn18] remote client 127.0.0.1:49295 initialized this host as shard shard_2_replicas
[mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
[mongod output] 2018-04-15T22:11:53.269-0400 I SHARDING [ShardingState initialization] Updating config server connection string to: localhost:27019
[mongod output] 2018-04-15T22:11:53.275-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49296 #8 (8 connections now open)
[mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_2_replicas/localhost:27023,localhost:27024,localhost:27025
[mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
[mongod output] 2018-04-15T22:11:53.276-0400 I NETWORK  [ShardingState initialization] Starting new replica set monitor for shard_1_replicas/localhost:27020,localhost:27021,localhost:27022
[mongod output] 2018-04-15T22:11:53.312-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49297 #13 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13] build index on: employee.empinfo properties: { v: 1, key: { employee_id: 1 }, name: "employee_id_1", ns: "employee.empinfo" }
[mongod output] 2018-04-15T22:11:53.342-0400 I INDEX    [conn13]          building index using bulk method
[mongod output] 2018-04-15T22:11:53.346-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.348-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49298 #14 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.390-0400 I SHARDING [conn13] remotely refreshing metadata for employee.empinfo with requested shard version 1|0||5ad40669e78315f611b9a277, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
[mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection employee.empinfo was previously unsharded, new metadata loaded with shard version 1|0||5ad40669e78315f611b9a277
[mongod output] 2018-04-15T22:11:53.391-0400 I SHARDING [conn13] collection version was loaded at version 1|0||5ad40669e78315f611b9a277, took 0ms
[mongod output] 2018-04-15T22:11:53.409-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49301 #9 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.434-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49304 #19 (10 connections now open)
[mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
[mongod output] 2018-04-15T22:11:53.464-0400 I INDEX    [conn19]          building index using bulk method
[mongod output] 2018-04-15T22:11:53.472-0400 I INDEX    [conn19] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2] build index on: donuts.donuts properties: { v: 1, unique: true, key: { id: 1 }, name: "id", ns: "donuts.donuts" }
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
[mongod output] 2018-04-15T22:11:53.513-0400 I INDEX    [repl writer worker 2]    building index using bulk method
[mongod output] 2018-04-15T22:11:53.522-0400 I INDEX    [repl writer worker 2] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.530-0400 I INDEX    [repl writer worker 2] build index done.  scanned 0 total records. 0 secs
[mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13] build index on: employee.empty properties: { v: 1, unique: true, key: { field_2: 1 }, name: "field_2", ns: "employee.empty" }
[mongod output] 2018-04-15T22:11:53.533-0400 I INDEX    [conn13]          building index using bulk method
[mongod output] 2018-04-15T22:11:53.541-0400 I INDEX    [conn13] build index done.  scanned 0 total records. 0 secs
[mongoimport error] failed MSpanList_Insert 0x604000 0x2d55416193820 0x0
[mongoimport error] fatal error: MSpanList_Insert
[mongoimport error]
[mongoimport error] runtime stack:
[mongoimport error] runtime.throw(0x468d0b)
[mongoimport error]      /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7ffeefbff0d0 sp=0x7ffeefbff0a0
[mongoimport error] runtime.MSpanList_Insert(0x48a0c8, 0x604000)
[mongoimport error]      /usr/local/go/src/runtime/mheap.c:692 +0x8f fp=0x7ffeefbff0f8 sp=0x7ffeefbff0d0
[mongoimport error] MHeap_FreeSpanLocked(0x486cc0, 0x604000, 0x100)
[mongoimport error]      /usr/local/go/src/runtime/mheap.c:583 +0x163 fp=0x7ffeefbff138 sp=0x7ffeefbff0f8
[mongoimport error] MHeap_Grow(0x486cc0, 0x8, 0x0)
[mongoimport error]      /usr/local/go/src/runtime/mheap.c:420 +0x1a8 fp=0x7ffeefbff178 sp=0x7ffeefbff138
[mongoimport error] MHeap_AllocSpanLocked(0x486cc0, 0x1, 0x0)
[mongoimport error]      /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7ffeefbff1b8 sp=0x7ffeefbff178
[mongoimport error] mheap_alloc(0x486cc0, 0x1, 0x12, 0x0)
[mongoimport error]      /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7ffeefbff1e0 sp=0x7ffeefbff1b8
[mongoimport error] runtime.MHeap_Alloc(0x486cc0, 0x1, 0x10000000012, 0xdbc9)
[mongoimport error]      /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7ffeefbff218 sp=0x7ffeefbff1e0
[mongoimport error] MCentral_Grow(0x48ea38, 0x0)
[mongoimport error]      /usr/local/go/src/runtime/mcentral.c:197 +0x8b fp=0x7ffeefbff280 sp=0x7ffeefbff218
[mongoimport error] runtime.MCentral_CacheSpan(0x48ea38, 0x0)
[mongoimport error]      /usr/local/go/src/runtime/mcentral.c:85 +0x167 fp=0x7ffeefbff2b8 sp=0x7ffeefbff280
[mongoimport error] runtime.MCache_Refill(0x600000, 0x12, 0x0)
[mongoimport error]      /usr/local/go/src/runtime/mcache.c:90 +0xa0 fp=0x7ffeefbff2e0 sp=0x7ffeefbff2b8
[mongoimport error] runtime.mcacheRefill_m()
[mongoimport error]      /usr/local/go/src/runtime/malloc.c:368 +0x57 fp=0x7ffeefbff300 sp=0x7ffeefbff2e0
[mongoimport error] runtime.onM(0x3917f8)
[mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:273 +0x9a fp=0x7ffeefbff308 sp=0x7ffeefbff300
[mongoimport error] runtime.mallocgc(0x120, 0x2f1120, 0x0, 0x0)
[mongoimport error]      /usr/local/go/src/runtime/malloc.go:178 +0x849 fp=0x7ffeefbff3b8 sp=0x7ffeefbff308
[mongoimport error] runtime.newobject(0x2f1120, 0x600000)
[mongoimport error]      /usr/local/go/src/runtime/malloc.go:353 +0x49 fp=0x7ffeefbff3e0 sp=0x7ffeefbff3b8
[mongoimport error] runtime.newG(0x29c0a)
[mongoimport error]      /usr/local/go/src/runtime/proc.go:233 +0x2a fp=0x7ffeefbff3f8 sp=0x7ffeefbff3e0
[mongoimport error] allocg(0x478540)
[mongoimport error]      /usr/local/go/src/runtime/proc.c:925 +0x1f fp=0x7ffeefbff408 sp=0x7ffeefbff3f8
[mongoimport error] runtime.malg(0x8000, 0x4785e0)
[mongoimport error]      /usr/local/go/src/runtime/proc.c:2106 +0x1f fp=0x7ffeefbff438 sp=0x7ffeefbff408
[mongoimport error] runtime.mpreinit(0x478bc0)
[mongoimport error]      /usr/local/go/src/runtime/os_darwin.c:137 +0x27 fp=0x7ffeefbff450 sp=0x7ffeefbff438
[mongoimport error] mcommoninit(0x478bc0)
[mongoimport error]      /usr/local/go/src/runtime/proc.c:201 +0xc9 fp=0x7ffeefbff478 sp=0x7ffeefbff450
[mongoimport error] runtime.schedinit()
[mongoimport error]      /usr/local/go/src/runtime/proc.c:138 +0x55 fp=0x7ffeefbff4a0 sp=0x7ffeefbff478
[mongoimport error] runtime.rt0_go(0x7ffeefbff4d0, 0xd, 0x7ffeefbff4d0, 0x0, 0xd, 0x7ffeefbff730, 0x7ffeefbff7bb, 0x7ffeefbff7c2, 0x7ffeefbff7c8, 0x7ffeefbff7cd, ...)
[mongoimport error]      /usr/local/go/src/runtime/asm_amd64.s:95 +0x116 fp=0x7ffeefbff4a8 sp=0x7ffeefbff4a0
[mongoimport error]
22:11:53.659 [main] ERROR d.f.e.p.runtime.AbstractProcess - failed to call onAfterProcessStart()
java.io.IOException: Could not start process: <EOF>
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.onAfterProcessStart(AbstractMongoProcess.java:79) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
        at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:114) ~[de.flapdoodle.embed.process-1.50.2.jar:na]
        at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
        at de.flapdoodle.embed.mongo.MongoImportProcess.<init>(MongoImportProcess.java:40) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:44) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
        at de.flapdoodle.embed.mongo.MongoImportExecutable.start(MongoImportExecutable.java:35) [de.flapdoodle.embed.mongo-1.50.5.jar:na]
        at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101) [de.flapdoodle.embed.process-1.50.2.jar:na]
        at org.apache.drill.exec.store.mongo.TestTableGenerator.generateTable(TestTableGenerator.java:62) [test-classes/:na]
        at org.apache.drill.exec.store.mongo.TestTableGenerator.importData(TestTableGenerator.java:47) [test-classes/:na]
        at org.apache.drill.exec.store.mongo.MongoTestSuit.initMongo(MongoTestSuit.java:231) [test-classes/:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
        at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) [junit-4.12.jar:4.12]
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) [junit-4.12.jar:4.12]
        at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:44) [jmockit-1.3.jar:na]
        at mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29) [jmockit-1.3.jar:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_65]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_65]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_65]
        at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_65]
        at mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95) [jmockit-1.3.jar:na]
        at mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76) [jmockit-1.3.jar:na]
        at mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) [jmockit-1.3.jar:na]
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java) [junit-4.12.jar:4.12]
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) [junit-4.12.jar:4.12]
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) [junit-4.12.jar:4.12]
        at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
        at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.12.jar:4.12]
        at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.12.jar:4.12]
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
        at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
        at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
        at org.junit.runner.JUnitCore.run(JUnitCore.java:115) [junit-4.12.jar:4.12]
        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113) [surefire-junit47-2.17.jar:2.17]
        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94) [surefire-junit47-2.17.jar:2.17]
        at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58) [surefire-junit47-2.17.jar:2.17]
        at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134) [surefire-junit47-2.17.jar:2.17]
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) [surefire-booter-2.17.jar:2.17]
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) [surefire-booter-2.17.jar:2.17]
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) [surefire-booter-2.17.jar:2.17]
2018-04-15T22:11:53.664-0400 I NETWORK  [conn19] end connection 127.0.0.1:49304 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn9] end connection 127.0.0.1:49301 (8 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49256 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn5] end connection 127.0.0.1:49248 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn13] end connection 127.0.0.1:49297 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn14] end connection 127.0.0.1:49298 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn18] end connection 127.0.0.1:49295 (9 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49255 (7 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49257 (3 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn4] end connection 127.0.0.1:49247 (5 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn8] end connection 127.0.0.1:49251 (3 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49252 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn6] end connection 127.0.0.1:49259 (2 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn12] end connection 127.0.0.1:49291 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn3] end connection 127.0.0.1:49246 (4 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn11] end connection 127.0.0.1:49288 (6 connections now open)
[mongod output] 2018-04-15T22:11:53.665-0400 I NETWORK  [conn1] end connection 127.0.0.1:49244 (4 connections now open)
[mongod output] 2018-04-15T22:11:53.664-0400 I NETWORK  [conn10] end connection 127.0.0.1:49258 (5 connections now open)
[mongod output] 2018-04-15T22:11:53.666-0400 I NETWORK  [conn2] end connection 127.0.0.1:49245 (2 connections now open)
[mongod output] ]]></system-out>
  </testcase>
</testsuite>





> On Apr 17, 2018, at 13:15, Timothy Farkas <tf...@mapr.com> wrote:
>
> Hi Charles,
>
> You can cd into the submodule that you want to test and run mvn clean install as usual. This will run only the unit tests for the sub module you cd'ed into.
>
> Could you post the tests that are failing for MongoDB along with the exceptions for those tests? If you could provide the complete mvn output for your failed build that would help as well. Keeping the build healthy and stable is really important to us, so it would be a big help if you could give us these details.
>
> Thanks,
> Tim
>
>
>
>
>
> ________________________________
> From: Charles Givre <cg...@gmail.com>
> Sent: Tuesday, April 17, 2018 6:22:13 AM
> To: dev@drill.apache.org
> Subject: Drill Tests Fail on Local Machine
>
> All,
> I’m trying to test some things in Drill and the unit tests for the MongoDB are failing.   I would like to know:
> 1.  If it is possible to only run the unit tests for the components I am working on OR
> 2.  It it is possible to skip the MongoDB tests
> 3.  If it isn’t, does MongoDB ned to be installed on my local machine in order for this test to pass
>
> Any advice would be appreciated.
> Thanks,
> — C
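For anyone landing on this thread later, here is a minimal sketch of the per-module workflow Tim describes above. The module path is taken from the surefire report paths in the error output; the flags are standard Maven/Surefire options, not anything Drill-specific, so treat this as an illustration rather than a verified recipe for this checkout:

```shell
# Run only the Mongo storage plugin's unit tests by building just that
# submodule (path taken from the surefire report above):
cd contrib/storage-mongo
mvn clean install

# Or, from the project root, build everything but skip all unit tests:
mvn clean install -DskipTests

# Surefire can also restrict a run to a single test class:
mvn test -Dtest=MongoTestSuit
```

Running `mvn -e test` inside the failing submodule reproduces the error with a full stack trace, which is usually the fastest way to get the details Tim asked for.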