Posted to hdfs-user@hadoop.apache.org by Subash D'Souza <sd...@truecar.com> on 2012/10/30 02:17:45 UTC

Issue with running Impala

I'm hoping this is the right place to post questions about Impala. I'm playing around with Impala; I have configured it and got it running. I tried running a query, though, and it comes back with a very opaque error. Any help would be appreciated.

Thanks
Subash

Here are the error and the relevant log excerpts:
[hadoop4.rad.wc.truecarcorp.com:21000] > select * from clearbook2 limit 5;
ERROR: Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
Error(255): Unknown error 255
ERROR: Invalid query handle
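(One mundane thing worth ruling out before digging into Impala itself: a corrupt archive. Pull one of the files out of HDFS, e.g. with `hdfs dfs -get`, and run gzip's integrity test on it. The sketch below uses a fabricated stand-in file so the commands are self-contained; substitute the real file you fetched.)

```shell
# Build a stand-in .gz (replace this with the file fetched from HDFS).
printf 'line1\nline2\n' | gzip > /tmp/clearbook_sample.txt.gz

# gzip -t decompresses without writing output and fails on a corrupt stream.
gzip -t /tmp/clearbook_sample.txt.gz && echo OK
```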

My log files don't seem to give much information.

Impala State Server

 I1029 18:01:01.906649 23286 impala-server.cc:1524] TClientRequest.queryOptions: TQueryOptions {
  01: abort_on_error (bool) = false,
  02: max_errors (i32) = 0,
  03: disable_codegen (bool) = false,
  04: batch_size (i32) = 0,
  05: return_as_ascii (bool) = true,
  06: num_nodes (i32) = 0,
  07: max_scan_range_length (i64) = 0,
  08: num_scanner_threads (i32) = 0,
  09: max_io_buffers (i32) = 0,
  10: allow_unsupported_formats (bool) = false,
  11: partition_agg (bool) = false,
}
I1029 18:01:01.906776 23286 impala-server.cc:821] query(): query=select * from clearbook2 limit 5
I1029 18:01:02.755319 23286 coordinator.cc:209] Exec() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.756422 23286 simple-scheduler.cc:159] SimpleScheduler assignment (data->backend):  (10.5.22.22:50010 -> 10.5.22.22:22000), (10.5.22.24:50010 -> 10.5.22.24:22000), (10.5.22.23:50010 -> 10.5.22.23:22000)
I1029 18:01:02.756430 23286 simple-scheduler.cc:162] SimpleScheduler locality percentage 100% (3 out of 3)
I1029 18:01:02.759310 23286 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.763690 23286 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
tuples:
Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
 null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
 type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
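(An aside for anyone reading these descriptor dumps: the null=(offset=… mask=…) pairs look like a per-tuple null bitmap with one bit per slot, assigned in memory-layout order rather than column order — the FLOAT slot stored at data offset 8 gets bit 0, then the STRING slots at offsets 16, 32, … get bits 1–34, and the mask is printed in hex without a "0x" prefix. That reading, sketched below, is my interpretation of the log output, not taken from Impala source.)

```python
def null_indicator(bit_index):
    """Return (byte offset, bit mask) for a given null-bitmap bit:
    bit i lives in byte i // 8, at mask 1 << (i % 8)."""
    return bit_index // 8, 1 << (bit_index % 8)

# Slot id=6 (the FLOAT) is first in layout order, so it gets bit 0:
print(null_indicator(0))    # -> (0, 1), matching "null=(offset=0 mask=1)"

# Slot id=34 is the 34th slot after the FLOAT, so it gets bit 34:
print(null_indicator(34))   # -> (4, 4), matching "null=(offset=4 mask=4)"

# The tuple size also checks out: 5 null bytes padded out, a 4-byte FLOAT
# at offset 8, then 34 16-byte STRING slots ending at 544 + 16 = 560.
print(16 + 34 * 16)         # -> 560, matching "Tuple(id=0 size=560 ...)"
```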
I1029 18:01:02.809578 23286 coordinator.cc:298] starting 3 backends for query 5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.811485 23364 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 coord=10.5.22.24:22000 backend#=2
I1029 18:01:02.811578 23364 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.815759 23364 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
tuples:
Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
 null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
 type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
I1029 18:01:02.957340 23537 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958176 23373 coordinator.cc:734] Cancel() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.958195 23373 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.958205 23373 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.958215 23373 data-stream-mgr.cc:97] cancelled stream: fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2 node_id=1
I1029 18:01:02.958225 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a3 backend=10.5.22.22:22000
I1029 18:01:02.958411 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 backend=10.5.22.24:22000
I1029 18:01:02.958510 23364 impala-server.cc:1618] CancelPlanFragment(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958528 23364 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958539 23364 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958606 23373 coordinator.cc:366] Query id=5fa79b17f9a8474f:97b7e6fea1b688a1 failed because fragment id=5fa79b17f9a8474f:97b7e6fea1b688a4 failed.
I1029 18:01:02.959193 23541 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.959215 23541 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.959609 23286 impala-server.cc:1406] ImpalaServer::get_state invalid handle
I1029 18:01:02.960021 23286 impala-server.cc:1351] close(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.960031 23286 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.960042 23286 impala-server.cc:972] unknown query id: 5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.961830 23541 data-stream-mgr.cc:177] DeregisterRecvr(): fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2, node=1
I1029 18:01:02.962131 23269

 Impala DataNode

I1029 18:01:02.813395  9958 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4 coord=10.5.22.24:22000 backend#=1
I1029 18:01:02.813586  9958 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
I1029 18:01:02.818050  9958 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a4
tuples:
Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
 null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
 type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
I1029 18:01:02.954059  9988 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
I1029 18:01:02.958689  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (unknown)
    @           0x731bdb  (unknown)
    @           0x731e1a  (unknown)
    @     0x7f2cb64bbd97  (unknown)
    @     0x7f2cb4ab67f1  start_thread
    @     0x7f2cb405370d  clone
I1029 18:01:02.958782  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (unknown)
    @           0x731bdb  (unknown)
    @           0x731e1a  (unknown)
    @     0x7f2cb64bbd97  (unknown)
    @     0x7f2cb4ab67f1  start_thread
    @     0x7f2cb405370d  clone
I1029 18:01:02.962308  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (unknown)
    @           0x731bdb  (unknown)
    @           0x731e1a  (unknown)
    @     0x7f2cb64bbd97  (unknown)
    @     0x7f2cb4ab67f1  start_thread
    @     0x7f2cb405370d  clone
I1029 18:01:02.962537  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (un

And here is the configuration of my datanodes:

Hadoop Configuration

Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
Key     Value
mapreduce.job.end-notification.retry.attempts
dfs.datanode.data.dir   /home/data/1/dfs/dn,/home/data/2/dfs/dn,/home/data/3/dfs/dn
dfs.namenode.checkpoint.txns    40000
s3.replication  3
mapreduce.output.fileoutputformat.compress.type RECORD
mapreduce.jobtracker.jobhistory.lru.cache.size  5
dfs.datanode.failed.volumes.tolerated   0
hadoop.http.filter.initializers org.apache.hadoop.http.lib.StaticUserWebFilter
mapreduce.cluster.temp.dir      ${hadoop.tmp.dir}/mapred/temp
mapreduce.reduce.shuffle.memory.limit.percent   0.25
yarn.nodemanager.keytab /etc/krb5.keytab
mapreduce.reduce.skip.maxgroups 0
dfs.https.server.keystore.resource      ssl-server.xml
hadoop.http.authentication.kerberos.keytab      ${user.home}/hadoop.keytab
yarn.nodemanager.localizer.client.thread-count  5
mapreduce.framework.name        local
io.file.buffer.size     4096
mapreduce.task.tmp.dir  ./tmp
dfs.namenode.checkpoint.period  3600
ipc.client.kill.max     10
mapreduce.jobtracker.taskcache.levels   2
s3.stream-buffer-size   4096
dfs.namenode.secondary.http-address     0.0.0.0:50090
dfs.namenode.decommission.interval      30
dfs.namenode.http-address       0.0.0.0:50070
mapreduce.task.files.preserve.failedtasks       false
dfs.encrypt.data.transfer       false
dfs.datanode.address    0.0.0.0:50010
hadoop.http.authentication.token.validity       36000
hadoop.security.group.mapping.ldap.search.filter.group  (objectClass=group)
dfs.client.failover.max.attempts        15
kfs.client-write-packet-size    65536
yarn.admin.acl  *
yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs        86400
dfs.client.failover.connection.retries.on.timeouts      0
mapreduce.map.sort.spill.percent        0.80
file.stream-buffer-size 4096
dfs.webhdfs.enabled     true
ipc.client.connection.maxidletime       10000
mapreduce.jobtracker.persist.jobstatus.hours    1
dfs.datanode.ipc.address        0.0.0.0:50020
yarn.nodemanager.address        0.0.0.0:0
yarn.app.mapreduce.am.job.task.listener.thread-count    30
dfs.client.read.shortcircuit    true
dfs.namenode.safemode.extension 30000
ha.zookeeper.parent-znode       /hadoop-ha
yarn.nodemanager.container-executor.class       org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
io.skip.checksum.errors false
yarn.resourcemanager.scheduler.client.thread-count      50
hadoop.http.authentication.kerberos.principal   HTTP/_HOST@LOCALHOST
mapreduce.reduce.log.level      INFO
fs.s3.maxRetries        4
hadoop.kerberos.kinit.command   kinit
yarn.nodemanager.process-kill-wait.ms   2000
dfs.namenode.name.dir.restore   false
mapreduce.jobtracker.handler.count      10
yarn.app.mapreduce.client-am.ipc.max-retries    1
dfs.client.use.datanode.hostname        false
hadoop.util.hash.type   murmur
io.seqfile.lazydecompress       true
dfs.datanode.dns.interface      default
yarn.nodemanager.disk-health-checker.min-healthy-disks  0.25
mapreduce.job.maxtaskfailures.per.tracker       4
mapreduce.tasktracker.healthchecker.script.timeout      600000
hadoop.security.group.mapping.ldap.search.attr.group.name       cn
fs.df.interval  60000
dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
mapreduce.jobtracker.address    local
mapreduce.tasktracker.tasks.sleeptimebeforesigkill      5000
dfs.journalnode.rpc-address     0.0.0.0:8485
mapreduce.job.acl-view-job
dfs.client.block.write.replace-datanode-on-failure.policy       DEFAULT
dfs.namenode.replication.interval       3
dfs.namenode.num.checkpoints.retained   2
mapreduce.tasktracker.http.address      0.0.0.0:50060
yarn.resourcemanager.scheduler.address  0.0.0.0:8030
dfs.datanode.directoryscan.threads      1
hadoop.security.group.mapping.ldap.ssl  false
mapreduce.task.merge.progress.records   10000
dfs.heartbeat.interval  3
net.topology.script.number.args 100
mapreduce.local.clientfactory.class.name        org.apache.hadoop.mapred.LocalClientFactory
dfs.client-write-packet-size    65536
io.native.lib.available true
dfs.client.failover.connection.retries  0
yarn.nodemanager.disk-health-checker.interval-ms        120000
dfs.blocksize   67108864
mapreduce.jobhistory.webapp.address     0.0.0.0:19888
yarn.resourcemanager.resource-tracker.client.thread-count       50
dfs.blockreport.initialDelay    0
mapreduce.reduce.markreset.buffer.percent       0.0
dfs.ha.tail-edits.period        60
mapreduce.admin.user.env        LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native
yarn.nodemanager.health-checker.script.timeout-ms       1200000
yarn.resourcemanager.client.thread-count        50
file.bytes-per-checksum 512
dfs.replication.max     512
io.map.index.skip       0
mapreduce.task.timeout  600000
dfs.datanode.du.reserved        0
dfs.support.append      true
ftp.blocksize   67108864
dfs.client.file-block-storage-locations.num-threads     10
yarn.nodemanager.container-manager.thread-count 20
ipc.server.listen.queue.size    128
yarn.resourcemanager.amliveliness-monitor.interval-ms   1000
hadoop.ssl.hostname.verifier    DEFAULT
mapreduce.tasktracker.dns.interface     default
hadoop.security.group.mapping.ldap.search.attr.member   member
mapreduce.tasktracker.outofband.heartbeat       false
mapreduce.job.userlog.retain.hours      24
yarn.nodemanager.resource.memory-mb     8192
dfs.namenode.delegation.token.renew-interval    86400000
hadoop.ssl.keystores.factory.class      org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
dfs.datanode.sync.behind.writes false
mapreduce.map.maxattempts       4
dfs.client.read.shortcircuit.skip.checksum      false
dfs.datanode.handler.count      10
hadoop.ssl.require.client.cert  false
ftp.client-write-packet-size    65536
ipc.server.tcpnodelay   false
mapreduce.task.profile.reduces  0-2
hadoop.fuse.connection.timeout  300
dfs.permissions.superusergroup  hadoop
mapreduce.jobtracker.jobhistory.task.numberprogresssplits       12
mapreduce.map.speculative       true
fs.ftp.host.port        21
dfs.datanode.data.dir.perm      700
mapreduce.client.submit.file.replication        10
s3native.blocksize      67108864
mapreduce.job.ubertask.maxmaps  9
dfs.namenode.replication.min    1
mapreduce.cluster.acls.enabled  false
yarn.nodemanager.localizer.fetch.thread-count   4
map.sort.class  org.apache.hadoop.util.QuickSort
fs.trash.checkpoint.interval    0
dfs.namenode.name.dir   /home/data/1/dfs/nn
yarn.app.mapreduce.am.staging-dir       /tmp/hadoop-yarn/staging
fs.AbstractFileSystem.file.impl org.apache.hadoop.fs.local.LocalFs
yarn.nodemanager.env-whitelist  JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME
dfs.image.compression.codec     org.apache.hadoop.io.compress.DefaultCodec
mapreduce.job.reduces   1

mapreduce.job.complete.cancel.delegation.tokens true
hadoop.security.group.mapping.ldap.search.filter.user   (&(objectClass=user)(sAMAccountName={0}))
yarn.nodemanager.sleep-delay-before-sigkill.ms  250
mapreduce.tasktracker.healthchecker.interval    60000
mapreduce.jobtracker.heartbeats.in.second       100
kfs.bytes-per-checksum  512
mapreduce.jobtracker.persist.jobstatus.dir      /jobtracker/jobsInfo
dfs.namenode.backup.http-address        0.0.0.0:50105
hadoop.rpc.protection   authentication
dfs.namenode.https-address      0.0.0.0:50470
ftp.stream-buffer-size  4096
dfs.ha.log-roll.period  120
yarn.resourcemanager.admin.client.thread-count  1
yarn.resourcemanager.zookeeper-store.session.timeout-ms 60000
file.client-write-packet-size   65536
hadoop.http.authentication.simple.anonymous.allowed     true
yarn.nodemanager.log.retain-seconds     10800
dfs.datanode.drop.cache.behind.reads    false
dfs.image.transfer.bandwidthPerSec      0
mapreduce.tasktracker.instrumentation   org.apache.hadoop.mapred.TaskTrackerMetricsInst
io.mapfile.bloom.size   1048576
dfs.ha.fencing.ssh.connect-timeout      30000
s3.bytes-per-checksum   512
fs.automatic.close      true
fs.trash.interval       0
hadoop.security.authentication  simple
fs.defaultFS    hdfs://hadoop1.rad.wc.truecarcorp.com:8020
hadoop.ssl.server.conf  ssl-server.xml
ipc.client.connect.max.retries  10
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms       30000
dfs.journalnode.http-address    0.0.0.0:8480
mapreduce.jobtracker.taskscheduler      org.apache.hadoop.mapred.JobQueueTaskScheduler
mapreduce.job.speculative.speculativecap        0.1
yarn.am.liveness-monitor.expiry-interval-ms     600000
mapreduce.output.fileoutputformat.compress      false
net.topology.node.switch.mapping.impl   org.apache.hadoop.net.ScriptBasedMapping
dfs.namenode.replication.considerLoad   true
mapreduce.job.counters.max      120
yarn.resourcemanager.address    0.0.0.0:8032
dfs.client.block.write.retries  3
yarn.resourcemanager.nm.liveness-monitor.interval-ms    1000
io.map.index.interval   128
mapred.child.java.opts  -Xmx200m
mapreduce.tasktracker.local.dir.minspacestart   0
dfs.client.https.keystore.resource      ssl-client.xml
mapreduce.client.progressmonitor.pollinterval   1000
mapreduce.jobtracker.tasktracker.maxblacklists  4
mapreduce.job.queuename default
yarn.nodemanager.localizer.address      0.0.0.0:8040
io.mapfile.bloom.error.rate     0.005
mapreduce.job.split.metainfo.maxsize    10000000
yarn.nodemanager.delete.thread-count    4
ipc.client.tcpnodelay   false
yarn.app.mapreduce.am.resource.mb       1536
dfs.datanode.dns.nameserver     default
mapreduce.map.output.compress.codec     org.apache.hadoop.io.compress.DefaultCodec
dfs.namenode.accesstime.precision       3600000
mapreduce.map.log.level INFO
io.seqfile.compress.blocksize   1000000
mapreduce.tasktracker.taskcontroller    org.apache.hadoop.mapred.DefaultTaskController
hadoop.security.groups.cache.secs       300
mapreduce.job.end-notification.max.attempts     5
yarn.nodemanager.webapp.address 0.0.0.0:8042
mapreduce.jobtracker.expire.trackers.interval   600000
yarn.resourcemanager.webapp.address     0.0.0.0:8088
yarn.nodemanager.health-checker.interval-ms     600000
hadoop.security.authorization   false
fs.ftp.host     0.0.0.0
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms   1000
mapreduce.ifile.readahead       true
ha.zookeeper.session-timeout.ms 5000
mapreduce.tasktracker.taskmemorymanager.monitoringinterval      5000
mapreduce.reduce.shuffle.parallelcopies 5
mapreduce.map.skip.maxrecords   0
dfs.https.enable        false
mapreduce.reduce.shuffle.read.timeout   180000
mapreduce.output.fileoutputformat.compress.codec        org.apache.hadoop.io.compress.DefaultCodec
mapreduce.jobtracker.instrumentation    org.apache.hadoop.mapred.JobTrackerMetricsInst
yarn.nodemanager.remote-app-log-dir-suffix      logs
dfs.blockreport.intervalMsec    21600000
mapreduce.reduce.speculative    true
mapreduce.jobhistory.keytab     /etc/security/keytab/jhs.service.keytab
dfs.datanode.balance.bandwidthPerSec    1048576
file.blocksize  67108864
yarn.resourcemanager.admin.address      0.0.0.0:8033
yarn.resourcemanager.resource-tracker.address   0.0.0.0:8031
mapreduce.tasktracker.local.dir.minspacekill    0
mapreduce.jobtracker.staging.root.dir   ${hadoop.tmp.dir}/mapred/staging
mapreduce.jobtracker.retiredjobs.cache.size     1000
ipc.client.connect.max.retries.on.timeouts      45
ha.zookeeper.acl        world:anyone:rwcda
yarn.nodemanager.local-dirs     /tmp/nm-local-dir
mapreduce.reduce.shuffle.connect.timeout        180000
dfs.block.access.key.update.interval    600
dfs.block.access.token.lifetime 600
mapreduce.jobtracker.system.dir ${hadoop.tmp.dir}/mapred/system
yarn.nodemanager.admin-env      MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
mapreduce.jobtracker.jobhistory.block.size      3145728
mapreduce.tasktracker.indexcache.mb     10
dfs.namenode.checkpoint.check.period    60
dfs.client.block.write.replace-datanode-on-failure.enable       true
dfs.datanode.directoryscan.interval     21600
yarn.nodemanager.container-monitor.interval-ms  3000
dfs.default.chunk.view.size     32768
mapreduce.job.speculative.slownodethreshold     1.0
mapreduce.job.reduce.slowstart.completedmaps    0.05
hadoop.security.instrumentation.requires.admin  false
dfs.namenode.safemode.min.datanodes     0
hadoop.http.authentication.signature.secret.file        ${user.home}/hadoop-http-auth-signature-secret
mapreduce.reduce.maxattempts    4
yarn.nodemanager.localizer.cache.target-size-mb 10240
s3native.replication    3
dfs.datanode.https.address      0.0.0.0:50475
mapreduce.reduce.skip.proc.count.autoincr       true
file.replication        1
hadoop.hdfs.configuration.version       1
ipc.client.idlethreshold        4000
hadoop.tmp.dir  /tmp/hadoop-${user.name}
mapreduce.jobhistory.address    0.0.0.0:10020
mapreduce.jobtracker.restart.recover    false
mapreduce.cluster.local.dir     ${hadoop.tmp.dir}/mapred/local
yarn.ipc.serializer.type        protocolbuffers
dfs.namenode.decommission.nodes.per.interval    5
dfs.namenode.delegation.key.update-interval     86400000
fs.s3.buffer.dir        ${hadoop.tmp.dir}/s3
dfs.namenode.support.allow.format       true
yarn.nodemanager.remote-app-log-dir     /tmp/logs
hadoop.work.around.non.threadsafe.getpwuid      false
dfs.ha.automatic-failover.enabled       false
mapreduce.jobtracker.persist.jobstatus.active   true
dfs.namenode.logging.level      info
yarn.nodemanager.log-dirs       /tmp/logs
dfs.namenode.checkpoint.edits.dir       ${dfs.namenode.checkpoint.dir}
hadoop.rpc.socket.factory.class.default org.apache.hadoop.net.StandardSocketFactory
yarn.resourcemanager.keytab     /etc/krb5.keytab
dfs.datanode.http.address       0.0.0.0:50075
mapreduce.task.profile  false
dfs.namenode.edits.dir  ${dfs.namenode.name.dir}
hadoop.fuse.timer.period        5
mapreduce.map.skip.proc.count.autoincr  true
fs.AbstractFileSystem.viewfs.impl       org.apache.hadoop.fs.viewfs.ViewFs
mapreduce.job.speculative.slowtaskthreshold     1.0
s3native.stream-buffer-size     4096
yarn.nodemanager.delete.debug-delay-sec 0
dfs.secondary.namenode.kerberos.internal.spnego.principal       ${dfs.web.authentication.kerberos.principal}
dfs.namenode.safemode.threshold-pct     0.999f
mapreduce.ifile.readahead.bytes 4194304
yarn.scheduler.maximum-allocation-mb    10240
s3native.bytes-per-checksum     512
mapreduce.job.committer.setup.cleanup.needed    true
kfs.replication 3
yarn.nodemanager.log-aggregation.compression-type       none
hadoop.http.authentication.type simple
dfs.client.failover.sleep.base.millis   500
yarn.nodemanager.heartbeat.interval-ms  1000
hadoop.jetty.logs.serve.aliases true
mapreduce.reduce.shuffle.input.buffer.percent   0.70
dfs.datanode.max.transfer.threads       4096
mapreduce.task.io.sort.mb       100
mapreduce.reduce.merge.inmem.threshold  1000
dfs.namenode.handler.count      10
hadoop.ssl.client.conf  ssl-client.xml
yarn.resourcemanager.container.liveness-monitor.interval-ms     600000
mapreduce.client.completion.pollinterval        5000
yarn.nodemanager.vmem-pmem-ratio        2.1
yarn.app.mapreduce.client.max-retries   3
hadoop.ssl.enabled      false
fs.AbstractFileSystem.hdfs.impl org.apache.hadoop.fs.Hdfs
mapreduce.tasktracker.reduce.tasks.maximum      2
mapreduce.reduce.input.buffer.percent   0.0
kfs.stream-buffer-size  4096
dfs.namenode.invalidate.work.pct.per.iteration  0.32f
dfs.bytes-per-checksum  512
dfs.replication 3
mapreduce.shuffle.ssl.file.buffer.size  65536
dfs.permissions.enabled true
mapreduce.jobtracker.maxtasks.perjob    -1
dfs.datanode.use.datanode.hostname      false
mapreduce.task.userlog.limit.kb 0
dfs.namenode.fs-limits.max-directory-items      0
s3.client-write-packet-size     65536
dfs.client.failover.sleep.max.millis    15000
mapreduce.job.maps      2
dfs.namenode.fs-limits.max-component-length     0
mapreduce.map.output.compress   false
s3.blocksize    67108864
kfs.blocksize   67108864
dfs.namenode.edits.journal-plugin.qjournal      org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager
dfs.client.https.need-auth      false
yarn.scheduler.minimum-allocation-mb    128
ftp.replication 3
mapreduce.input.fileinputformat.split.minsize   0
fs.s3n.block.size       67108864
yarn.ipc.rpc.class      org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
dfs.namenode.num.extra.edits.retained   1000000
hadoop.http.staticuser.user     dr.who
yarn.nodemanager.localizer.cache.cleanup.interval-ms    600000
mapreduce.job.jvm.numtasks      1
mapreduce.task.profile.maps     0-2
mapreduce.shuffle.port  8080
mapreduce.jobtracker.http.address       0.0.0.0:50030
mapreduce.reduce.shuffle.merge.percent  0.66
mapreduce.task.skip.start.attempts      2
mapreduce.task.io.sort.factor   10
dfs.namenode.checkpoint.dir     file://${hadoop.tmp.dir}/dfs/namesecondary
tfile.fs.input.buffer.size      262144
fs.s3.block.size        67108864
tfile.io.chunk.size     1048576

io.serializations       org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
yarn.resourcemanager.max-completed-applications 10000
mapreduce.jobhistory.principal  jhs/_HOST@REALM.TLD
mapreduce.job.end-notification.retry.interval   1
dfs.namenode.backup.address     0.0.0.0:50100
dfs.block.access.token.enable   false
io.seqfile.sorter.recordlimit   1000000
s3native.client-write-packet-size       65536
ftp.bytes-per-checksum  512
hadoop.security.group.mapping   org.apache.hadoop.security.ShellBasedUnixGroupsMapping
dfs.client.file-block-storage-locations.timeout 60
mapreduce.job.end-notification.max.retry.interval       5
yarn.acl.enable true
yarn.nm.liveness-monitor.expiry-interval-ms     600000
mapreduce.tasktracker.map.tasks.maximum 2
dfs.namenode.max.objects        0
dfs.namenode.delegation.token.max-lifetime      604800000
mapreduce.job.hdfs-servers      ${fs.defaultFS}
yarn.application.classpath      $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*
dfs.datanode.hdfs-blocks-metadata.enabled       true
yarn.nodemanager.aux-services.mapreduce.shuffle.class   org.apache.hadoop.mapred.ShuffleHandler
mapreduce.tasktracker.dns.nameserver    default
dfs.datanode.readahead.bytes    4193404
mapreduce.job.ubertask.maxreduces       1
dfs.image.compress      false
mapreduce.shuffle.ssl.enabled   false
yarn.log-aggregation-enable     false
mapreduce.tasktracker.report.address    127.0.0.1:0
mapreduce.tasktracker.http.threads      40
dfs.stream-buffer-size  4096
tfile.fs.output.buffer.size     262144
yarn.resourcemanager.am.max-retries     1
dfs.datanode.drop.cache.behind.writes   false
mapreduce.job.ubertask.enable   false
hadoop.common.configuration.version     0.23.0
dfs.namenode.replication.work.multiplier.per.iteration  2
mapreduce.job.acl-modify-job
io.seqfile.local.dir    ${hadoop.tmp.dir}/io/local
fs.s3.sleepTimeSeconds  10
mapreduce.client.output.filter  FAILED




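When an impalad log runs to hundreds of lines like the ones in this thread, the open failures are easier to collect than to spot by scrolling. A small helper along these lines (a sketch only; the glog output file name and path are assumptions, adjust for your installation) lists the distinct HDFS files an impalad log failed to open:

```shell
# Sketch: extract the distinct HDFS paths that an impalad log reports
# as "Failed to open HDFS file ...". The log path below is an assumed
# example location, not a documented default.
extract_failed_files() {
  grep -o 'Failed to open HDFS file [^[:space:]]*' "$1" \
    | sed 's/.*file //' \
    | sort -u
}

# Example usage (hypothetical log path):
#   extract_failed_files /var/log/impala/impalad.INFO
```

Comparing that list against `hadoop fs -ls` output for the same paths is a quick way to separate missing-file problems from permission problems.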

Re: Issue with running Impala

Posted by Brock Noland <br...@cloudera.com>.
Hi,

This question should go to the impala-user group which you can subscribe to
here:

https://groups.google.com/a/cloudera.org/forum/?fromgroups#!forum/impala-user

Sorry for the confusion.

Brock

On Mon, Oct 29, 2012 at 8:17 PM, Subash D'Souza <sd...@truecar.com> wrote:

> I'm hoping this is the right place to post questions about Impala. I'm
> playing around with Impala and have it configured and running. I tried
> running a query, though, and it came back with a very cryptic error. Any
> help would be appreciated.
>
> Thanks
> Subash
>
> Here are the error and log files for the same
> [hadoop4.rad.wc.truecarcorp.com:21000] > select * from clearbook2 limit 5;
> ERROR: Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
> ERROR: Invalid query handle
>
> My log files don't seem to give much information
>
> Impala State Server
>
>
>  I1029 18:01:01.906649 23286 impala-server.cc:1524] TClientRequest.queryOptions: TQueryOptions {
>   01: abort_on_error (bool) = false,
>   02: max_errors (i32) = 0,
>   03: disable_codegen (bool) = false,
>   04: batch_size (i32) = 0,
>   05: return_as_ascii (bool) = true,
>   06: num_nodes (i32) = 0,
>   07: max_scan_range_length (i64) = 0,
>   08: num_scanner_threads (i32) = 0,
>   09: max_io_buffers (i32) = 0,
>   10: allow_unsupported_formats (bool) = false,
>   11: partition_agg (bool) = false,
> }
> I1029 18:01:01.906776 23286 impala-server.cc:821] query(): query=select * from clearbook2 limit 5
> I1029 18:01:02.755319 23286 coordinator.cc:209] Exec() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.756422 23286 simple-scheduler.cc:159] SimpleScheduler assignment (data->backend):  (10.5.22.22:50010 -> 10.5.22.22:22000), (10.5.22.24:50010 -> 10.5.22.24:22000), (10.5.22.23:50010 -> 10.5.22.23:22000)
> I1029 18:01:02.756430 23286 simple-scheduler.cc:162] SimpleScheduler locality percentage 100% (3 out of 3)
> I1029 18:01:02.759310 23286 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.763690 23286 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.809578 23286 coordinator.cc:298] starting 3 backends for query 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.811485 23364 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 coord=10.5.22.24:22000 backend#=2
> I1029 18:01:02.811578 23364 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.815759 23364 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.957340 23537 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958176 23373 coordinator.cc:734] Cancel() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.958195 23373 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958205 23373 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958215 23373 data-stream-mgr.cc:97] cancelled stream: fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2 node_id=1
> I1029 18:01:02.958225 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a3 backend=10.5.22.22:22000
> I1029 18:01:02.958411 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 backend=10.5.22.24:22000
> I1029 18:01:02.958510 23364 impala-server.cc:1618] CancelPlanFragment(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958528 23364 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958539 23364 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958606 23373 coordinator.cc:366] Query id=5fa79b17f9a8474f:97b7e6fea1b688a1 failed because fragment id=5fa79b17f9a8474f:97b7e6fea1b688a4 failed.
> I1029 18:01:02.959193 23541 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.959215 23541 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.959609 23286 impala-server.cc:1406] ImpalaServer::get_state invalid handle
> I1029 18:01:02.960021 23286 impala-server.cc:1351] close(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960031 23286 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960042 23286 impala-server.cc:972] unknown query id: 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.961830 23541 data-stream-mgr.cc:177] DeregisterRecvr(): fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2, node=1
> I1029 18:01:02.962131 23269
>
>  Impala DataNode
>
> I1029 18:01:02.813395  9958 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4 coord=10.5.22.24:22000 backend#=1
> I1029 18:01:02.813586  9958 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.818050  9958 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a4
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.954059  9988 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.958689  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.958782  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962308  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962537  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (un
>
>  and here is the configuration of my datanodes
>
>
> Hadoop Configuration
>
> Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
> <
> td>mapreduce.job.end-notification.retry.attempts<
> tr><
> /tr>KeyValuedfs.datanode.data.dir
> /home/data/1/dfs/dn,/home/data/2/dfs/dn,/home/data/3/dfs/dndfs.namenode.checkpoint.txns40000s3.replication3mapreduce.output.fileoutputformat.compress.typeRECORDmapreduce.jobtracker.jobhistory.lru.cache.size5dfs.datanode.failed.volumes.tolerated0hadoop.http.filter.initializersorg.apache.hadoop.http.lib.StaticUserWebFiltermapreduce.cluster.temp.dir${hadoop.tmp.dir}/mapred/tempmapreduce.reduce.shuffle.memory.limit.percent0.25yarn.nodemanager.keytab
> /etc/krb5.keytabmapreduce.reduce.skip.maxgroups0dfs.https.server.keystore.resource
> ssl-server.xmlhadoop.http.authenti
> cation.kerberos.keytab${user.home}/hadoop.keytabyarn.nodemanager.localizer.client.thread-count5mapreduce.framework.namelocalio.file.buffer.size4096mapreduce.task.tmp.dir./tmpdfs.namenode.checkpoint.period3600ipc.client.kill.max10mapreduce.jobtracker.taskcache.levels2s3.stream-buffer-size4096dfs.namenode.secondary.http-address0.0.0.0:50090dfs.namenode.decommission.interval30dfs.namenode.http-address0.0.0.0:50070mapreduce.task.files.preserve.failedtasksfalsedfs.encrypt.data.transferfalsedfs.datanode.address0.0.0.0:50010hadoop.http.authentication.token.validi
> ty36000hadoop.security.group.mapping.ldap.search.filter.group(objectClass=group)
> dfs.client.failover.max.attempts15kfs.client-write-packet-size65536
> yarn.admin.acl*yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs86400dfs.client.failover.connection.retries.on.timeouts0mapreduce.map.sort.spill.percent
> 0.80file.stream-buffer-size4096dfs.webhdfs.enabledtrueipc.client.connection.maxidletime10000mapreduce.jobtracker.persist.jobstatus.hours
> 1dfs.datanode.ipc.address0.0.0.0:50020yarn.nodemanager.address0.0.0.0:0yarn.app.mapreduce.am.job.task.listener.thread-count30dfs.client.read.shortcircuittruedfs.namenode.safemode.extension30000ha.zookeeper.parent-znode/hadoop-hayarn.nodemanager.container-executor.class
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutorio.skip.checksum.errorsfalseyarn.resourcemanager.scheduler.client.thread-count50hadoop.http.authentication.kerberos.principalHTTP/_HOST@LOCALHOST
> mapreduce.reduce.log.levelINFOfs.s3.maxRetries4hadoop.kerberos.kinit.commandkinityarn.nodemanager.process-kill-wait.ms
> 2000dfs.namenode.name.dir.restorefalsemapreduce.jobtracker.handler.count
> 10yarn.app.mapreduce.client-am.ipc.max-retries1dfs.client.use.datanode.hostnamefalsehadoop.util.hash.typemurmurio.seqfile.lazydecompresstruedfs.datanode.dns.interfacedefaultyarn.nodemanager.disk-health-checker.min-healthy-disks0.25
> mapreduce.job.maxtaskfailures.per.tracker4mapreduce.tasktracker.healthchecker.script.timeout600000hadoop.security.group.mapping.ldap.search.attr.group.name
> cnfs.df.interval60000dfs.namenode.kerberos.internal.spnego.principal
> ${dfs.web.authentication.kerberos.principal}mapreduce.jobtracker.addresslocalmapreduce.tasktracker.tasks.sleeptimebeforesigkill5000dfs.journalnode.rpc-address0.0.0.0:8485
> mapreduce.job.a
> cl-view-jobdfs.client.block.write.replace-datanode-on-failure.policyDEFAULT
> dfs.namenode.replication.interval3dfs.namenode.num.checkpoints.retained2
> mapreduce.tasktracker.http.address0.0.0.0:50060yarn.resourcemanager.scheduler.address0.0.0.0:8030dfs.datanode.directoryscan.threads1hadoop.security.group.mapping.ldap.sslfalsemapreduce.task.merge.progress.records
> 10000dfs.heartbeat.interval3net.topology.script.number.args
> 100mapreduce.local.clientfactory.class.nameorg.apache.hadoop.mapred.LocalClientFactorydfs.client-write-packet-size65536io.native.lib.availabletruedfs.client.failover.conne
> ction.retries0yarn.nodemanager.disk-health-checker.interval-ms120000
> dfs.blocksize67108864mapreduce.jobhistory.webapp.address0.0.0.0:19888yarn.resourcemanager.resource-tracker.client.thread-count50dfs.blockreport.initialDelay
> 0mapreduce.reduce.markreset.buffer.percent0.0dfs.ha.tail-edits.period
> 60mapreduce.admin.user.envLD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/nativeyarn.nodemanager.health-checker.script.timeout-ms1200000yarn.resourcemanager.client.thread-count50file.bytes-per-checksum512dfs.replication.max512io.map.index.skip
> 0mapreduce.task.timeout600000dfs.d
> atanode.du.reserved0dfs.support.appendtrueftp.blocksize67108864dfs.client.file-block-storage-locations.num-threads10yarn.nodemanager.container-manager.thread-count20ipc.server.listen.queue.size128yarn.resourcemanager.amliveliness-monitor.interval-ms1000hadoop.ssl.hostname.verifierDEFAULTmapreduce.tasktracker.dns.interfacedefaulthadoop.security.group.mapping.ldap.search.attr.membermember
> mapreduce.tasktracker.outofband.heartbeatfalsemapreduce.job.userlog.retain.hours24
> yarn.nodemanager.resource.memory-mb8192dfs.namenode.delegation.token.renew-interval86400000hadoop.ssl.keystores.factor
> y.classorg.apache.hadoop.security.ssl.FileBasedKeyStoresFactorydfs.datanode.sync.behind.writesfalsemapreduce.map.maxattempts4dfs.client.read.shortcircuit.skip.checksum
> falsedfs.datanode.handler.count10hadoop.ssl.require.client.cert
> falseftp.client-write-packet-size65536ipc.server.tcpnodelay
> falsemapreduce.task.profile.reduces0-2hadoop.fuse.connection.timeout
> 300dfs.permissions.superusergrouphadoopmapreduce.jobtracker.jobhistory.task.numberprogresssplits12mapreduce.map.speculativetruefs.ftp.host.port
> 21dfs.datanode.data.dir.perm700mapreduce.client.submit.file.re
> plication10s3native.blocksize67108864mapreduce.job.ubertask.maxmaps9dfs.namenode.replication.min1mapreduce.cluster.acls.enabledfalseyarn.nodemanager.localizer.fetch.thread-count4map.sort.classorg.apache.hadoop.util.QuickSortfs.trash.checkpoint.interval0dfs.namenode.name.dir/home/data/1/dfs/nnyarn.app.mapreduce.am.staging-dir/tmp/hadoop-yarn/staging
> fs.AbstractFileSystem.file.implorg.apache.hadoop.fs.local.LocalFsyarn.nodemanager.env-whitelistJAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOMEdfs.image.compression.codecorg.apache.hadoop.io.compress.DefaultCodecmapreduce.job.reduces
> 1mapreduce.job.complete.cancel.delegation.tokenstruehadoop.security.group.mapping.ldap.search.filter.user(&(objectClass=user)(sAMAccountName={0}))yarn.nodemanager.sleep-delay-before-sigkill.ms250mapreduce.tasktracker.healthchecker.interval60000mapreduce.jobtracker.heartbeats.in.second100kfs.bytes-per-checksum512mapreduce.jobtracker.persist.jobstatus.dir/jobtracker/jobsInfodfs.namenode.backup.http-address0.0.0.0:50105hadoop.rpc.protectionauthenticationdfs.namenode.https-address0.0.0.0:50470ftp.stream-buffer-size4096dfs.ha.log-roll.period120yarn.resourcemanager.admin.client.thread-count1yar
> n.resourcemanager.zookeeper-store.session.timeout-ms60000file.client-write-packet-size65536hadoop.http.authentication.simple.anonymous.allowedtrueyarn.nodemanager.log.retain-seconds
> 10800dfs.datanode.drop.cache.behind.readsfalsedfs.image.transfer.bandwidthPerSec
> 0mapreduce.tasktracker.instrumentationorg.apache.hadoop.mapred.TaskTrackerMetricsInstio.mapfile.bloom.size1048576dfs.ha.fencing.ssh.connect-timeout30000s3.bytes-per-checksum512fs.automatic.closetruefs.trash.interval
> 0hadoop.security.authenticationsimplefs.defaultFShdfs://hadoop1.rad.wc.truecarcorp.com:8020hadoop.ssl.server.confssl-server.xmlipc.client.connect.max.retries10yarn.resourcemanager.delayed.delegation-token.removal-interval-ms30000dfs.journalnode.http-address0.0.0.0:8480mapreduce.jobtracker.taskschedulerorg.apache.hadoop.mapred.JobQueueTaskSchedulermapreduce.job.speculative.speculativecap0.1yarn.am.liveness-monitor.expiry-interval-ms600000mapreduce.output.fileoutputformat.compressfalsenet.topology.node.switch.mapping.implorg.apache.hadoop.net.ScriptBasedMapping
> dfs.namenode.replication.considerLoadtruemapreduce.job.counters.max120
> yarn.resourcemanager.address0.0.0.0:8032dfs.client.block.write.retries
> 3yarn.resourcemanager.nm.liveness-monitor.interval-ms1000io.map.index.interval
> 128mapred.child.java.opts-Xmx200mmapreduce.tasktracker.local.dir.minspacestart
> 0dfs.client.https.keystore.resourcessl-client.xmlmapreduce.client.progressmonitor.pollinterval1000mapreduce.jobtracker.tasktracker.maxblacklists4mapreduce.job.queuenamedefaultyarn.nodemanager.localizer.address0.0.0.0:8040io.mapfile.bloom.error.rate0.005mapreduce.job.split.metainfo.maxsize10000000yarn.nodemanager.delete.thread-count4ipc.client.tcpnodelayfalseyarn.app.mapreduce.am.resource.mb1536dfs.datanode.dns.nameserver
> defaultmapreduce.map.output.compress.codecorg.apache.hadoop.io.compress.DefaultCodecdfs.namenode.accesstime.precision3600000mapreduce.map.log.levelINFOio.seqfile.compress.blocksize1000000mapreduce.tasktracker.taskcontrollerorg.apache.hadoop.mapred.DefaultTaskController
> hadoop.security.groups.cache.secs300mapreduce.job.end-notification.max.attempts5
> yarn.nodemanager.webapp.address0.0.0.0:8042mapreduce.jobtracker.expire.trackers.interval600000yarn.resourcemanager.webapp.address0.0.0.0:8088yarn.nodemanager.health-checker.interval-ms600000hadoop.security.authorization
> falsefs.ftp.host0.0.0.0yarn.app.mapreduce.am.scheduler
> .heartbeat.interval-ms1000mapreduce.ifile.readaheadtrueha.zookeeper.session-timeout.ms5000mapreduce.tasktracker.taskmemorymanager.monitoringinterval5000
> mapreduce.reduce.shuffle.parallelcopies5mapreduce.map.skip.maxrecords0
> dfs.https.enablefalsemapreduce.reduce.shuffle.read.timeout180000
> mapreduce.output.fileoutputformat.compress.codecorg.apache.hadoop.io.compress.DefaultCodecmapreduce.jobtracker.instrumentation
> org.apache.hadoop.mapred.JobTrackerMetricsInstyarn.nodemanager.remote-app-log-dir-suffixlogsdfs.blockreport.intervalMsec21600000mapreduce.reduce.speculativetruemapreduce.jobhistory.keytab/etc/sec
> urity/keytab/jhs.service.keytabdfs.datanode.balance.bandwidthPerSec1048576file.blocksize
> 67108864yarn.resourcemanager.admin.address0.0.0.0:8033
> yarn.resourcemanager.resource-tracker.address0.0.0.0:8031mapreduce.tasktracker.local.dir.minspacekill0mapreduce.jobtracker.staging.root.dir${hadoop.tmp.dir}/mapred/staging
> mapreduce.jobtracker.retiredjobs.cache.size1000ipc.client.connect.max.retries.on.timeouts45ha.zookeeper.aclworld:anyone:rwcdayarn.nodemanager.local-dirs/tmp/nm-local-dirmapreduce.reduce.shuffle.connect.timeout180000dfs.block.access.key.update.interval
> 600dfs.block.access.token.lifetime6005mapreduce.jobtracker.system.dir${hadoop.tmp.dir}/mapred/systemyarn.nodemanager.admin-envMALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
> mapreduce.jobtracker.jobhistory.block.size3145728mapreduce.tasktracker.indexcache.mb10
> dfs.namenode.checkpoint.check.period60dfs.client.block.write.replace-datanode-on-failure.enabletruedfs.datanode.directoryscan.interval21600yarn.nodemanager.container-monitor.interval-ms
> 3000dfs.default.chunk.view.size32768mapreduce.job.speculative.slownodethreshold
> 1.0mapreduce.job.reduce.slowstart.completedmaps0.05hadoop.security.instrumentation.requires.adminfalsedfs.namenode.safemode.min.datanodes0hadoop.http.authentication.signature.secret.file${user.home}/hadoop-http-auth-signature-secretmapreduce.reduce.maxattempts4
> yarn.nodemanager.localizer.cache.target-size-mb10240s3native.replication3
> dfs.datanode.https.address0.0.0.0:50475mapreduce.reduce.skip.proc.count.autoincr
> truefile.replication1hadoop.hdfs.configuration.version1ipc.client.idlethreshold4000hadoop.tmp.dir/tmp/hadoop-${user.name}mapreduce.jobhistory.address0.0.0.0:10020mapreduce.jobtracker.restart.recoverfalsemapreduce.cluster.local.dir${hadoop.tmp.dir}/mapred/localyarn.ipc.s
> erializer.typeprotocolbuffersdfs.namenode.decommission.nodes.per.interval5
> dfs.namenode.delegation.key.update-interval86400000fs.s3.buffer.dir${hadoop.tmp.dir}/s3
> dfs.namenode.support.allow.formattrueyarn.nodemanager.remote-app-log-dir/tmp/logs
> hadoop.work.around.non.threadsafe.getpwuidfalsedfs.ha.automatic-failover.enabledfalse
> mapreduce.jobtracker.persist.jobstatus.activetruedfs.namenode.logging.levelinfo
> yarn.nodemanager.log-dirs/tmp/logsdfs.namenode.checkpoint.edits.dir${dfs.namenode.checkpoint.dir}hadoop.rpc.socket.factory.class.defaultorg.apache.hadoop.net.StandardSocketFactoryyarn.resourcemanager.keytab/etc/krb5.keytabdfs.datanode.http.address0.0.0.0:50075mapreduce.task.profilefalsedfs.namenode.edits.dir${dfs.namenode.name.dir}hadoop.fuse.timer.period5mapreduce.map.skip.proc.count.autoincrtruefs.AbstractFileSystem.viewfs.implorg.apache.hadoop.fs.viewfs.ViewFsmapreduce.job.speculative.slowtaskthreshold1.0s3native.stream-buffer-size4096yarn.nodemanager.delete.debug-delay-sec0dfs.secondary.namenode.kerberos.internal.spnego.principal${dfs.web.authentication.kerberos.principal}dfs.namenode.safemode.threshold-pct0.999fmapreduce.ifile.readahead.bytes
> 4194304yarn.scheduler.maximum-allocation-mb10240s3native.bytes-per-checksum
> 512mapreduce.job.committer.setup.cleanup.neededtruekfs.replication
> 3yarn.nodemanager.log-aggregation.compression-typenonehadoop.http.authentication.type
> simpledfs.client.failover.sleep.base.millis500yarn.nodemanager.heartbeat.interval-ms
> 1000hadoop.jetty.logs.serve.aliasestruemapreduce.reduce.shuffle.input.buffer.percent
> 0.70dfs.datanode.max.transfer.threads4096mapreduce.task.io.sort.mb
> 100mapreduce.reduce.merge.inmem.threshold1000dfs.namenode.handler.count
> 10hadoop.ssl.client.confssl-client.xmlyarn.resourcemanager.container.liveness-monitor.interval-ms600000mapreduce.client.completion.pollinterval5000yarn.nodemanager.vmem-pmem-ratio2.1yarn.app.mapreduce.client.max-retries3hadoop.ssl.enabledfalsefs.AbstractFileSystem.hdfs.implorg.apache.hadoop.fs.Hdfsmapreduce.tasktracker.reduce.tasks.maximum2mapreduce.reduce.input.buffer.percent0.0kfs.stream-buffer-size4096dfs.namenode.invalidate.work.pct.per.iteration0.32fdfs.bytes-per-checksum512dfs.replication3mapreduce.shuffle.ssl.file.buffer.size
> 65536dfs.permissions.enabledtruemapreduce.jobtracker.maxtasks.perjob
> -1dfs.datanode.use.datanode.hostnamefalsemapreduce.task.userlog.limit.kb
> 0dfs.namenode.fs-limits.max-directory-items0s3.client-write-packet-size
> 65536dfs.client.failover.sleep.max.millis15000mapreduce.job.maps
> 2dfs.namenode.fs-limits.max-component-length0mapreduce.map.output.compress
> falses3.blocksize67108864kfs.blocksize67108864dfs.namenode.edits.journal-plugin.qjournalorg.apache.hadoop.hdfs.qjournal.client.QuorumJournalManagerdfs.client.https.need-authfalseyarn.scheduler.minimum-allocation-mb128ftp.replication3mapreduce.input.fileinputformat.split.minsize0fs.s3n.block.size67108864yarn.i
> pc.rpc.classorg.apache.hadoop.yarn.ipc.HadoopYarnProtoRPCdfs.namenode.num.extra.edits.retained1000000hadoop.http.staticuser.userdr.whoyarn.nodemanager.localizer.cache.cleanup.interval-ms
> 600000mapreduce.job.jvm.numtasks1mapreduce.task.profile.maps
> 0-2mapreduce.shuffle.port8080mapreduce.jobtracker.http.address0.0.0.0:50030mapreduce.reduce.shuffle.merge.percent0.66
> mapreduce.task.skip.start.attempts2mapreduce.task.io.sort.factor10
> dfs.namenode.checkpoint.dirfile://${hadoop.tmp.dir}/dfs/namesecondarytfile.fs.input.buffer.size262144fs.s3.block.size67108864tfile.io.chunk.size1048576io.serializationsorg.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerializationyarn.resourcemanager.max-completed-applications10000mapreduce.jobhistory.principal
> jhs/_HOST@REALM.TLDmapreduce.job.end-notification.retry.interval1dfs.namenode.backup.address0.0.0.0:50100dfs.block.access.token.enablefalseio.seqfile.sorter.recordlimit1000000s3native.client-write-packet-size65536ftp.bytes-per-checksum512hadoop.security.group.mappingorg.apache.hadoop.security.ShellBasedUnixGroupsMappingdfs.client.file-block-storage-locations.timeout60mapre
> duce.job.end-notification.max.retry.interval5yarn.acl.enabletrue
> yarn.nm.liveness-monitor.expiry-interval-ms600000mapreduce.tasktracker.map.tasks.maximum2
> dfs.namenode.max.objects0dfs.namenode.delegation.token.max-lifetime604800000
> mapreduce.job.hdfs-servers${fs.defaultFS}yarn.application.classpath$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*dfs.datanode.hdfs-blocks-metadata.enabledtrueyarn.nodemanager.aux-services.mapreduce.shuffle.classorg.apache.hadoop.mapred.ShuffleHandlermapreduce.tasktracker.dns.nameserverdefault
> dfs.datanode.readahead.bytes4193404mapreduce.job.ubertask.maxreduces1
> dfs.image.compressfalsemapreduce.shuffle.ssl.enabledfalseyarn.log-aggregation-enablefalsemapreduce.tasktracker.report.address127.0.0.1:0mapreduce.tasktracker.http.threads40dfs.stream-buffer-size4096tfile.fs.output.buffer.size262144yarn.resourcemanager.am.max-retries1dfs.datanode.drop.cache.behind.writesfalsemapreduce.job.ubertask.enable
> falsehadoop.common.configuration.version0.23.0dfs.namenode.replication.work.m
> ultiplier.per.iteration2mapreduce.job.acl-modify-jobio.seqfile.local.dir${hadoop.tmp.dir}/io/localfs.s3.sleepTimeSeconds10mapreduce.client.output.filterFAILED
>
>
>


-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/
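
The "Failed to open HDFS file ... Error(255)" failure quoted above is generic: Impala only reports that the open failed, not why. One low-effort way to check the file independently of Impala is to ask the NameNode for its status over WebHDFS, which the quoted configuration enables (dfs.webhdfs.enabled is true). The sketch below is illustrative, not from the thread; it assumes the default NameNode HTTP port 50070 and only builds the probe URL, which can then be fetched with curl or a browser:

```python
from urllib.parse import urlparse

def webhdfs_status_url(hdfs_uri: str, http_port: int = 50070) -> str:
    """Translate an hdfs:// URI into the WebHDFS GETFILESTATUS URL for
    the same file, so it can be probed with any HTTP client."""
    parsed = urlparse(hdfs_uri)
    # WebHDFS is served from the NameNode's HTTP port
    # (dfs.namenode.http-address), not the RPC port in the hdfs:// URI.
    return (f"http://{parsed.hostname}:{http_port}"
            f"/webhdfs/v1{parsed.path}?op=GETFILESTATUS")

print(webhdfs_status_url(
    "hdfs://hadoop1.rad.wc.truecarcorp.com:8020"
    "/tc/clearbook_data_20120101.txt.gz"))
```

If GETFILESTATUS succeeds but reading the file (op=OPEN) fails for the user the impalad processes run as, the likely culprits are HDFS permissions or the short-circuit-read settings rather than the file itself.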

Re: Issue with running Impala

Posted by Brock Noland <br...@cloudera.com>.
Hi,

This question should go to the impala-user group, which you can subscribe to
here:

https://groups.google.com/a/cloudera.org/forum/?fromgroups#!forum/impala-user

Sorry for the confusion.

Brock

On Mon, Oct 29, 2012 at 8:17 PM, Subash D'Souza <sd...@truecar.com> wrote:

> I'm hoping this is the right place to post questions about Impala. I'm
> playing around with Impala and have it configured and running. I tried
> running a query, though, and it comes back with a very opaque error. Any
> help would be appreciated.
>
> Thanks
> Subash
>
> Here are the error and the relevant log excerpts:
> [hadoop4.rad.wc.truecarcorp.com:21000] > select * from clearbook2 limit 5;
> ERROR: Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
> ERROR: Invalid query handle
>
> My log files don't seem to give much information
>
> Impala State Server
>
>
>  I1029 18:01:01.906649 23286 impala-server.cc:1524] TClientRequest.queryOptions: TQueryOptions {
>   01: abort_on_error (bool) = false,
>   02: max_errors (i32) = 0,
>   03: disable_codegen (bool) = false,
>   04: batch_size (i32) = 0,
>   05: return_as_ascii (bool) = true,
>   06: num_nodes (i32) = 0,
>   07: max_scan_range_length (i64) = 0,
>   08: num_scanner_threads (i32) = 0,
>   09: max_io_buffers (i32) = 0,
>   10: allow_unsupported_formats (bool) = false,
>   11: partition_agg (bool) = false,
> }
> I1029 18:01:01.906776 23286 impala-server.cc:821] query(): query=select * from clearbook2 limit 5
> I1029 18:01:02.755319 23286 coordinator.cc:209] Exec() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.756422 23286 simple-scheduler.cc:159] SimpleScheduler assignment (data->backend):  (10.5.22.22:50010 -> 10.5.22.22:22000), (10.5.22.24:50010 -> 10.5.22.24:22000), (10.5.22.23:50010 -> 10.5.22.23:22000)
> I1029 18:01:02.756430 23286 simple-scheduler.cc:162] SimpleScheduler locality percentage 100% (3 out of 3)
> I1029 18:01:02.759310 23286 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.763690 23286 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.809578 23286 coordinator.cc:298] starting 3 backends for query 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.811485 23364 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 coord=10.5.22.24:22000 backend#=2
> I1029 18:01:02.811578 23364 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.815759 23364 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.957340 23537 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958176 23373 coordinator.cc:734] Cancel() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.958195 23373 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958205 23373 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958215 23373 data-stream-mgr.cc:97] cancelled stream: fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2 node_id=1
> I1029 18:01:02.958225 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a3 backend=10.5.22.22:22000
> I1029 18:01:02.958411 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 backend=10.5.22.24:22000
> I1029 18:01:02.958510 23364 impala-server.cc:1618] CancelPlanFragment(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958528 23364 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958539 23364 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958606 23373 coordinator.cc:366] Query id=5fa79b17f9a8474f:97b7e6fea1b688a1 failed because fragment id=5fa79b17f9a8474f:97b7e6fea1b688a4 failed.
> I1029 18:01:02.959193 23541 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.959215 23541 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.959609 23286 impala-server.cc:1406] ImpalaServer::get_state invalid handle
> I1029 18:01:02.960021 23286 impala-server.cc:1351] close(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960031 23286 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960042 23286 impala-server.cc:972] unknown query id: 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.961830 23541 data-stream-mgr.cc:177] DeregisterRecvr(): fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2, node=1
> I1029 18:01:02.962131 23269
>
>  Impala DataNode
>
> I1029 18:01:02.813395  9958 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4 coord=10.5.22.24:22000 backend#=1
> I1029 18:01:02.813586  9958 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.818050  9958 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a4
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.954059  9988 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.958689  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.958782  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962308  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962537  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (un
>
>  and here is the configuration of my datanodes
>
>
> Hadoop Configuration
>
> Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
> [The configuration page was pasted as a flattened HTML table and is unreadable here. The settings recoverable from the dump that are most relevant to HDFS access are:]
>
>   fs.defaultFS                                     hdfs://hadoop1.rad.wc.truecarcorp.com:8020
>   dfs.namenode.name.dir                            /home/data/1/dfs/nn
>   dfs.datanode.data.dir                            /home/data/1/dfs/dn,/home/data/2/dfs/dn,/home/data/3/dfs/dn
>   dfs.replication                                  3
>   dfs.permissions.enabled                          true
>   dfs.block.access.token.enable                    false
>   hadoop.security.authentication                   simple
>   dfs.client.read.shortcircuit                     true
>   dfs.datanode.hdfs-blocks-metadata.enabled        true
>   dfs.client.file-block-storage-locations.timeout  60
>   dfs.webhdfs.enabled                              true
>
>
>


-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/
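
A quick way to narrow down the "Failed to open HDFS file ... Error(255): Unknown error 255" seen in this thread is to rule out a corrupt archive before digging into Impala itself: pull one file down with `hadoop fs -get hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz .` and confirm it is a readable gzip stream end to end. A minimal, self-contained sketch of that check follows; the sample and truncated files it creates are throwaway stand-ins, not the real data.

```python
import gzip
import os
import tempfile

def is_valid_gzip(path, chunk_size=1 << 16):
    """Read `path` to the end as a gzip stream; return False on any corruption."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(chunk_size):
                pass
        return True
    except (OSError, EOFError):  # bad magic/header, or stream cut short
        return False

# Throwaway stand-in for a pulled-down clearbook_data_*.txt.gz file.
tmpdir = tempfile.mkdtemp()
good = os.path.join(tmpdir, "sample.txt.gz")
with gzip.open(good, "wt") as f:
    f.write("2012,sample,row\n")

# Simulate a truncated or partially uploaded file by chopping the stream.
bad = os.path.join(tmpdir, "truncated.txt.gz")
with open(good, "rb") as src, open(bad, "wb") as dst:
    dst.write(src.read()[:8])

print(is_valid_gzip(good), is_valid_gzip(bad))  # True False
```

If the file checks out locally, the problem is more likely on the access side (client configuration, permissions, or the short-circuit-read settings visible in the quoted configuration) than in the data.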


Re: Issue with running Impala

Posted by Brock Noland <br...@cloudera.com>.
Hi,

This question should go to the impala-user group which you can subscribe to
here:

https://groups.google.com/a/cloudera.org/forum/?fromgroups#!forum/impala-user

Sorry for the confusion.
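
Before you repost there, one quick way to localize the failure: the
configuration dump in your mail shows dfs.webhdfs.enabled=true, so you can try
opening one of the failing files over WebHDFS, which bypasses Impala entirely.
A minimal sketch (the helper name is ours; the URL layout is the standard
WebHDFS REST form, and 50070 is the NameNode HTTP port from your config):

```python
# Build the WebHDFS "OPEN" URL for an HDFS file, so the file can be
# fetched with curl or urllib and checked independently of Impala.

def webhdfs_open_url(namenode_http, path, user=None):
    """Return the WebHDFS OPEN URL for `path` (must start with '/')."""
    url = "http://%s/webhdfs/v1%s?op=OPEN" % (namenode_http, path)
    if user:
        # user.name is how WebHDFS identifies the caller when
        # hadoop.security.authentication = simple, as in your config.
        url += "&user.name=%s" % user
    return url

if __name__ == "__main__":
    # Host and path taken from the error message in this thread.
    print(webhdfs_open_url("hadoop1.rad.wc.truecarcorp.com:50070",
                           "/tc/clearbook_data_20120101.txt.gz",
                           user="impala"))
```

Fetching that URL with `curl -L` (WebHDFS redirects to a DataNode) should
stream the gzipped bytes; if that fails too, the problem is in HDFS access or
permissions rather than in Impala itself.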

Brock

On Mon, Oct 29, 2012 at 8:17 PM, Subash D'Souza <sd...@truecar.com> wrote:

> I'm hoping this is the right place to post questions about Impala. I'm
> playing around with Impala and have it configured and running. I tried
> running a query, though, and it comes back with a very opaque error. Any
> help would be appreciated.
>
> Thanks
> Subash
>
> Here are the error and log files for the same
> [hadoop4.rad.wc.truecarcorp.com:21000] > select * from clearbook2 limit 5;
> ERROR: Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
> ERROR: Invalid query handle
>
> My log files don't seem to give much information
>
> Impala State Server
>
>
>  I1029 18:01:01.906649 23286 impala-server.cc:1524] TClientRequest.queryOptions: TQueryOptions {
>   01: abort_on_error (bool) = false,
>   02: max_errors (i32) = 0,
>   03: disable_codegen (bool) = false,
>   04: batch_size (i32) = 0,
>   05: return_as_ascii (bool) = true,
>   06: num_nodes (i32) = 0,
>   07: max_scan_range_length (i64) = 0,
>   08: num_scanner_threads (i32) = 0,
>   09: max_io_buffers (i32) = 0,
>   10: allow_unsupported_formats (bool) = false,
>   11: partition_agg (bool) = false,
> }
> I1029 18:01:01.906776 23286 impala-server.cc:821] query(): query=select * from clearbook2 limit 5
> I1029 18:01:02.755319 23286 coordinator.cc:209] Exec() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.756422 23286 simple-scheduler.cc:159] SimpleScheduler assignment (data->backend):  (10.5.22.22:50010 -> 10.5.22.22:22000), (10.5.22.24:50010 -> 10.5.22.24:22000), (10.5.22.23:50010 -> 10.5.22.23:22000)
> I1029 18:01:02.756430 23286 simple-scheduler.cc:162] SimpleScheduler locality percentage 100% (3 out of 3)
> I1029 18:01:02.759310 23286 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.763690 23286 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.809578 23286 coordinator.cc:298] starting 3 backends for query 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.811485 23364 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 coord=10.5.22.24:22000 backend#=2
> I1029 18:01:02.811578 23364 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.815759 23364 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.957340 23537 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958176 23373 coordinator.cc:734] Cancel() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.958195 23373 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958205 23373 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958215 23373 data-stream-mgr.cc:97] cancelled stream: fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2 node_id=1
> I1029 18:01:02.958225 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a3 backend=10.5.22.22:22000
> I1029 18:01:02.958411 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 backend=10.5.22.24:22000
> I1029 18:01:02.958510 23364 impala-server.cc:1618] CancelPlanFragment(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958528 23364 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958539 23364 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958606 23373 coordinator.cc:366] Query id=5fa79b17f9a8474f:97b7e6fea1b688a1 failed because fragment id=5fa79b17f9a8474f:97b7e6fea1b688a4 failed.
> I1029 18:01:02.959193 23541 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.959215 23541 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.959609 23286 impala-server.cc:1406] ImpalaServer::get_state invalid handle
> I1029 18:01:02.960021 23286 impala-server.cc:1351] close(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960031 23286 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960042 23286 impala-server.cc:972] unknown query id: 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.961830 23541 data-stream-mgr.cc:177] DeregisterRecvr(): fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2, node=1
> I1029 18:01:02.962131 23269
>
>  Impala DataNode
>
> I1029 18:01:02.813395  9958 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4 coord=10.5.22.24:22000 backend#=1
> I1029 18:01:02.813586  9958 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.818050  9958 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a4
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224
>  null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
>  type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.954059  9988 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.958689  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.958782  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962308  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962537  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (un
>
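
(Inline: every failure above is on a .txt.gz file, so it is also worth ruling
out a corrupt gzip stream. After pulling one file out of HDFS, e.g.
`hadoop fs -get /tc/clearbook_data_20120101.txt.gz .`, run `gzip -t` on the
local copy. A self-contained sketch on a throwaway file, for illustration:)

```shell
# Demonstrates the gzip integrity check on a throwaway file; substitute
# the file fetched from HDFS for sample.txt.gz in practice.
set -e
printf 'a,b,c\n1,2,3\n' > sample.txt
gzip -f sample.txt                   # produces sample.txt.gz
gzip -t sample.txt.gz && echo "gzip stream OK"
gunzip -c sample.txt.gz | head -n 1  # first line of the decompressed data
rm -f sample.txt.gz
```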
>  and here is the configuration of my datanodes
>
>
> Hadoop Configuration
>
> Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
> [The configuration was posted as an HTML key/value table that the archive
> flattened into unreadable text. The HDFS and security settings most relevant
> here, restored to key = value form:]
>
> fs.defaultFS = hdfs://hadoop1.rad.wc.truecarcorp.com:8020
> dfs.namenode.name.dir = /home/data/1/dfs/nn
> dfs.datanode.data.dir = /home/data/1/dfs/dn,/home/data/2/dfs/dn,/home/data/3/dfs/dn
> dfs.datanode.data.dir.perm = 700
> dfs.replication = 3
> dfs.blocksize = 67108864
> dfs.permissions.enabled = true
> dfs.block.access.token.enable = false
> dfs.client.read.shortcircuit = true
> dfs.client.use.datanode.hostname = false
> dfs.datanode.use.datanode.hostname = false
> dfs.datanode.hdfs-blocks-metadata.enabled = true
> dfs.webhdfs.enabled = true
> dfs.datanode.address = 0.0.0.0:50010
> dfs.datanode.http.address = 0.0.0.0:50075
> dfs.namenode.http-address = 0.0.0.0:50070
> hadoop.security.authentication = simple
> hadoop.security.authorization = false
> hadoop.tmp.dir = /tmp/hadoop-${user.name}
>
>
>


-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/